text
stringlengths 59
500k
| subset
stringclasses 6
values |
---|---|
Combinatorial explosion
In mathematics, a combinatorial explosion is the rapid growth of the complexity of a problem due to how the combinatorics of the problem is affected by the input, constraints, and bounds of the problem. Combinatorial explosion is sometimes used to justify the intractability of certain problems.[1][2] Examples of such problems include certain mathematical functions, the analysis of some puzzles and games, and some pathological examples which can be modelled as the Ackermann function.
Examples
Latin squares
Main article: Latin square
A Latin square of order n is an n × n array with entries from a set of n elements with the property that each element of the set occurs exactly once in each row and each column of the array. An example of a Latin square of order three is given by,
123
231
312
A common example of a Latin square would be a completed Sudoku puzzle.[3] A Latin square is a combinatorial object (as opposed to an algebraic object) since only the arrangement of entries matters and not what the entries actually are. The number of Latin squares as a function of the order (independent of the set from which the entries are drawn) (sequence A002860 in the OEIS) provides an example of combinatorial explosion as illustrated by the following table.
nThe number of Latin squares of order n
11
22
312
4576
5161,280
6812,851,200
761,479,419,904,000
8108,776,032,459,082,956,800
95,524,751,496,156,892,842,531,225,600
109,982,437,658,213,039,871,725,064,756,920,320,000
11776,966,836,171,770,144,107,444,346,734,230,682,311,065,600,000
Sudoku
Main article: Mathematics of Sudoku
A combinatorial explosion can also occur in some puzzles played on a grid, such as Sudoku.[2] A Sudoku is a type of Latin square with the additional property that each element occurs exactly once in sub-sections of size √n × √n (called boxes). Combinatorial explosion occurs as n increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved, as illustrated in the following table.
nThe number of Sudoku grids of order n
(boxes are size√n×√n)
The number of Latin squares of order n
(for comparison)
11 1
4288 [4]576
96,670,903,752,021,072,936,960 [4][5]5,524,751,496,156,892,842,531,225,600
(n = 9 is the commonly played 9 × 9 Sudoku. The puzzle does not include grids where √n is irrational.)
Games
One example in a game where combinatorial complexity leads to a solvability limit is in solving chess (a game with 64 squares and 32 pieces). Chess is not a solved game. In 2005 all chess game endings with six pieces or fewer were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.[6][7]
Furthermore, the prospect of solving larger chess-like games becomes more difficult as the board-size is increased, such as in large chess variants, and infinite chess.[8]
Computing
Combinatorial explosion can occur in computing environments in a way analogous to communications and multi-dimensional space. Imagine a simple system with only one variable, a boolean called A. The system has two possible states, A = true or A = false. Adding another boolean variable B will give the system four possible states, A = true and B = true, A = true and B = false, A = false and B = true, A = false and B = false. A system with n booleans has 2n possible states, while a system of n variables each with Z allowed values (rather than just the 2 (true and false) of booleans) will have Zn possible states.
The possible states can be thought of as the leaf nodes of a tree of height n, where each node has Z children. This rapid increase of leaf nodes can be useful in areas like searching, since many results can be accessed without having to descend very far. It can also be a hindrance when manipulating such structures.
A class hierarchy in an object-oriented language can be thought of as a tree, with different types of object inheriting from their parents. If different classes need to be combined, such as in a comparison (like A < B) then the number of possible combinations which may occur explodes. If each type of comparison needs to be programmed then this soon becomes intractable for even small numbers of classes. Multiple inheritance can solve this, by allowing subclasses to have multiple parents, and thus a few parent classes can be considered rather than every child, without disrupting any existing hierarchy.
An example is a taxonomy where different vegetables inherit from their ancestor species. Attempting to compare the tastiness of each vegetable with the others becomes intractable since the hierarchy only contains information about genetics and makes no mention of tastiness. However, instead of having to write comparisons for carrot/carrot, carrot/potato, carrot/sprout, potato/potato, potato/sprout, sprout/sprout, they can all multiply inherit from a separate class of tasty whilst keeping their current ancestor-based hierarchy, then all of the above can be implemented with only a tasty/tasty comparison.
Arithmetic
Suppose we take the factorial of n:
$n!=n\cdot (n-1)\cdot \ldots \cdot 2\cdot 1$
Then 1! = 1, 2! = 2, 3! = 6, and 4! = 24. However, we quickly get to extremely large numbers, even for relatively small n. For example, 100! ≈ 9.33262154×10157, a number so large that it cannot be displayed on most calculators, and vastly larger than the estimated number of fundamental particles in the observable universe.[9]
Communication
In administration and computing, a combinatorial explosion is the rapidly accelerating increase in communication lines as organizations are added in a process. (This growth is often casually described as "exponential" but is actually polynomial.)
If two organizations need to communicate about a particular topic, it may be easiest to communicate directly in an ad hoc manner—only one channel of communication is required. However, if a third organization is added, three separate channels are required. Adding a fourth organization requires six channels; five, ten; six, fifteen; etc.
In general, it will take $l={\frac {n(n-1)}{2}}={n \choose 2}$ communication lines for n organizations, which is just the number of 2-combinations of n elements (see also Binomial coefficient).[10]
The alternative approach is to realize when this communication will not be a one-off requirement, and produce a generic or intermediate way of passing information. The drawback is that this requires more work for the first pair, since each must convert its internal approach to the common one, rather than the superficially easier approach of just understanding the other.
See also
• Birthday problem
• Exponential growth
• Metcalfe's law
• Curse of dimensionality
• Information explosion
• Intractability (complexity)
• Second half of the chessboard
References
1. Krippendorff, Klaus. "Combinatorial Explosion". Web Dictionary of Cybernetics and Systems. PRINCIPIA CYBERNETICA WEB. Retrieved 29 November 2010.
2. http://intelligence.worldofcomputing/combinatorial-explosion Combinatorial Explosion.
3. All completed puzzles are Latin squares, but not all Latin squares can be completed puzzles since there is additional structure in a Sudoku puzzle.
4. Sloane, N. J. A. (ed.). "Sequence A107739 (Number of (completed) sudokus (or Sudokus) of size n^2 X n^2)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 14 April 2017.
5. "Sudoku enumeration problems". Afjarvis.staff.shef.ac.uk. Retrieved 20 October 2013.
6. http://chessok.com/Lomonosov Endgame Tablebases Lomonosov Endgame Tablebases
7. "7-piece-endgame-tablebase (chess)". Stack Exchange.
8. Aviezri Fraenkel; D. Lichtenstein (1981), "Computing a perfect strategy for n×n chess requires time exponential in n", J. Combin. Theory Ser. A, 31 (2): 199–214, doi:10.1016/0097-3165(81)90016-9
9. "The Universe By Numbers - The Physics of the Universe". www.physicsoftheuniverse.com. Retrieved 2021-08-27.
10. Benson, Tim. (2010). Principles of health interoperability HL7 and SNOMED. New York: Springer. p. 23. ISBN 9781848828032. OCLC 663097524.
| Wikipedia |
\begin{document}
\title{ Spin contamination and noncollinearity in general complex Hartree-Fock wave functions }
\author{Patrick Cassam-Chena\"\i}
\institute{Patrick Cassam-Chena\"\i \at Univ. Nice Sophia Antipolis, CNRS, LJAD, UMR 7351, 06100 Nice, France\\ Tel.: +33-4-92076260\\ Fax: +33-4-93517974\\ \email{[email protected]} }
\date{\today}
\maketitle
\begin{abstract} An expression for the square of the spin operator expectation value, $\langle S^2 \rangle$, is obtained for a general complex Hartree-Fock (GCHF) wave function and decomposed into four contributions: The main one whose expression is formally identical to the restricted (open-shell) Hartree-Fock expression. A spin contamination one formally analogous to that found for spin unrestricted Hartree-Fock wave functions. A noncollinearity contribution related to the fact that the wave function is not an eigenfunction of the spin-$S_z$ operator. A perpendicularity contribution related to the fact that the spin density is not constrained to be zero in the xy-plane. All these contributions are evaluated and compared for the H$_2$O$^+$ system. The optimization of the collinearity axis is also considered. \keywords{ spin contamination \and collinearity \and general complex Hartree-Fock}
\end{abstract}
\section{Introduction} Particle-independent models based on single Slater determinant wave functions, have enjoyed considerable interest in quantum chemistry, since the pioneering works of Hartree, Slater and Fock \cite{Hartree28,Slater29,Fock30}.
When a quantum system is described by a spin-free Hamiltonian, which obviously commutes with the spin operators $S_z$ and $S^2$, a spin-symmetry respectful way of using the Hartree-Fock method consists in:\\ 1) Using spin-orbitals of pure $\alpha$- or $\beta$-spin, so that the HF optimized Slater determinant is an eigenfunction of $S_z$;\\ 2) Imposing the spin-equivalence restriction \cite{Berthier64}, which means that paired $\alpha$- and $\beta$-spin-orbitals are formed from the same set of linearly independent orbitals. We have proved mathematically \cite{Cassam92,Cassam92-cras,Cassam93-ijqc} that this additional constraint is a necessary and sufficient condition to insure that a Slater determinantal wave function is an eigenfunction of the spin operator $S^2$. In other words, we have shown that relaxing the $S^2$-symmetry constraint exactly amounts to allow different ``paired orbitals'', in the sense of Refs. \cite{Amos61,Lowdin62}, to have different spins. This equivalence enabled us to characterize the variational space explored by the restricted open-shell Hartree-Fock (ROHF) method \cite{Cassam94}, which precisely consists in optimizing a Slater determinant subject to constraints 1) and 2) (plus spatial-symmetry constraints if any) \cite{Roothaan60}. The equivalence was also discovered independently \cite{Andrews91} by optimizing a Slater determinant with a Lagrange multiplier, enforcing $\langle S^2\rangle$ to be arbitrarily close to the ROHF value, instead of applying the spin-equivalence restriction. Not surprisingly, the determinant was approaching the ROHF solution.
Similar to the spin-free case is the ``complex-free'' one: When a quantum system is described by a real Hamiltonian, which obviously commutes with complex conjugation, one can restrict oneself to the calculation of real eigenfunctions. Then, it is also possible to employ only real spin-orbitals to construct the HF Slater determinant \cite{Brandas68}. (However, difficulties may occur when the symmetry group of the molecule cannot be represented over real numbers, and nonetheless, one wishes the spin-orbitals to be adapted to spatial-symmetry).
However, it has been proposed by various authors to relax some or all of the above-mentioned constraints, to gain variational freedom. For example, the different orbitals for different spins method (DODS) of Refs. \cite{Berthier54,Pople54}, (which is usually just called ``unrestricted Hartree-Fock'' (UHF), but in this paper we use ``DODS'' to avoid confusions), relaxes the spin-equivalence restriction, hence the HF solution is no longer an eigenfunction of $S^2$. Other authors \cite{Bunge67,Lefebvre67,Lunell72} have advocated the use of general spin-orbitals, mixing $\alpha$-spin and $\beta$-spin parts, in conjunction with the use of projectors \cite{Lowdin55}.
Along the same line of thought, the use of complex spin-orbitals has been proposed \cite{Lefebvre67,Hendekovic74} to increase variational freedom in the case of real Hamiltonian. Prat and Lefebvre went a step further with so-called ``hypercomplex'' spin-orbitals to construct Slater determinants of arbitrary accuracy \cite{Prat69}. However, the coefficients of their spin-orbitals were elements of a Clifford algebra of dimension $2^{2n}$, that was not a normed division algebra, also known as Cayley algebra, for arbitrary values of $n$. This was unfortunate, since such a structure appears to be a minimal requirement for a quantum formalism, if, for example, Born's interpretation of the wave function is to hold firmly. For $n=1$, the Clifford algebra of Prat et al. was actually the non-commutative field of Quaternions, therefore, \textit{a fortiori}, a normed division algebra. The only larger normed division algebra is the Octonion algebra. It is a Clifford algebra of dimension $8$, which has also been proposed in a quantum mechanical context \cite{Penney68}, but this algebra is neither commutative nor associative. The lack of these properties rises difficulties for its use for multipartite quantum systems, nevertheless these difficulties can be overcome by keeping the product of octonion coefficients in the form of a tensor product. So, octonion-unrestricted HF appears to be the largest Cilfford algebra-unrestricted single determinantal method that can be considered in the spirit of Prat and Lefebvre's proposal. However, octonions seem incompatible with the desirable requirement that the algebra of quantum observables be what is now called a formally real Jordan algebra \cite{Jordan34} acting on a vector space of arbitrarily large dimension. Octonions are also ruled out by the requirement of orthomodularity in infinite dimension according to Sol\`er's theorem \cite{Soler95,Holland95}, which restricts quantum Hilbert spaces to be real, complex or at most quaternionic.
The first HF molecular calculations with general complex spin-orbitals, without projecting out the symmetry-breaking part of the wave function, are maybe those of Ref. \cite{Mayer93}. It was found on the BH molecule around its equilibrium geometry that the general complex Hartree-Fock (GCHF) energy was indeed lower than the DODS one, which itself was lower than the restricted Hartree-Fock (RHF) solution. So necessarily, the corresponding GCHF wave functions had $S^2$-spin contamination and $S_z$-spin contamination, that is to say, the expectation values of these operators were different from $0$, the value expected for a singlet ground state. (It is not clear whether complex numbers were used for this molecule, but the authors did mention that they performed complex calculations for $2$-electron systems.)
Relaxing the ``$S_z$-constraint'' hence the ``collinearity constraint'', becomes perfectly legitimate
when hyperfine or spin-orbit couplings are considered, since the operator $S_z$ no longer commutes with the Hamiltonian.
As a matter of fact, real physical systems do exhibit either light \cite{Cassam02-jcp,Cassam12b-jcp} or strong \cite{Coey87,Libby91} noncollinearity of their spin densities. Similarly, the use of complex spin-orbitals is natural, when considering relativistic corrections resulting in a complex Hamiltonian operator. So, in such a context, one should use no less than general complex spin-orbitals in HF calculations \cite{Jayatilaka98}. The ``spin-same-orbit'' coupling term used in these calculations does not commute with the $S^2$-operator. Therefore, one cannot strictly speak of ``$S^2$-spin contamination'' in relativistic GCHF wave functions. However, calculating the expectation value of $S^2$, a \textit{bona fide} quantum observable, can still provide valuable physical information about the system.
A general expression for the expectation value of $S^2$ has been obtained in the DODS case \cite{Amos61}, and has served as a measure of $S^2$-spin contamination. However, as far as we are aware, no such formula has been published in the case of a GCHF wave function. This gap will be filled in the next section.
Studying departure from collinearity is more difficult because of arbitrariness in the quantification axis. One possible way to overcome the difficulty would be to apply an external magnetic field to fix the $z$-axis but small enough not to perturb the GCHF solution. However, an elegant alternative has been proposed recently by Small et al. \cite{Small2015}. It is based on studying the lowest eigenvalue of a $(3\times 3)$-matrix built from expectation values of spin operator components and their products. In the GCHF case, the authors provided the expressions required to compute the matrix elements in
a compact form. In the third section, we give a more extended formula
in terms of molecular orbital overlap matrix elements. We also illustrate the connections between spin contamination, noncollinearity and its correlative: ``perpendicularity'' on the H$_2$O$^+$ cation example. We sum up our conclusions in the last section.
\section{Spin contamination in GCHF}
A General Complex Hartree Fock (GCHF) wave function
\begin{equation}
\Phi_{GCHF}=\phi_1\wedge\cdots\wedge\phi_{N_e} \label{GCHF-wf} \end{equation} is the antisymmetrized product (or wedge product, denoted by $\wedge$) of orthonormal spinorbitals, or ``two-component spinors'',
\begin{equation} \phi_i= \left( \begin{array}{c} \phi_{i \alpha} \\ \phi_{i \beta} \end{array}\right), \label{MOi2c} \end{equation}
\begin{equation}
\langle \phi_i|\phi_j\rangle=\delta_{i,j}, \label{orthonormal} \end{equation}
where the scalar product $\langle \cdot |\cdot\rangle$ means integration over space variables and summation (i.e. taking the trace) over spin variables: $\langle \phi_i|\phi_j\rangle=\langle \phi_{i \alpha}|\phi_{j \alpha}\rangle+\langle \phi_{i \beta}|\phi_{j \beta}\rangle$, (where the same bracket symbol is used for the scalar product between orbital parts). We define the ``number of $\alpha$-spin electrons'' (respectively ``number of $\beta$-spin electrons'') as $N_\alpha:= \sum\limits_{i=1}^{N_e}\langle\phi_{i \alpha}|\phi_{i \alpha}\rangle$ (respectively, $N_\beta:= \sum\limits_{i=1}^{N_e}\langle\phi_{i \beta}|\phi_{i \beta}\rangle$). It is the expectation value of the projection operator on the $\alpha$- (respectively $\beta$-) one-electron Hilbert subspace (more rigorously speaking, the operator induced onto the $n$-electron Hilbert space by this one-electron projection operator). Note that these two numbers need not be integer numbers, however their sum is an integer: $N_\alpha+N_\beta=N_e$.
Let us work out the expectation value of the spin operator, \begin{equation} S^2=S_z^2+\frac{1}{2}(S^+S^- + S^-S^+), \label{S2} \end{equation} on a general GCHF wave function.\\ The action of $S_z$ is given by, \begin{equation} S_z \Phi_{GCHF}=\frac{1}{2}\sum\limits_{i=1}^{N_e}\hat{\Phi}_{GCHF}^{i}, \label{Sz} \end{equation} where, \begin{equation} \hat{\Phi}_{GCHF}^{i}=\phi_1\wedge\cdots\wedge\phi_{i-1}\wedge\hat{\phi}_{i}\wedge\phi_{i+1}\wedge\cdots\wedge\phi_{N_e}, \label{hat-i} \end{equation} and, \begin{equation} \hat{\phi}_{i}= \left( \begin{array}{c} +\phi_{i \alpha} \\ -\phi_{i \beta} \end{array}\right). \label{MO-hat-i} \end{equation} Note that, \begin{equation}
\langle \hat{\phi}_{i}|\hat{\phi}_{j}\rangle=\langle \phi_i|\phi_j\rangle=\delta_{i,j}. \label{orthonorm-hat} \end{equation} So, the expectation value of $S_z$ is
\begin{eqnarray}
\langle\Phi_{GCHF}|S_z|\Phi_{GCHF}\rangle=\frac{1}{2}\sum\limits_{i=1}^{N_e}\langle \Phi_{GCHF}| \hat{\Phi}_{GCHF}^{i}\rangle=\frac{1}{2}\sum\limits_{i=1}^{N_e}\langle \phi_{i}| \hat{\phi}_{i}\rangle=\frac{1}{2}\sum\limits_{i=1}^{N_e}\left(\langle \phi_{i \alpha}|\phi_{i \alpha}\rangle-\langle \phi_{i \beta}|\phi_{i \beta}\rangle\right)=\frac{N_\alpha-N_\beta}{2}.\nonumber\\ \label{Sz-expect} \end{eqnarray}
and that of $S_z^2$: \begin{eqnarray}
\lefteqn{\langle\Phi_{GCHF}|S_z^2|\Phi_{GCHF}\rangle=\langle S_z\Phi_{GCHF}|S_z\Phi_{GCHF}\rangle}\nonumber\\
&=\frac{1}{4}\sum\limits_{i,j=1}^{N_e}\langle \hat{\Phi}_{GCHF}^{i}| \hat{\Phi}_{GCHF}^{j}\rangle&\nonumber\\
&=\frac{1}{4}\left(\sum\limits_{i=1}^{N_e}\langle \hat{\Phi}_{GCHF}^{i}| \hat{\Phi}_{GCHF}^{i}\rangle+\sum\limits_{\stackrel{i,j=1}{i\neq j}}^{N_e}\langle \hat{\Phi}_{GCHF}^{i}| \hat{\Phi}_{GCHF}^{j}\rangle\right)&\nonumber\\
&=\frac{1}{4}\sum\limits_{i=1}^{N_e}\left(\langle \hat{\phi}_{i}| \hat{\phi}_{i}\rangle+\sum\limits_{\stackrel{j=1}{j\neq i}}^{N_e}(-1)\arrowvert\langle \hat{\phi}_{i}| \phi_{j}\rangle\arrowvert^2+\langle \hat{\phi}_{i}| \phi_{i}\rangle\langle \phi_{j}| \hat{\phi}_{j}\rangle\right)&\nonumber\\
&=\frac{1}{4}\left(N_e+\sum\limits_{\stackrel{i,j=1}{i\neq j}}^{N_e}(-1)\arrowvert\langle\phi_{i \alpha}| \phi_{j \alpha}\rangle-\langle\phi_{i \beta}| \phi_{j \beta}\rangle\arrowvert^2+ \left(\langle\phi_{i \alpha}| \phi_{i \alpha}\rangle-\langle\phi_{i \beta}| \phi_{i \beta}\rangle\right)\left(\langle \phi_{j\alpha}| \phi_{j \alpha}\rangle-\langle \phi_{j \beta}| \phi_{j \beta}\rangle\right)\right)&\nonumber\\
&=\frac{1}{4}\left(N_e+\sum\limits_{i,j=1}^{N_e} \left(\langle\phi_{i \alpha}| \phi_{i \alpha}\rangle-\langle\phi_{i \beta}| \phi_{i \beta}\rangle\right)\left(\langle \phi_{j\alpha}| \phi_{j \alpha}\rangle-\langle \phi_{j \beta}| \phi_{j \beta}\rangle\right)-\arrowvert\langle\phi_{i \alpha}| \phi_{j \alpha}\rangle-\langle\phi_{i \beta}| \phi_{j \beta}\rangle\arrowvert^2\right) &\nonumber\\
&=\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}\right)^2+\frac{1}{4}\left(N_e-\sum\limits_{i,j=1}^{N_e} \arrowvert\langle\phi_{i \alpha}| \phi_{j \alpha}\rangle-\langle\phi_{i \beta}| \phi_{j \beta}\rangle\arrowvert^2\right) .& \label{Sz2} \end{eqnarray}
This equation reduces to $\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}\right)^2$ in the case of a DODS wave function. So, the second term on the right-hand side (rhs), which is $(\langle\Phi_{GCHF}|S_z^2|\Phi_{GCHF}\rangle-\langle\Phi_{GCHF}|S_z|\Phi_{GCHF}\rangle^2)$ is directly related to relaxation of the $S_z$-constraint and will be called the ``$z$-noncollinearity'' contribution. Note, however, that for a GCHF wave function, the first term on the rhs does not necessarily correspond to an eigenvalue of $S_z^2$, according to the definition of $N_\alpha$ and $N_\beta$.\\ The action of $S^+$ is given by, \begin{equation} S^+ \Phi_{GCHF}=\sum\limits_{i=1}^{N_e}\acute{\Phi}_{GCHF}^{i}, \label{S+} \end{equation} where, \begin{equation} \acute{\Phi}_{GCHF}^{i}=\phi_1\wedge\cdots\wedge\phi_{i-1}\wedge\acute{\phi}_{i}\wedge\phi_{i+1}\wedge\cdots\wedge\phi_{N_e}, \label{i+} \end{equation} and, \begin{equation} \acute{\phi}_{i}= \left( \begin{array}{c} +\phi_{i \beta} \\ 0 \end{array}\right). \label{MO-i+} \end{equation} Similarly, the action of $S^-$ is given by, \begin{equation} S^- \Phi_{GCHF}=\sum\limits_{i=1}^{N_e}\grave{\Phi}_{GCHF}^{i}, \label{S-} \end{equation} where, \begin{equation} \grave{\Phi}_{GCHF}^{i}=\phi_1\wedge\cdots\wedge\phi_{i-1}\wedge\grave{\phi}_{i}\wedge\phi_{i+1}\wedge\cdots\wedge\phi_{N_e}, \label{i-} \end{equation} and, \begin{equation} \grave{\phi}_{i}= \left( \begin{array}{c} 0 \\ +\phi_{i \alpha} \end{array}\right). \label{MO-i-} \end{equation} So, the expectation value of $S^-S^+$ is,
\begin{eqnarray}
\lefteqn{ \langle\Phi_{GCHF}|S^-S^+|\Phi_{GCHF}\rangle=\langle S^+\Phi_{GCHF}|S^+\Phi_{GCHF}\rangle }\nonumber\\
&=\sum\limits_{i,j=1}^{N_e}\langle \acute{\Phi}_{GCHF}^{i}| \acute{\Phi}_{GCHF}^{j}\rangle&\nonumber\\
&=\sum\limits_{i=1}^{N_e}\langle \acute{\Phi}_{GCHF}^{i}| \acute{\Phi}_{GCHF}^{i}\rangle + \sum\limits_{\stackrel{i,j=1}{i\neq j}}^{N_e} \langle \acute{\Phi}_{GCHF}^{i}| \acute{\Phi}_{GCHF}^{j}\rangle &\nonumber\\
&=\sum\limits_{i=1}^{N_e}\left(\langle \acute{\phi}_{i}| \acute{\phi}_{i}\rangle+\sum\limits_{\stackrel{j=1}{j\neq i}}^{N_e}(-1)\arrowvert\langle \acute{\phi}_{i}| \phi_{j}\rangle\arrowvert^2+\langle \acute{\phi}_{i}| \phi_{i}\rangle\langle \phi_{j}| \acute{\phi}_{j}\rangle\right)&\nonumber\\
&=\sum\limits_{i=1}^{N_e}\left(\langle\phi_{i \beta}| \phi_{i \beta}\rangle+\sum\limits_{\stackrel{j=1}{j\neq i}}^{N_e}(-1)\arrowvert\langle\phi_{i \beta}| \phi_{j \alpha}\rangle\arrowvert^2+ \langle\phi_{i \beta}| \phi_{i \alpha}\rangle\langle \phi_{j\alpha}| \phi_{j \beta}\rangle\right)&\nonumber\\
&=N_{\beta}+\sum\limits_{i,j=1}^{N_e} \langle\phi_{i \beta}| \phi_{i \alpha}\rangle\langle \phi_{j\alpha}| \phi_{j \beta}\rangle- \langle\phi_{i \beta}| \phi_{j \alpha}\rangle\langle\phi_{j \alpha}| \phi_{i \beta}\rangle.& \label{S-S+} \end{eqnarray}
Similarly, the expectation value of $S^+S^-$ is, \begin{eqnarray}
\lefteqn{ \langle\Phi_{GCHF}|S^+S^-|\Phi_{GCHF}\rangle=\langle S^-\Phi_{GCHF}|S^-\Phi_{GCHF}\rangle }\nonumber\\
&=\sum\limits_{i=1}^{N_e}\left(\langle \grave{\phi}_{i}| \grave{\phi}_{i}\rangle+\sum\limits_{\stackrel{j=1}{j\neq i}}^{N_e}(-1)\arrowvert\langle \grave{\phi}_{i}| \phi_{j}\rangle\arrowvert^2+\langle \grave{\phi}_{i}| \phi_{i}\rangle\langle \phi_{j}| \grave{\phi}_{j}\rangle\right)&\nonumber\\
&=\sum\limits_{i=1}^{N_e}\left(\langle\phi_{i \alpha}| \phi_{i \alpha}\rangle+\sum\limits_{\stackrel{j=1}{j\neq i}}^{N_e}(-1)\arrowvert\langle\phi_{i \alpha}| \phi_{j \beta}\rangle\arrowvert^2+ \langle\phi_{i \alpha}| \phi_{i \beta}\rangle\langle \phi_{j\beta}| \phi_{j \alpha}\rangle\right)&\nonumber\\
&=N_{\alpha}+\sum\limits_{i,j=1}^{N_e} \langle\phi_{i \alpha}| \phi_{i \beta}\rangle\langle \phi_{j\beta}| \phi_{j \alpha}\rangle- \langle\phi_{i \alpha}| \phi_{j \beta}\rangle\langle\phi_{j \beta}| \phi_{i \alpha}\rangle.& \label{S+S-} \end{eqnarray} Using Eq.(\ref{S2}) and putting together Eqs.(\ref{Sz2}), (\ref{S-S+}) and (\ref{S+S-}), one obtains the expectation value of $S^2$, \begin{eqnarray}
\lefteqn{ \langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle=\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}\right)^2+\frac{N_\alpha}{2}+\frac{N_\beta}{2}+\frac{1}{4}\left(N_e-\sum\limits_{i,j=1}^{N_e} \arrowvert\langle\phi_{i \alpha}| \phi_{j \alpha}\rangle-\langle\phi_{i \beta}| \phi_{j \beta}\rangle\arrowvert^2\right)}\nonumber\\
&+\sum\limits_{i,j=1}^{N_e} \langle\phi_{i \alpha}| \phi_{i \beta}\rangle\langle \phi_{j\beta}| \phi_{j \alpha}\rangle- \langle\phi_{i \alpha}| \phi_{j \beta}\rangle\langle\phi_{j \beta}| \phi_{i \alpha}\rangle.& \label{expect-S2} \end{eqnarray} The expression reduces to the known formula in the case of a DODS wave function. Assuming, without loss of generality, that $N_\alpha\geq N_\beta$, we rewrite Eq. (\ref{expect-S2}) as, \begin{eqnarray}
\lefteqn{ \langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle=\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}\right)\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}+1\right)+\frac{1}{4}\left(N_e-\sum\limits_{i,j=1}^{N_e} \arrowvert\langle\phi_{i \alpha}| \phi_{j \alpha}\rangle-\langle\phi_{i \beta}| \phi_{j \beta}\rangle\arrowvert^2\right)}\nonumber\\
&+\left(N_\beta-\sum\limits_{i,j=1}^{N_e} \langle\phi_{i \alpha}| \phi_{j \beta}\rangle\langle\phi_{j \beta}| \phi_{i \alpha}\rangle\right) + \lvert\sum\limits_{i=1}^{N_e}\langle \phi_{i\beta}| \phi_{i\alpha}\rangle\rvert^2.& \label{spin-contam} \end{eqnarray} In this formula we identify four contributions: The first term is formally identical to the ROHF expression also found in the DODS case. However, care must be taken that it is actually different, because the numbers of $\alpha$- and $\beta$-electrons are not good quantum numbers in the GCHF case. The second term on the first line is the ``$z$-noncollinearity'' contribution. The third term on the second line is formally analogous to the ``spin contamination'' of a DODS wave function as defined in \cite{Cassam93-ijqc,Amos61}. Finally, the last term on the second line is the square of the expectation value of the lowering or raising operator: \begin{eqnarray}
\lvert\sum\limits_{i=1}^{N_e}\langle \phi_{i\beta}| \phi_{i\alpha}\rangle\rvert^2=\lvert\langle\Phi_{GCHF}|S^+|\Phi_{GCHF}\rangle\rvert^2=\lvert\langle\Phi_{GCHF}|S^-|\Phi_{GCHF}\rangle\rvert^2. \label{nonperp} \end{eqnarray} A non zero contribution of this term can only arise from the release of the $S_z$-constraint, which allows for the $\alpha$- and $\beta$-components of a given, general spin-orbital to be both non zero. But it originates from $S^+S^-$ and $S^-S^+$, and is maximal when $\phi_{i\beta}=\exp(\imath\theta)\phi_{i\alpha}$ for all i, that is to say when the $\phi_i$'s are eigenfunctions of $cos\theta S_x + sin\theta S_y$ for some angle $\theta$. It is related to the emergence of a non-zero spin density in the $x,y$-plane, correlatively to the loss of $z$-collinearity. We tentatively call this term the ``$x,y$-perpendicularity'' contribution.
The present formulas have been implemented in the code TONTO \cite{Tonto} and applied in a recent article (third column of Tab. 4 in \cite{Bucinsky15}). Let us discuss further the different contributions to $S^2$ for a $H_2O^+$ GCHF calculation similar to that reported in \cite{Bucinsky15}. The $z$-quantification axis was the axis perpendicular to the plane of the molecule. The results, see Tab. \ref{tab-contam}, shows that the main contribution to $\langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle$ beside the reference expression (first term in eq.(\ref{spin-contam})) is the so-called spin contamination contribution (we set $\hbar=1$ throughout the paper). The $x,y$-perpendicularity and $z$-noncollinearity contributions are of the same order of magnitude and more than one order of magnitude smaller. Added to $\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}\right)\left(\frac{N_\alpha}{2}-\frac{N_\beta}{2}+1\right)$, they almost make up the reference value of $+0.75$. So, the spin contamination value of $0.007033$ amounts almost exactly to the difference between the exact expectation value $\langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle$ and this reference ``ROHF value''. This demonstrates that, in Table 4 of Ref.\cite{Bucinsky15}, the equality of the entries in column 2 (reference ROHF value plus spin contamination term) and column 4 (our $\langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle$ value) does not imply no noncollinearity. In contrast, if for a given line of the table, these two quantities differ, then necessarily there will be some noncollinearity in the corresponding GCHF wave function. This can be shown \textit{reductio ad absurdum}. Suppose that the $z$-collinearity constraint is fullfiled, then $N_\alpha$ and $N_\beta$ will be good quantum numbers and the first term in our expression of $\langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle$ will be equal to the ROHF reference value. The spin contamination contribution being included in both quantities, the difference between them must arise from the $x,y$-perpendicularity and $z$-noncollinearity contributions. One at least of the contributions arising from the release of the collinearity constraint must be non zero, hence a contradiction. This hints that the following systems $Cl$, $HCl^+$, $Fe$, $Cu$, $Cu^{2+}$ and $[OsCl_5(Hpz)]^-$ reported in Table 4 of Ref.\cite{Bucinsky15} would present stronger non collinearity than $H_2O^+$.
\section{Collinearity in GCHF}
In the previous section, we have encountered a $z$-(non)collinearity measure, $col_z:=(\langle S_z^2\rangle-\langle S_z \rangle^2)$. This quantity can be generalized to an arbitrary quantization direction defined by a unit vector $\vec{u}=\left( \begin{array}{c} u_x \\ u_y\\ u_z \end{array}\right) $ of the unit sphere $\mathcal{S}^2$ of $\mathbb{R}^3$ by replacing $S_z$ by $\vec{u}\cdot \vec{S}=\sum\limits_{\mu\in\{x,y,z\}}u_\mu S_\mu$. Then, $\vec{u}$-(non)collinearity is measured by: \begin{equation} col(\vec{u}):=\sum\limits_{\mu,\nu\in\{x,y,z\}}u_\mu u_\nu(\langle S_\mu S_\nu\rangle-\langle S_\mu \rangle\langle S_\nu \rangle). \end{equation} Small et al. \cite{Small2015} defined a (non)collinearity measure by: \begin{equation} col:=\min\limits_{\vec{u}\in\mathcal{S}^2}\ col(\vec{u}), \end{equation} which corresponds to the lowest eigenvalue of the matrix $A$ whose elements are given by, \begin{equation} A_{\mu\nu}=\Re(\langle S_\mu S_\nu\rangle)-\langle S_\mu \rangle\langle S_\nu \rangle, \end{equation} where $\Re(z)$ is the real part of $z$. The associated eigenvector gives the optimal collinearity direction. Setting
$^x\tilde{\phi}=\frac{1}{2}(\acute{\phi}+\grave{\phi})$, $^y\tilde{\phi}=\frac{-\imath}{2}(\acute{\phi}-\grave{\phi})$ and $^z\tilde{\phi}=\hat{\phi}$, we have in this notation, \begin{equation}
\forall \mu, \nu \in \{x,y,z\}\qquad A_{\mu\nu}=\delta_{\mu\nu}\frac{N_e}{4}-\sum\limits_{i,j=1}^{N_e} \langle^\mu\tilde{\phi}_i| \phi_{j}\rangle\langle\phi_{j}| ^\nu\tilde{\phi}_i\rangle, \end{equation} where $\delta_{\mu\nu}$ is the Kr\"onecker symbol.\\
\noindent Returning to the $H_2O^+$ example and applying these formulae, we obtain \begin{equation} A=\begin {pmatrix} +0.253128 & +0.000145 & -0.009774 \\ +0.000145 & +0.253451 & +0.003745 \\ -0.009774 & +0.003745 & +0.000461 \end{pmatrix} \end{equation} The diagonalization of the $A$-matrix gives the optimal collinear direction: \begin{equation} \vec{u_0}^t=\left(+0.0385908, -0.014789,+0.999146 \right) , \end{equation}
which is only slightly tilted with respect to the $z$-direction, and the system is quasi-collinear in this direction since $col=0.000028$ is very close to zero. This shows that the noncollinearity contribution to $\langle\Phi_{GCHF}|S^2|\Phi_{GCHF}\rangle$ could be further reduced by more than one order of magnitude by selecting the optimal quantization axis corresponding to $\vec{u_0}$ instead of the spatial $z$-axis. The perpendicularity contribution would decrease accordingly.
\section{Conclusion}
We have decomposed the expectation value of the spin operator $S^2$ into (i) a term formally identical to its expression for a ROHF reference wave function, (ii) a term called ``spin contamination'' because it is formally analogous to that derived by Amos and Hall \cite{Amos61} for DODS wave functions, (iii) a noncollinear contribution which can be minimized by following a procedure recently introduced \cite{Small2015}, (iv) a term called the ``perpendicularity contribution'' which arises from the release of the $z$-collinearity constraint but which should rather be regarded as arising from the release of the ``nonperpendicularity constraint'' on the spin-density. The collinearity and nonperpendicularity constraints are correlatives.
We have evaluated these four different contributions for a GCHF calculation on the $H_2O^+$ cation. Note that we used the IOTC relativistic Hamiltonian \cite{Barysz01} so that the term ``spin contamination'' is not really appropriate in this context, departure from the ROHF reference value being legitimate. However, the so-called spin contamination contribution has been found to dominate the noncollinearity and perpendicularity ones. This could be made even more so, by tilting the quantification axis to the optimal collinearity direction.
\begin{table}[ht]
\begin{center}
\begin{tabular}{cc}
\hline $N_\alpha$ & $+4.999546$ \\ $N_\beta$ & $+4.000454$ \\ $(\frac{N_\alpha}{2}-\frac{N_\beta}{2})(\frac{N_\alpha}{2}-\frac{N_\beta}{2}+1)$ & $+0.749091$ \\ $z$-noncollinearity& $+0.000461$ \\ $x,y$-nonperpendicularity& $+0.000427$ \\ spin contamination& $+0.007033$ \\ $\langle S^2\rangle$ &$+0.757013$ \\
\hline \end{tabular} \end{center}
\caption{Expectation value of $\langle S^2\rangle$ and related quantities for an H$_2$O$^+$ GCHF optimized wave function. The geometry parameters were $r_{OH}= 0.99192 \mathring{A}$, $\widehat{HOH}=101.411 deg$. The basis set consisted of the primitives Gaussian functions left uncontracted of Dunning's cc-pVDZ hydrogen and oxygen basis sets \cite{Dunning89}. The infinite-order two-component (IOTC) relativistic Hamiltonian of Barysz and Sadlej \cite{Barysz01} was employed. } \label{tab-contam}
\end{table}
\end{document} | arXiv |
A regular polygon has sides of length 5 units and an exterior angle of 120 degrees. What is the perimeter of the polygon, in units?
If an exterior angle has measure $120$ degrees, an interior angle has measure $60$ degrees. A regular polygon with $60$ degree angles must be an equilateral triangle, so the perimeter is $3(5)=\boxed{15}$ units. | Math Dataset |
\begin{document}
\title{Rapid Flow Behavior Modeling of Thermal Interface Materials Using Deep Neural Networks}
\author{Simon~Baeuerle, Marius~Gebhardt, Jonas~Barth, Andreas~Steimer and Ralf~Mikut \thanks{The work of R. Mikut was supported by the Helmholtz Association’s Initiative and Networking Fund through Helmholtz AI. \textit{(A. Steimer and R. Mikut contributed equally to this work.) (Corresponding author: S. Baeuerle.)}} \thanks{S. Baeuerle and R. Mikut are with the Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology (KIT), D-76344 Eggenstein-Leopoldshafen, Germany. (e-mail: [email protected])} \thanks{S. Baeuerle, J. Barth and M. Gebhardt are with the Robert Bosch GmbH, D-72762 Reutlingen, Germany.} \thanks{A. Steimer is with the Bosch Center for Artificial Intelligence (BCAI), Robert Bosch GmbH, D-71272 Renningen, Germany.} }
\markboth{} {Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for IEEE Journals}
\maketitle
\begin{abstract}
Thermal Interface Materials (TIMs) are widely used in electronic packaging.
Increasing power density and limited assembly space pose high demands on thermal management.
Large cooling surfaces need to be covered efficiently.
When joining the heatsink, previously dispensed TIM spreads over the cooling surface.
Recommendations on the dispensing pattern exist only for simple surface geometries such as rectangles.
For more complex geometries, Computational Fluid Dynamics (CFD) simulations are used in combination with manual experiments.
While CFD simulations offer a high accuracy, they involve simulation experts and are rather expensive to set up.
We propose a lightweight heuristic to model the spreading behavior of TIM.
We further speed up the calculation by training an Artificial Neural Network (ANN) on data from this model.
This offers rapid computation times and further supplies gradient information.
This ANN can not only be used to aid manual pattern design of TIM, but also enables an automated pattern optimization.
We compare this approach against the state-of-the-art and use real product samples for validation.
\end{abstract}
\begin{IEEEkeywords} Deep learning, electronics packaging, flow behavior, Thermal Interface Materials, thermal management \end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\IEEEPARstart{A}{utomotive} industry is putting an increasing effort into electric and autonomous vehicles.
Demand for efficient and reliable electronic components is rising accordingly.
This is valid for small Electronic Control Units (ECUs) used to control e.g. the engine, but also for power electronics components such as inverters or chargers.
Time-to-market tends to be shorter and is a crucial factor for global competitiveness.
While power ratings increase, tight restrictions are imposed on assembly space as well.
Thus, the thermal performance is a crucial factor in electronic packaging. \\
Thermal Interface Materials (TIMs) are widely utilized to lower the thermal resistance between individual components and thus enable an efficient heat transfer.
Typically, they are applied onto a cooling surface with a dispensing machine.
An example of dispensed TIM is shown on the left-hand side of Fig.~\ref{fig:overview}.
During the joining process of the heatsink, TIM is compressed and spreads over the surface.
The state after compression is shown on the right-hand side of Fig.~\ref{fig:overview}.
Design engineers determine the pattern, along which TIM is dispensed.
They need to consider multiple aspects.
The most evident is a \textit{high area coverage ratio} of the cooling surface with TIM after the joining process.
Along with the high heat conductivity of TIM, this results in the aforementioned low thermal resistance.
However, applying too large amounts of TIM, with excess material flowing beyond the cooling surface, leads to preventable \textit{material cost}.
This is especially relevant in high-volume series production, where even a small amount per part adds up to significant costs.
Sensitive electronic components or product features such as screw holes may be placed close to the cooling area.
They are regarded as \textit{taboo zones} during the design process and may not be covered by material. \\
A low coverage may be caused by simply applying too little TIM.
Another cause for low coverage are air entrapments, which develop during the joining process.
Air within closed contours such as circular shapes cannot escape while TIM is being compressed.
The formation of such \textit{voids} depends on the individual pattern shape and may not be easily recognized in all cases.
Furthermore, the design of a dispense pattern is directly linked to the \textit{cycle time} of the respective dispense process. \\
Apart from the pattern, the design process itself needs to be optimized as well.
The design process is relevant for the time-to-market and thus needs to be short.
The cost of human experts is also a relevant factor.
Recommendations regarding optimal patterns can help engineers during their work.
Specific guidelines exist for simple cooling area geometries such as rectangles~\cite{licari_adhesives_2011}.
However especially for larger and more complex surfaces, the dispense pattern needs to be adjusted for each individual product. \\
To evaluate a given pattern, the respective compressed state after joining needs to be known.
It can be acquired by simulating the flow behavior of TIM.
Computational Fluid Dynamics (CFD) simulations, carried out by highly specialized experts, are widely used.
Modeling and evaluation take time, but yield very accurate results.
Besides simulations, mechanical experiments are carried out with real product samples.
In an iterative fashion of trial-and-error, the dispense patterns are optimized by development engineers.
After several trials, a dispense pattern is defined.
However, mechanical tolerances are prevalent in real products.
When joining parts together, tolerances from multiple parts add up.
The high accuracy of CFD simulations, which is achieved with high efforts, needs to be weighed against the variations of real products.
A light-weight model with a lower accuracy but faster setup and computation times can fulfill the demands of dispense pattern design better. \\
CFD simulations of this kind are computationally expensive.
Artificial Neural Networks (ANNs) can be trained on data from simulation models.
They can typically be executed much faster.
Since a rather high number of training samples is needed, an automated simulation setup is typically necessary. \\
Furthermore, ANNs or other Machine Learning models are well suited to build Digital Twins, since they are generally fit to be fine-tuned by using real-world data.
They can support many design and production processes, e.g. for pose estimation from image data~\cite{baeuerle_cad--real:_2021} or for quality monitoring in resistance welding~\cite{zhou_machine_2022}.
Digital Twins are considered to be a \textit{significant enabler for Industry 4.0 initiatives}~\cite{chiabert_digital_2018}. \\
In this work, we present two flow behavior models for TIM.
They can be used to support development engineers both during manual dispense pattern design and by enabling an automated dispense pattern optimization.
The first flow behavior model is a light-weight heuristic.
The second one is an ANN.
The heuristic can be used to analyze a large range of different dispense patterns automatically.
Thus, training data can be generated for the ANN, which offers an even higher computational speed.
\begin{figure}
\caption{Material flow of Thermal Interface Material (TIM) during joining the heatsink of an Electronic Control Unit (ECU). Left: state before joining, right: state after joining.}
\label{fig:overview}
\end{figure}
The remainder of this paper is organized as follows.
Section~\ref{sec:relatedwork} provides an overview over the state-of-the-art methods, which are relevant for the dispense pattern design process.
In Section~\ref{sec:simulation_methods}, we give a detailed insight into our light-weight heuristic.
It further includes the extension with an ANN and specific details on spatial resolution and the training setup.
The experiments, which we carry out to validate both of our models, are described in Section~\ref{sec:experimental_setup}.
Results in Section~\ref{sec:results} include a study of the achieved computation speed and the accuracy both on samples from the laboratory and on a real product.
Advantages and limitations of the heuristic itself as well as the combination with an ANN are discussed in Section~\ref{sec:discussion}.
August 8th, 2022
\section{Related Work}\label{sec:relatedwork}
Several works have highlighted that a high surface coverage with TIM enhances thermal performance of an electronic package.
Ekpu et al.~\cite{ekpu_effects_2012} set up a numeric simulation model including a chip, a heatsink and a TIM layer between both.
They analyze the influence of TIM area coverage on thermal resistance.
They report a lower thermal resistance with higher coverage percentages and recommend a coverage ratio of at least 75\,\%.
They anticipate that their results will aid design engineers.
Kesarkar et al.~\cite{kesarkar_how_2019} also set up a numeric simulation model.
Their model reproduces the thermal management problem as found in an ECU, with a TIM layer below a heat sink.
They analyze different TIM coverage percentages in various configurations and report a better thermal performance for a higher TIM area coverage.
Gowda et al.~\cite{gowda_voids_2004} state that \textit{the negative effect of [...] voids on the thermal resistance of a TIM layer can be devastating}. \\
CFD simulation is a powerful tool to support design engineers during the design of dispense patterns.
They have been used in the past both to model thermal performance and to model the flow of fluid materials.
For example, Lee et al.~\cite{lee_investigation_2000} analyze both the heat conductance within a thermal package and the heat transfer to ambient air and compare different techniques to enhance heat dissipation.
Comminal et al.~\cite{comminal_numerical_2018-1} use CFD simulations to model the flow of extruded material in additive manufacturing.
They study the extrusion and deposition of highly viscous material with different settings of parameters such as nozzle velocity or extrusion velocity.
This demonstrates the feasibility of using CFD simulations to model the flow behavior of TIM materials. \\
CFD simulations require a definition of the material behavior, which may be complex.
Thermal paste is typically made up of two components, e.g. a silicone grease filled with aluminum oxide particles.
In such a case, the viscosity may change both with shear stress and filler ratio~\cite{prasher_thermal_2006}.
The rheology of TIM has further been studied (see e.g.~\cite{lin_rheological_2009, sinh_thermal_2012}).
CFD simulations typically aim to model both complex material behavior and geometries accurately. \\
Gu et al.~\cite{gu_novo_2018} create training data from a Finite Element (FE) model.
They modify the distribution of two materials within a composite material structure and solve for mechanical properties such as toughness and strength.
They train both a linear model and a Convolutional Neural Network (CNN) on this data and speed up computation times by a factor of 250, while maintaining sufficient accuracy.
Koeppe et al.~\cite{koeppe_efficient_2018} also train an ANN on data from an FE model.
They calculate the mechanical stress for a lattice structure at given load conditions.
The computation time for a single FE simulation is approximately 5-10 hours, while the ANN takes less than one second.
They use 85 training examples to train an ANN, which has 16 output features. \\
A major limitation of all proposed approaches is the efficient generation of a training dataset that is sufficient for the design of complex ANNs.
Those prerequisites are difficult to fulfill with experiments or state-of-the-art numeric simulation models.
Therefore, we use our proposed heuristic flow behavior model to create a sufficiently high number of training samples.
\section{Simulation methods}\label{sec:simulation_methods}
In this section, we give an overview of our approach as depicted in Fig.~\ref{fig:approach}.
First, we describe the in- and output data representation and the 2D discretization, which is used as a pre-processing step.
We then take a closer look at each of our proposed flow behavior models: a heuristic and an ANN trained on data from the heuristic.
Specific setup details regarding e.g. dataset generation and training procedure are outlined in Section~\ref{sec:experimental_setup}. \\ \\
\begin{figure}
\caption{Overview of our approach for a single line of TIM. Inputs are the start point coordinate, feed rate and end point coordinate. The TIM distribution is spatially discretized before and after the compression step.}
\label{fig:approach}
\end{figure}
\subsection{Data representation and 2D discretization}
\begin{figure}
\caption{Visual representation of the 2D discretization for a line segment}
\label{fig:visual_discretization}
\end{figure}
We define the dispense pattern, which is the path along which TIM is applied, using a polygonal chain.
In the simplest case, this equals a single line with five parameters:
both endpoints of this line have continuous x- and y-coordinates.
The fifth parameter is the feed rate of TIM along the line segment.
For longer patterns, we iteratively append another point and a respective feed rate.
The input parameters are shown on the left-hand side of Fig.~\ref{fig:approach}.
The state after dispensing is represented as a two-dimensional grid.
The number on each grid cell represents the amount of TIM in each cell.
The previously introduced input parameterization is transferred onto this two-dimensional representation.
This process is visualized as \textit{2D discretization} in Fig.~\ref{fig:approach}.
We apply \textit{Unweighted Area Sampling}~\cite{hughes_computer_2014}, which is a technique in the field of computer graphics to draw anti-aliased lines.
It works as follows in our case:
each segment of the pattern is assigned a width of one, i.e. all lines become rectangles.
The intersection of each grid cell with each rectangle is calculated.
The amount, which is specified via the feed rate for each line segment, is assigned to each grid cell proportional to this intersection.
Fig.~\ref{fig:visual_discretization} contains a visual depiction of how the amount for each grid cell is calculated.
This discretized state of the spatial TIM distribution after dispensing serves as input to a flow behavior model, which outputs the state after compression.
This output is again a spatial distribution of TIM and is discretized in the same way.
An example is shown on the right-hand side of Fig.~\ref{fig:approach}.
The flow behavior model can be either the heuristic or an ANN.
Both take the dispensed state as input and output the compressed state.
\subsection{Heuristic}
\begin{algorithm}[!t]
\caption{Pseudocode of our heuristic}\label{alg:pseudocode}
\begin{algorithmic}[1]
\STATE {\textsc{COMPRESS}}(initial)
\STATE \hspace{0.5cm}artificial\_height = maximum(initial)
\STATE \hspace{0.5cm}compressed = initial
\STATE \hspace{0.5cm}\textbf{while}(artificial\_height \textgreater~1)
\STATE \hspace{0.5cm}~reduce\_artificial\_height()
\STATE \hspace{0.5cm}~\textbf{while}(max(compressed) \textgreater~artificial\_height)
\STATE \hspace{0.5cm}~~temp = zeros(max(x\_coords), max(y\_coords))
\STATE \hspace{0.5cm}~~\textbf{for} x \textbf{in} x\_coords
\STATE \hspace{0.5cm}~~~\textbf{for} y \textbf{in} y\_coords
\STATE \hspace{0.5cm}~~~~diff = compressed[x,y] - artificial\_height
\STATE \hspace{0.5cm}~~~~\textbf{if}(diff \textgreater~0)
\STATE \hspace{0.5cm}~~~~~compressed[x,y] -= diff
\STATE \hspace{0.5cm}~~~~~temp[next\_neighbors(x,y)] += diff $/$ 4
\STATE \hspace{0.5cm}~~compressed += temp
\STATE \hspace{0.5cm}\textbf{return} compressed
\end{algorithmic} \end{algorithm}
\begin{figure}
\caption{Visual representation of an exemplary iteration of the heuristic. Left: top view; right: sectional view A-A.}
\label{fig:visual_heuristic}
\end{figure}
We now look into the details of our proposed heuristic.
Algorithm~\ref{alg:pseudocode} contains the respective pseudocode.
Fig.~\ref{fig:visual_heuristic} visualizes how the material spreads to neighboring cells during a single iteration of our algorithm.
First, we define an artificial height value~$h_{art}$ corresponding to the maximum TIM amount.
We then enter a loop that is executed until we reach a final~$h_{art}$ equal to one.
During this loop, we iteratively reduce~$h_{art}$.
While the TIM amount in any grid cell exceeds the current~$h_{art}$, we loop over all grid cells.
For each cell, we check its current TIM amount~$h_{cur}$ against~$h_{art}$.
If~$h_{cur} > h_{art}$, we divide the excess amount $h_{exc} = h_{cur} - h_{art}$ by four and add it to each of the next neighboring cells in a temporary array.
We subtract $h_{exc}$ from the current cell of the compressed state.
After we have looped over every cell, we update the compressed state with the temporary array.
This avoids that the order of the cells within the loop has an influence on the result.
Increasing the dispensed amount on the input side has the same effect as compressing down to a lower gap height.\\
\subsection{Artificial Neural Network}
The ANN is trained on data generated by the heuristic flow behavior model.
As such, an advantage with regard to accuracy can not be expected.
However, it can map the complex relationship between in- and output more efficiently.
Programming libraries such as \textit{Keras}~\cite{chollet_keras_2015} conveniently implement ANNs ready to be executed in parallel on a Graphical Processing Unit (GPU).
The computation can be executed quickly.
Furthermore, gradient information is provided. \\
ANNs can be made up of various types of layers.
Well-known architectures such as \textit{VGG}~\cite{simonyan_very_2014}, \textit{ResNet}~\cite{he_deep_2015} or \textit{Inception}~\cite{szegedy_rethinking_2016} rely on the use of convolutional layers followed by dense layers.
They have proven to work very well with image data.
Since our data can be interpreted as gray scale images, we opt to work with a similar architecture.
Details regarding the architecture definition and the training process are presented in Section~\ref{sec:experimental_setup}. \\
\section{Experimental setup}\label{sec:experimental_setup}
This section contains information on how we set up our models and experiments.
This includes the performance benchmarking for both flow behavior models.
For the ANN, we describe the generation of the training data, the architecture design and the training process.
For the experimental data, we describe the laboratory setup.
\subsection{Training the ANN}
The training dataset consists of 200\,000 automatically generated random dispense patterns.
The patterns are similar to the ones used during benchmarking as depicted in Fig.~\ref{fig:randompaths}.
We obtain the architecture of the ANN from an automated hyperparameter optimization.
A template for the architecture is visualized in Fig.~\ref{fig:ANNarchitecture}.
The layers indicated in blue are always used.
Yellow layers are optionally activated by the optimizer.
We vary the number of convolutional layers from two to six and the number of dense layers from zero to two.
The convolutional layers have either 8, 32, 128 or 512 filters with a kernel size of either three or five.
If present, each dense layer has 2500 neurons.
The batch size may be 8, 32 or 128.
The optimizer to train the ANN is \textit{Adam}~\cite{kingma_adam:_2014} with a learning rate between $10^{-5}$ and $10^{-2}$.
We use the activation function \textit{ReLu} for all layers except for the last, where we apply the \textit{Sigmoid} function.
The loss function to be optimized is \textit{binary cross-entropy}.
The weights of the ANN are initialized randomly.
Therefore, training an ANN multiple times on the same dataset yields fluctuating results.
Preliminary manual trials of architecture tuning have shown convergence issues during the training of some hyperparameter configurations.
To account for fluctuating performance and convergence issues, we train 10 ANNs for each configuration during the hyperparameter optimization.
We return the lowest loss over the 10 respective runs value back to the optimizer.
We use the hyperparameter optimization framework \textit{Optuna}~\cite{akiba_optuna:_2019}.
The hyperparameter optimization runs for 1\,000 iterations.
To make a high number of iterations possible, we train within each iteration for one epoch on 16\,000 patterns and validate on 4\,000 patterns.
After finishing the hyperparameter optimization, we fine-tune the ANN by training on 160\,000 patterns for 10 epochs and validate using 40\,000 patterns. \\
When using a different resolution, the ANN needs to be retrained as just described. \\
The ANN is trained for a constant output gap height.
However, a change in the gap height has the same effect on the result as changing the input amounts.
When using the ANN, different gap heights can thus be accounted for by scaling the input amounts respectively. \\
Since the ANN has a fixed input format, the ANN input dimensionality is set according to the maximum pattern length.
To process shorter patterns, we simply append pattern segments with zero amounts up to the maximum length. \\
\begin{figure}
\caption{Architecture of the Artificial Neural Network (ANN). Hyperparameters such as the number of optional layers (yellow) are optimized. Mandatory layers (blue) are always included.}
\label{fig:ANNarchitecture}
\end{figure}
\subsection{Physical experiments}\label{sec:physical_experiments}
To validate our model, we carry out physical experiments.
We dispense TIM in various different patterns and compress it as when joining a heatsink. \\
The machine used for dispensing is an automated Computerized Numerical Control (CNC) machine.
Its type is almost identical to the ones that are used in automotive series production.
We transfer the patterns into G-code, which is a format that is readable on this kind of machine.
The TIM is dispensed onto glass plates with a dimension of 70$\times$70\,mm.
Thin metal plates with a carefully machined height are put on the edges of the glass plate.
This ensures a uniform final gap height when putting a second glass plate on top and pushing it downwards.
The dispensed and compressed states of TIM are shown for an exemplary pattern in Fig.~\ref{fig:laboratoryexperiment}.
An image of the compressed state is recorded.
An automatic segmentation of the blue color hue is applied and yields a representation in the same discretized format as introduced previously.
Thus, each pixel is either entirely full or empty.
Since the final gap height is low, the error at the area boundary made by this assumption is sufficiently small.
After segmentation, the resolution is scaled down to the same resolution as in the heuristic model.
During downscaling, we apply a linear interpolation between neighboring cells.
The zoom level is adjusted uniformly for all experiments.
This is done as a post-processing step and has the same effect as adjusting the vertical camera position.
Since the experiments are carried out manually, some samples are shifted slightly.
Those translational errors are corrected by re-centering each pattern during post-processing.
The post-processing does, of course, not involve a modification of the overall pattern shape, since this would distort the error evaluation of the model. \\
We further evaluate the TIM flow behavior in a physical experiment using a real product sample.
This product involves a Printed Circuit Board (PCB) with mounted electronic components to be cooled.
The housing is pressed onto the PCB and TIM spreads over both joining partners to form a thermal connection.
In contrast to our laboratory experiments as just described, we cannot control the actual gap height that precisely in this case:
multiple parts are joined together, with each part having individual mechanical tolerances.
Furthermore, the PCB itself bends during the joining process.
We thus use this experiment for a qualitative rather than a quantitative assessment.
For comparison with our model we choose the total TIM amount to fit the actually observed amount.
Instead of evaluating the general fit of our model, we evaluate the fit with respect to only the shape of the predicted coverage area outline.
\begin{figure}
\caption{Laboratory experiments using transparent glass plates to compress TIM. Left: state before compression; right: state after compression.}
\label{fig:laboratoryexperiment}
\end{figure}
\subsection{Benchmarking}
During this study, we use a resolution of 50$\times$50 cells.
This resolution was selected through preliminary trials with different resolutions and represents a compromise between sufficient accuracy and computational effort. \\
One part of our benchmarking covers the error of our simulation models.
This involves the comparison of the entire simulation pipeline consisting of the discretization and either flow behavior model to the physical experiments.
It further involves the error between the outputs of the heuristic and the ANN.
In all cases, we calculate the absolute error of the respective compressed states:
\begin{equation}
e_{comp} = \sum_{i=1}^{50} \sum_{j=1}^{50} | m_{a,comp,ij} - m_{b,comp,ij} |, \end{equation}
with $m_{a,comp,ij}$ and $m_{b,comp,ij}$ being the TIM amounts per grid cell $(i,j)$ in the compressed states.
The indices $a$ and $b$ refer to either the experiment and a flow behavior model or the heuristic and the ANN.
We then divide the absolute error by the sum of the covered cells
\begin{equation}\label{eq:erelsingle} e_{rel} = \frac{e_{comp}} {\sum_{i=1}^{50} \sum_{j=1}^{50} m_{a,comp,ij}} \end{equation}
and calculate its mean
\begin{equation}\label{eq:erelmean} \overline{e}_{rel} = \frac{1}{N_{pat}} \sum_{k=1}^{N_{pat}} e_{rel,k} \end{equation}
across $N_{pat} = 50$ dispense patterns.
This relative error measure yields a better intuition of the model accuracy across the different dispense patterns. \\
Besides the error, we also evaluate the computation speed for our model.
Both flow behavior models are called from a Python script.
The library \textit{timeit}~\cite{python_software_foundation_timeit_2022} measures the computation time of a code snippet.
Setup code, such as code for loading data and models, is executed separately and not included into the measurement.
Background processes may interfere with the program being measured and spuriously lengthen the computation time.
For this reason, it is specifically not recommended to report mean and average computation times for multiple runs of the same code~\cite{python_software_foundation_timeit_2022}.
Thus, we execute $N_{runs} = 10$ runs per measurement and store the minimum value
\begin{equation} t_{min} = \min t_{l}, l \in \{1, ... , N_{runs}\} \end{equation}
for further evaluation.
Since measurement time varies for different patterns, we measure the computation time for the compression of $N_{pat}$ individual patterns.
Examples are shown in Fig.~\ref{fig:randompaths}.
\begin{figure}
\caption{Three exemplary patterns used in our computation time and error benchmarking. Top row: before compression, bottom row: after compression.}
\label{fig:randompaths}
\end{figure}
We report the mean value \begin{equation} \overline{t} = \frac{1}{N_{pat}} \sum_{n=1}^{N_{pat}} {t}_{min,n} \end{equation} for the computation time $t_{min,n}$ across $N_{pat} = 50$ paths and the respective standard deviation \begin{equation} s = \sqrt{\frac{1}{N_{pat}-1} \sum_{n=1}^{N_{pat}} ({t}_{min,n} - \overline{t})^{2} }. \end{equation}
The computation time $t_{min,n}$ for an individual pattern is, as just described, the minimum time across 10 runs per individual pattern.
All computations are executed on a workstation with an INTEL E5-2680 processor and four GPUs of type NVIDIA RTX 2080Ti. \\
\section{Results}\label{sec:results}
This section contains the results regarding the heuristic, the ANN and the physical experiments.
We record the setup and computation time for all three approaches and calculate the relative absolute error of the compressed states as described previously.
Results are listed in Table~\ref{tab:model_comparison}.\\
First, we give a deeper insight into the process of setting up the ANN.
We determine the hyperparameters by carrying out a hyperparameter optimization as described in Section~\ref{sec:experimental_setup}.
We obtain the following architecture:
the first layers are five convolutional layers with 128 filters and a filter size of 5$\times$5.
They are followed by the mandatory convolutional layer with one filter and a filter size of 3$\times$3.
No dense layers are appended.
The best remaining hyperparameters are a batch size of 8 and a learning rate of 0.0011.
The entire hyperparameter optimization process with 1\,000 iterations takes one week.
The fine-tuning of the final architecture takes about 2 hours.
It takes about one week to create the training dataset for the ANN, which involves the simulation of 200\,000 patterns.
Those steps need to be carried out only once for a specific input resolution. \\
We compare the trained ANN with the original heuristic approach.
The error according to Equation~\ref{eq:erelmean} is \textbf{5\,\%}.
Fig.~\ref{fig:ANNheuristic} shows the output of the trained ANN as compared to the heuristic.
While some errors are prevalent in the outermost cells, the ANN manages to fit the data well. \\
We now look closer into the laboratory experiments, which are carried out as described in Section~\ref{sec:physical_experiments} and form an independent test dataset with unseen patterns.
An exemplary sample of those dispense patterns is shown in Fig.~\ref{fig:randompaths}.
Each experiment takes 30\,minutes.
This time includes sample preparation, dispensing, compression and post-processing of the results.
The experiments serve as ground truth and therefore are listed with an error equal to zero.
We are aware that they are still subject to error sources such as mechanical tolerances or measurement noise. \\
For both of our flow behavior models, we calculate the mean relative error with respect to the experiments according to Equation~\ref{eq:erelmean}.
The heuristic and the ANN are both able to predict the compressed shape well, with an error of \textbf{11\,\%} and \textbf{13\,\%} respectively across the 50~evaluated patterns.
A visual comparison of the ANN with the experiments is presented in Fig.~\ref{fig:experimental_validation}.
Further samples are shown in the appendix.
The left column shows the compressed state as output from the ANN for three different dispense patterns.
The middle column shows the compressed state acquired from the experiments.
The right column shows the difference of both along with the error score after Equation~\ref{eq:erelsingle}.
Errors occur mainly in outermost cells of the covered area. \\
Patterns with high errors are often characterized by the entrapment of air.
An example is shown in Fig.~\ref{fig:void}, where the relative error according to Equation~\ref{eq:erelsingle} is 21.4\,\%.\\
The initial dispense pattern can be calculated rather straightforward from the pattern parameters as described previously.
The setup time for simulating a certain dispense pattern is therefore relatively low and takes up to one minute.
This procedure is equal for both flow behavior models. \\
The computation time amounts to \textbf{3.41\,s} on average for the heuristic and displays a rather large variance across different patterns.
The ANN can be executed consistently in \textbf{0.07\,s}.
\begin{table}[!t]
\centering
\caption{Comparison of simulation methods during deployment in manual pattern design}\label{tab:model_comparison}
\begin{tabular}{K{22mm} X{17mm} X{14mm} X{17mm}}
\textbf{Method} & \textbf{Mean relative error} & \textbf{Setup time} & \textbf{Computation time} \\
& $\overline{e}_{rel}$ & & $\overline{t}$ \\
\rule{0pt}{11pt} \textit{Experiment} & 0 & 30\,$min$ & - \\
\rule{0pt}{11pt} \textit{CFD} & - & 10\,-\,60\,\,$min$ & 60\,-\,120\,$min$ \\
\rule{0pt}{11pt} \textit{Numeric heuristic} & 11\,\% & 1 \,$min$ & 3.41\,$\pm$\,2.71$s$ \\
\rule{0pt}{11pt} \textit{Neural network} & 13\,\% & 1 \,$min$ & 0.07\,$\pm$\,0.001$s$ \\
\end{tabular} \end{table}
\begin{figure}
\caption{Output of the ANN as compared to data from the heuristic model. Left column: output of ANN for different dispense patterns, center column: output from heuristic, right column: difference of both and error score after Equation~\ref{eq:erelsingle}.}
\label{fig:ANNheuristic}
\end{figure}
\begin{figure}
\caption{Validation on experimental data. Left column: output of ANN for different dispense patterns, center column: experimental data, right column: difference of both and error score after Equation~\ref{eq:erelsingle}.}
\label{fig:experimental_validation}
\end{figure}
\begin{figure}
\caption{Overlay with the compressed TIM on a real ECU. Left: image of compressed TIM in a physical experiment; right: additional overlay of the compressed state as obtained from the heuristic.}
\label{fig:ECU}
\end{figure}
\section{Discussion}\label{sec:discussion}
\begin{figure}
\caption{Dispense pattern involving a void area due to air entrapment. Left: output of ANN, center: experimental data with marked air entrapment, right: difference of both and error score after Equation~\ref{eq:erelsingle}.}
\label{fig:void}
\end{figure}
The CFD simulations can be considered state-of-the-art in this field of simulation.
We do not aim to analyze them deeper within this work, since they have been used extensively for over 20 years in a wide range of different applications.
We have shown our samples to two experienced simulation experts and asked them for their professional opinion.
They are regularly occupied with simulations of similar dispense patterns.
They estimate the setup time for such dispense patterns to be in the range of 10\,min up to 60\,min.
10\,min would include a very basic setup without much detail.
60\,min would include a more elaborate setup, e.g. a fine modeling of 3D roundings of the dispense pattern.
The computation time is estimated by consulting the simulation logging files for similar patterns and is in the range of 60\,-\,120\,min.
Due to the high effort of CFD simulations, we omit the exact error calculation on our 50 samples.
We do not claim that our new simulation approach offers an advantage over the CFD simulation with regard to the error.
However regarding computational speed, they are clearly outperformed by both of our proposed surrogate flow behavior models. \\
The heuristic model captures the flow behavior of TIM during compression generally well.
It is shown that the overall shape is almost identical in most cases.
The difference to experimental trials occurs mainly at the outer areas.
The accuracy is high enough to support manual development work.
The computation time is low. \\
While the heuristic model can be executed within a few seconds, the ANN delivers results almost instantly.
The computation time improves by a factor of almost 50, but the accuracy suffers only marginally.
The application of our model can thus save a significant effort during dispense pattern design for electronics packages.
CFD simulations and physical experiments will still be necessary, but only for fine-tuning during the last design cycles.
This is specifically the case in tests involving the design limits, e.g. experiments, which cover the highest expected mechanical tolerances.
We do not claim to replace CFD simulations or experiments fully, but rather to reduce the number of trials. \\
The speed-up, which is achieved by using the ANN, allows the use not only for manual pattern design, but also for an automated pattern design.
The ANN further supplies gradient information.
Thus, the advantage regarding computational speed would be even higher, if the flow behavior model is combined with a gradient-based optimizer.
Automated pattern design with state-of-the-art optimizers could explore a much larger range of the design space and thus lead to better solutions than those found via manual trial-and-error iterations.
Several solution candidates can be generated by executing the optimization with slightly varied settings.
The design engineer can then choose the most promising patterns. \\
During training of the ANN, we only used open source software libraries.
This allows the integration of our model into a custom user interface, i.e. independently from proprietary simulation software.
Especially a web-based implementation could provide easy access to design engineers without any license costs as seen with e.g. CFD supported tools. \\
The hyperparameter optimization supported the training process of the ANN.
Compared to preliminary manual hyperparameter tuning, the automated hyperparameter optimization yielded better results.
This is valid not only with regard to the model performance, but also with regard to the convergence behavior of the training process.
The shown hyperparameters parameterize an ANN, which can fit our dataset very well.
We thus recommend using an automated hyperparameter optimization when training on data of this kind.\\
The duration to set up the ANN is quite long.
The creation of the training dataset and the hyperparameter optimization takes almost two weeks in total.
Both processes run fully automatic and do not need any human intervention.
The advantage with regard to computation time is realized during actual usage of the model for dispense pattern design.
In contrast to the initial setup of the ANN, the duration of the design process is relevant for the time-to-market and needs to be carried out for each individual product.
It is thus beneficial to invest more time into training data generation and ANN training and in turn to speed-up the individual design process. \\
While the accuracy of our model is generally good, there is one specific drawback we would like to discuss further.
As mentioned before, we cannot predict the compressed shape well if air entrapments are present.
They are not desired in practical application, since the air significantly impairs thermal performance~\cite{gowda_voids_2004}.
Once the TIM pattern forms such an enclosed void area and is then compressed further, the entrapped air is put under pressure as well.
This pressure counteracts the material flow into the void area.
This effect is not taken into account by our model.
It can be seen in Fig.~\ref{fig:void} that our model predicts a quite small void, while the experimental data suggests that the TIM flows rather towards the outer areas than towards the center.
While this is an evident limitation of our model, it is not relevant for practical application:
since a pattern design involving voids will be neglected, an exact prediction of the compressed state is not necessary in those cases.
The predicted shape is generally still reasonable even in those cases. \\
The example of a real ECU depicted in Fig.~\ref{fig:ECU} shows that the model fits the TIM behavior not only in a laboratory environment, but also in the real product.
Our model assumes a infinitely wide planar surface and thus might model the compression behavior even beyond the physical area boundaries.
In real products, the cooling surface area is bounded and excess TIM flowing beyond the boundaries will not be compressed any further.
The experiments we conducted in the laboratory ensure a complete compression of the entire shape.
While the real product is certainly an important benchmark, the laboratory experiments give a deeper insight into the model accuracy.
The effect of compressed material extending beyond the cooling area can be observed for example at the bottom part of Fig.~\ref{fig:ECU}.
This indicates that the dispense pattern, which is the result of a conventional pattern design process, could be improved.
When the dispense pattern would fit the cooling area perfectly, no overflowing material would be visible. \\
\section{Conclusion}\label{sec:conclusion}
We present two flow behavior models, which can quickly predict the flow behavior of TIM when joining the heatsink.
Our proposed heuristic aids design engineers during the definition of the initial dispense pattern by providing a quick and easy method to estimate the compressed state.
This reduces the need for elaborate CFD simulations and manual experiments with product samples.
The time-to-market can thus be shortened for a variety of ECUs and power electronics components.
Training an ANN on data from our heuristic reduces accuracy only slightly, but yields a significant speed-up of computation time.
Using an ANN thus makes the manual design process even more convenient.
It further allows the efficient usage of optimizers for an automated dispense pattern optimization.
We show that the predicted compressed state fits experimental results well.
This is true not only in the laboratory, but also for a real ECU.
Future work includes the development of a method for automated dispense pattern optimization on the basis of this model.
\appendix \section{}
In Fig.~\ref{fig:experimental_validation_fullA}, we present further examples from our experimental dataset.
\begin{figure}
\caption{Validation on experimental data. Left column: output of ANN for different dispense patterns, center column: experimental data, right column: difference of both and error score after Equation~\ref{eq:erelsingle}.}
\label{fig:experimental_validation_fullA}
\end{figure}
\section*{Acknowledgment}
We thank Ralph Nyilas and Andras Horvath for the helpful discussions regarding the flow behavior of TIM.
We thank Balazs Solymossy and Istvan Horvath from the simulation department for their professional opinion regarding CFD models.
We further thank Hack-Min Kim, Vivien Reuscher and Roderich Zeiser for their support in carrying out the laboratory experiments.
\section*{Author statement}
We describe the individual contributions of Simon Baeuerle (SB), Marius Gebhardt (MG), Jonas Barth (JB), Andreas Steimer (AS) and Ralf Mikut (RM) using CRediT~\cite{brand_beyond_2015}: \textit{Writing - Original Draft}: SB; \textit{Writing - Review \& Editing}: JB, AS, RM; \textit{Conceptualization}: SB, JB, AS, RM; \textit{Investigation}: SB, MG; \textit{Methodology}: SB, AS; \textit{Software}: SB, MG; \textit{Supervision}: JB, AS, RM; \textit{Project Administration}: JB, RM; \textit{Funding Acquisition}: JB, RM.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document} | arXiv |
Started work on a page on Bilinear regression.
Interesting. What's the relationship between bilinear regression and the "error in variables" problem?
Hi Jan, I can't access that link, but I found this on wikipeida. Assuming it's talking about the same thing, I think they're talking about two different problems. One of the motivations in the papers I've found for bilinear regression is providing a way to enforce sparsity in the model: if each sample is an $A \times B$ matrix,then simply flattening it to a vector and doing linear regression gives a model with $AB$ coefficients, whereas bilinear regression with $m$ pairs of $(u_i,v_i)$ has $m (A+B)$ coefficients. This reduction has been obtained by some loss of flexibility, but the reports I've read seem to indicated that overall it does well at preventing over-fitting. If there is a connection I'd be very interested to know more about it.
I'm going to have a go at using bilinear regression on the El Nino dataset, so the entry is partly just a place to store my calculations of the derivatives -- which are done for the way more complicated logistic regression model in the papers I've found.
Comment Source:Hi Jan, I can't access that link, but I found this on [wikipeida](http://en.wikipedia.org/wiki/Errors-in-variables_models). Assuming it's talking about the same thing, I think they're talking about two different problems. One of the motivations in the papers I've found for bilinear regression is providing a way to enforce sparsity in the model: if each sample is an $A \times B$ matrix,then simply flattening it to a vector and doing linear regression gives a model with $AB$ coefficients, whereas bilinear regression with $m$ pairs of $(u_i,v_i)$ has $m (A+B)$ coefficients. This reduction has been obtained by some loss of flexibility, but the reports I've read seem to indicated that overall it does well at preventing over-fitting. If there is a connection I'd be very interested to know more about it. I'm going to have a go at using bilinear regression on the El Nino dataset, so the entry is partly just a place to store my calculations of the derivatives -- which are done for the way more complicated logistic regression model in the papers I've found.
If you send your email address to me at empirical_bayesian -at- ieee -dot- org happy to send you a copy. Mystery to me why you can't access. I could put it in Google.
This page is unable to be displayed because the website owner has disabled directory indexing for this site and has not provided an index.html file.
Please contact the website owner to find the correct URL.
If this is your site you may wish to add an index.html page with more useful information.
I've been a bit lazy in putting in references, but one of the papers I've been looking at is Sparse Bilinear Logistic Regression. One thing I've found is that there seem to be several things that go under the name of bilinear regression, but it's the kind of formulation in that paper that I'm looking at.
I've sent you an email to give you my email address.
Comment Source:I've been a bit lazy in putting in references, but one of the papers I've been looking at is [Sparse Bilinear Logistic Regression](ftp://ftp.math.ucla.edu/pub/camreport/cam14-12.pdf). One thing I've found is that there seem to be several things that go under the name of bilinear regression, but it's the kind of formulation in that paper that I'm looking at. I've sent you an email to give you my email address.
We couldn't find a page for the link you visited. Please check that you have the correct link and try again.
If you are the owner of this domain, you can setup a page here by creating a page/website in your account.
Suddenly figured why trying to get Jan's document didn't work: the forum has merged two dashes into an n-dash in the output, so pasting doesn't work. If you select "Source" and paste the link from there, it works and I can see the paper.
Comment Source:Suddenly figured why trying to get Jan's document didn't work: the forum has merged two dashes into an n-dash in the output, so pasting doesn't work. If you select "Source" and paste the link from there, it works and I can see the paper.
Comment Source:This seems like a relevant reference in Jan's paper: ] N. Cahill, A. C. Parnell, A. C. Kemp, B. P. Horton, "Modeling sea-level change using errors-in-variables integrated Gaussian processes", 24th December 2013, http://arxiv.org/pdf/1312.6761.pdf .
Just to note I've been trying to figure out a way to implement the bilinear model in a not-too-wasteful way in an array language like Matlab or NumPy (in order to avoid writing low-level code myself), but it's not proving obvious to me how to do this.
Comment Source:Just to note I've been trying to figure out a way to implement the bilinear model in a not-too-wasteful way in an array language like Matlab or NumPy (in order to avoid writing low-level code myself), but it's not proving obvious to me how to do this.
Just for the record, here's some Matlab/Octave code for evaluating a set of bilinear regression coefficients. I'm not putting it on a more permanent place because it's so ugly, inefficient and is unbelievably slow even for smaller test examples when put into Octaves fminunc optimization function. I think I do need to think about writing some lower level code that will perform even remotely reasonably on the El Nino data. | CommonCrawl |
\begin{definition}[Definition:Existential Quantifier/Exact]
The symbol $\exists_n$ denotes the existence of an exact number of objects fulfilling a particular condition.
:$\exists_n x: \map P x$
means:
:'''There exist exactly $n$ objects $x$ such that $\map P x$ holds'''.
It is a variant of the existential quantifier $\exists$: '''there exists at least one'''.
\end{definition} | ProofWiki |
\begin{document}
\title{Characterization of transport optimizers via graphs and\\ applications to Stackelberg-Cournot-Nash equilibria } \author{Beatrice Acciaio\thanks{ETH Zurich, Department of Mathematics, \emph{[email protected]}}\;\; and\;\, Berenice Anne Neumann\thanks{Trier University, Department IV - Mathematics, \emph{[email protected]}}} \maketitle
\allowdisplaybreaks
\abstract{ We introduce graphs associated to transport problems between discrete marginals, that allow to characterize the set of all optimizers given one primal optimizer. In particular, we establish that connectivity of those graphs is a necessary and sufficient condition for uniqueness of the dual optimizers. Moreover, we provide an algorithm that can efficiently compute the dual optimizer that is the limit, as the regularization parameter goes to zero, of the dual entropic optimizers. Our results find an application in a Stackelberg-Cournot-Nash game, for which we obtain existence and characterization of the equilibria.\\[0.4cm] {\emph{Key words:} optimal transport, connected graphs, entropic regularization, Stackelberg-Cournot-Nash equilibria} }
\section{Introduction.}
Starting with the seminal works by Monge \cite{monge1781memoire} and Kantorovich \cite{kantorovich1942translocation}, optimal transport theory became a vibrant field of research with many applications in the most various fields, including economics, finance and machine learning, see e.g. \cite{galichon2018optimal,acciaio2016model,acciaio2020causal,backhoff2020adapted,acciaio2020cot,acciaio2021cournot}. Optimal transport is concerned with the question of how a given probability distribution can be coupled with another given probability distribution in a cost efficient way. Under weak conditions, one can show equivalence of this (primal) problem and a dual one. In the case of continuous measures on Euclidean spaces, in many situations of interest the dual optimizer is unique (up to translation), see \cite[Appendix B]{BerntonUniqueness}, \cite[Cor. 2.7]{delBarrioUniqueness}, \cite[Prop. 7.18]{santambrogio2015optimal} and \cite{StaudtNonUniqueness}. However, in the case where the marginals have finite support, there are several natural examples illustrating how uniqueness can easily fail. To the best of our knowledge, only a sufficient criterion for uniqueness has been described in \cite{StaudtNonUniqueness}. Since non-uniqueness happens regularly, it is of interest to determine the set of all dual optimizers. However, the classical algorithms for the optimal transport problem (see \cite{ComputationalOT} for an overview) output only one (nearly optimal) solution.
In the present work, we use tools from graph theory in order to characterize the set of all optimizers in optimal transport problems with finitely supported marginals. Other authors have been working at the intersection of the two theories, for example studying optimal transport on graphs as in \cite{leonard2016lazy}, or in order to analyze a multi-marginal optimal transport problem as in \cite{pass2021monge}. Our approach goes in a different direction. The starting point consists in associating to each optimal transport problem (in a discrete setting) a family of graphs $G^\gamma$, where $\gamma$ varies over all (primal) optimizers, and $G^\gamma$ describes its support. What is crucial for our analysis is the connectivity of the graph $G$ obtained as union of all graphs $G^\gamma$. This allows us to characterize uniqueness (up to translation) of the optimizer to the dual transport problem, in the sense that uniqueness holds if and only if $G$ is connected. On the other hand, when connectivity fails, we can describe in a simple way the set of all dual optimizers by starting with one primal optimizer $\gamma$ of the original transport problem. This is achieved by decomposing the optimal transport problem in subproblems (corresponding to the connected components of the graph $G^\gamma$). For every subproblem, the corresponding graph is connected and a unique dual optimizer is obtained. From the dual optimizers for the subproblems, we can then determine the set of all dual optimizers for the original problem.
A second contribution of this paper regards the approximation by entropic regularization. Notably, by adding an entropic penalization term to the transport problem, one obtains a strictly convex problem such that both primal and dual problem admit a unique solution. The entropic transport problem gained popularity because of its computational tractability, e.g. via the Sinkhorn algorithm of Cuturi \cite{cuturi2013sinkhorn}, and since it provides an approximation of the original transport problem, in the sense that the optimal costs converge and that the optimizers of the regularized problems converge to an optimizer of the original problem, see \cite{NutzWieselConvergenceDual,ComputationalOT}. In a discrete setting, Cominetti and San Mart\'in \cite{CominettiSanMartin} show that the limit of the dual optimizers of the regularized problems is a specific dual optimizer for the original problem and in \cite{weed2018entropic_convergence} it is shown that the convergence is exponentially fast. The limit optimizer is called centroid because it is characterized geometrically as a particular ``central'' point of the convex set of all optimizers. However, the computation is complex, as it is necessary to solve several nested convex optimization problems. Our contribution in this direction is twofold. First of all, we provide a simple algorithm to compute the centroid. Further, by leveraging on the previously exposed results, we can describe the set of all dual optimizers of the original problem, starting from the unique primal optimizers of the entropic ones. Given the efficiency in computing the latter, this provides a tractable way to find all dual transport optimizers.
Our last main contribution concerns an application to static games with a continuum of agents as introduced by Aumann \cite{AumannContinuum1964,aumann1966continuum}. In these games, agents with different types choose among a set of actions to minimize their costs which depend on their own type and action as well as on the actions of all other agents (mean-field interaction). Existence of (Cournot-Nash) equilibria has then been established by Schmeidler~\cite{SchmeidlerStaticContinuum}, Mas-Colell~\cite{MasColellStaticContinuum} and Khan~\cite{khan1989cournot}. However, relying on classical game theory, no further results regarding for example uniqueness or characterization of equilibria had been obtained until the work of Blanchet and Carlier~\cite{BlanchetCN}. The authors there introduced a class of games with separable cost functions, where equilibria can be characterized by minimizing a cost function that includes an optimal transport problem, and proposed a uniqueness criterion. In this work we consider a Stackelberg version of this game, where in addition a principal is participating to the game, setting up some cost to be paid by the agents according to their action, and at the same time facing a cost that depends both on this and on the distribution of actions of the agents. Relying on the connection of these games with optimal transport established in \cite{BlanchetCN,BlanchetCNFinite}, we find conditions to ensure existence of equilibria. Interestingly, the optimal choice of costs for the principal corresponds to finding a dual optimizer to an optimal transport problem. We can therefore apply the results illustrated above to describe the optimal strategies of the principal. We conclude by investigating whether entropic regularization gives nearly optimal solutions and providing a numerical example. \\
\noindent{\bf Organization of the rest of the paper.} In Section~\ref{sec:prelim} we recall the optimal transport problem, the relevant graph theoretic notions, and important results from both optimal transport and graph theory. In Section~\ref{sec:graph_for_OT} we introduce the graph associated to the optimal transport problem and prove the first two main results of the paper: the uniqueness criterion and the characterization of the set of all optimizers. In Section~\ref{sec:entropic} we turn to the entropic regularization and provide the characterization of the limit of the regularized dual optimizers. Finally, in Section~\ref{sec:SCNE} we consider the game between a continuum of agents and a principal, describe existence results and connect the optimization problem of the principal to the problem of finding all dual optimizers for certain optimal transport problems. We conclude the section by providing approximation arguments and a numerical illustration.
\section{Preliminaries.} \label{sec:prelim} In this section we recall some fundamental results in optimal transport and in graph theory, that will be used throughout the paper.
\subsection{Finite Optimal Transport.}\label{sect.fOT} Let $\mu$ and $\nu$ be two discrete probability measures on $\mathbb{R}^d$, with finite supports $\mathcal{X}$ and $\mathcal{Y}$ having cardinality $n_\mathcal{X}$ and $n_\mathcal{Y}$, respectively. For a function $c:\mathcal{X}\times \mathcal{Y} \to\mathbb{R}_+$, the optimal transport problem between $\mu$ and $\nu$ with respect to the cost $c$ is given by \begin{equation}\label{eq.OT} \text{OT}(\mu,\nu, c) = \inf_{\gamma \in \Pi(\mu, \nu)} \int c(x,y)\gamma(dx,dy), \end{equation} where $\Pi(\mu, \nu)\subseteq\mathcal{P}(\mathcal{X}\times\mathcal{Y})$ is the set of probability measures on $\mathcal{X}\times\mathcal{Y}$ with first marginal equal to $\mu$ and second marginal equal to $\nu$. An element $\gamma$ of $\Pi(\mu,\nu)$ is called a coupling of $\mu$ and $\nu$, and is called an \emph{optimal coupling}, or \emph{primal optimizer}, if it is an optimizer for problem \eqref{eq.OT}. The associated dual optimization problem reads as \begin{align}\label{eq.DOT} \text{DOT} (\mu, \nu, c) = \sup \Big\{ \textstyle{ \int_\mathcal{X} \varphi d\mu + \int_\mathcal{Y}\psi d\nu : \varphi:\mathcal{X}\to\mathbb{R}, \psi:\mathcal{Y}\to\mathbb{R}},\ \varphi(x) + \psi(y) \le c(x,y) \, \forall x \in \mathcal{X}, y \in \mathcal{Y}\Big\}. \end{align} A pair $(\varphi,\psi)$ satisfying the constraints in \eqref{eq.DOT} is called \emph{feasible}, and referred to as \emph{dual optimizer} if it is an optimizer for problem \eqref{eq.DOT}. Since the finite optimal transport problem is a linear optimization problem, we immediately see that both the set of all primal optimizers and the set of all dual optimizers are convex. We refer the reader to the manuscript of Villani~\cite{VillaniOldAndNew} for a thorough exposition of the optimal transport theory. We summarize below some of the crucial results about problems \eqref{eq.OT} and \eqref{eq.DOT} that will be useful for later reference. For this, we recall that a set $\Gamma\subseteq \mathcal{X}\times\mathcal{Y}$ is called \emph{$c$-cyclically monotone} if, for any $N\in\mathbb{N}$ and any collection $(x_1,y_1),\ldots,(x_N,y_N)\in\Gamma$, the inequality \[ \sum_{i=1}^N c(x_i,y_i)\leq \sum_{i=1}^N c(x_i,y_{i+1}) \] holds, with the convention $y_{N+1}=y_1$. A coupling $\gamma\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})$ is said to be \emph{$c$-cyclically monotone} if it is concentrated on a $c$-cyclically monotone set.
\begin{theorem}[\cite{VillaniOldAndNew}, Theorem 5.10, Remark 5.12]\label{thm.510} In the above discrete setting, we have: \begin{itemize} \item[(i)] duality holds, i.e.\ $\text{OT} (\mu, \nu, c)=\text{DOT} (\mu, \nu, c)$; \item[(ii)] both the primal problem \eqref{eq.OT} and the dual problem \eqref{eq.DOT} admit solutions; \item[(iii)] there is a $c$-cyclically monotone set $\Gamma\subseteq \mathcal{X}\times\mathcal{Y}$ such that, for $\gamma\in\Pi(\mu,\nu)$, the following are equivalent: \begin{itemize} \item[1.] $\gamma$ is optimal for \eqref{eq.OT}; \item[2.] $\gamma$ is concentrated on $\Gamma$; \item[3.] $\gamma$ is $c$-cyclically monotone; \end{itemize} \item[(iv)] let $\gamma\in\Pi(\mu,\nu)$, and $(\varphi,\psi)$ be a feasible pair for the dual problem, then $\gamma$ and $(\varphi,\psi)$ are optimal solutions for the primal resp. dual problem if and only if they are complementary, i.e. \[ \varphi(x)+\psi(y)=c(x,y)\, \gamma\text{-a.s.}; \] \item[(v)] the union of the supports of all primal optimizers is the smallest $c$-cyclically monotone set contained in $\mathcal{X}\times\mathcal{Y}$ and such that all primal optimizers are concentrated on it. \end{itemize} \end{theorem}
\begin{remark}[uniqueness] \label{remark:uniqueness} It is immediate to see that if $(\varphi, \psi)$ is a dual optimizer, then also the pair obtained by translation, $(\varphi + a, \psi - a)$, $a\in\mathbb{R}$, is a dual optimizer. One of the main results of the present paper consists in characterizing all dual optimizers, and for this we decide to adopt a normalization assumption. In light of this, we say that the dual optimizer is \emph{unique} if it is unique up to translation.
$\diamond$ \end{remark}
\subsection{Graph Theory.} \label{sec:GT} A graph is a pair $G=(V,E)$ of two sets, such that $V \neq \emptyset$ is a finite set and $E\subseteq [V]^2$, where $[V]^2$ is the set of all two-element subsets of $V$. Any element of $V$ is called a vertex. Moreover, we say that $e=\{v,w\}\in E$ is an edge and $v$ and $w$ are the end vertices of $e$. A graph $G'=(V',E')$ with $V' \subseteq V$ and $E' \subseteq E$ is called a subgraph of $G$, and in this case we write $G' \subseteq G$. We say that a graph $G$ is maximal with respect to some property if there is no graph $H\neq G$ with the same property and such that $G\subseteq H$. Given a graph $G=(V,E)$ and a set $U \subseteq V$, the induced subgraph $G[U]:=(U,E')\subseteq G$ is the graph such that $E'=\{\{v,w\} \in E: v,w \in U\}$, i.e. it contains all edges whose both end vertices lie in $U$.
A \emph{path} in $G=(V,E)$ is a sequence of vertices $P=v_0 \ldots v_l$ ($l \ge 0$) such that all $v_i$ are distinct and $\{v_i,v_{i+1}\} \in E$ for all $i \in \{0, \ldots, l-1\}$. We call $v_0$ and $v_l$ the end vertices of the path $P$ and say that $P$ joins the vertices $v_0$ and $v_l$. Note that we allow paths of length zero, i.e. consisting of one vertex only. We say that a graph $G=(V,E)$ is \emph{connected} if any two of its vertices are linked by a path. We say that a set $U\subseteq V$ is connected in $G$ if the induced subgraph $G[U]$ is connected.
\begin{prop}[\cite{DiestelGT}, Proposition 1.4.1]\label{lemma.ordering} Let $G=(V,E)$ be a connected graph and let $v \in V$ be an arbitrary vertex. We can order the vertices of $G$ as $v_1, \ldots, v_n$ such that $v=v_1$ and $G[\{v_1, \ldots, v_i\}]$ is connected for all $i \in \{2, \ldots, n\}$. \end{prop}
A maximal connected subgraph of $G$ is a \emph{component} of $G$. We highlight that components are induced subgraphs of $G$, that any component can be identified by its vertex set, and that the vertex sets $V_1, \ldots, V_N$ of all components of $G$ partition the vertex set of $G$. Hence, with a slight abuse of notation, we will say that $V_1, \ldots, V_N$ are the components of $G$. We say that a graph $G=(V,E)$ is \emph{bipartite} if we can partition the vertex set into two classes $W_1$ and $W_2$ such that every edge has one end vertex in $W_1$ and the other one in $W_2$. If $G$ is a bipartite graph and $e=\{w_1,w_2\}\in E$, then we write $e=[w_1,w_2]$ to indicate that $w_1 \in W_1$ and $w_2 \in W_2$. A \emph{cycle} is a sequence $v_0 v_1\ldots v_{l-1} v_0$ such that $v_0\ldots v_{l-1}$ is a path and $\{v_{l-1},v_0\} \in E$. A connected graph without cycles is called a \emph{tree}.
\begin{example} Figure~\ref{fig:graphExamples} shows examples of graphs: The graph in Figure~\ref{fig:bipartite} is bipartite graph with vertex set $V=\{x_1, x_2, y_1,y_2\}$ and edge set $E=\{ [x_1,y_1], [x_1, y_2], [x_2, y_1], [x_2,y_2]\}$. It is bipartite with classes $W_1=\{x_1,x_2\}$ and $W_2=\{y_1,y_2\}$. Moreover, it is connected since there is a path from every vertex to every other vertex. For example, a path from $x_1$ to $y_1$ is given by $x_1y_1$ and a path from $x_1$ to $x_2$ is given by $x_1y_1x_2$. The graph in Figure~\ref{fig:components} is not connected since there is no path from $5$ to $1$. Instead, it has two components $V_1 = \{1,2,3,4\}$ and $V_2=\{5,6,7\}.$ The graph is not a tree because it contains the cycle $2,3,4$, nor bipartite because we cannot partition $\{2,3,4\}$ into two sets such that any edge goes from one set to the other. Finally, the graph in Figure~\ref{fig:tree} is a tree because it is connected and does not contain any cycles.
\begin{figure}
\caption{Examples of graphs}
\label{fig:bipartite}
\label{fig:components}
\label{fig:tree}
\label{fig:graphExamples}
\end{figure}
$\diamond$ \end{example}
\section{Connectivity and Transport Optimizers.} \label{sec:graph_for_OT} In this section we introduce graphs that allow to derive structural results on the optimal transport problem. In particular, we provide a necessary and sufficient criterion for uniqueness of the dual optimizer and a characterization of all dual optimizers given one primal optimizer. For this, we fix $\mu,\nu$ and $c$ as in Section~\ref{sect.fOT}, and consider the corresponding primal and dual problems \eqref{eq.OT} and \eqref{eq.DOT}.
Let $\gamma$ be a primal optimizer of $\text{OT}(\mu, \nu, c)$. Then we define the bipartite graph $G^\gamma =(V^\gamma, E^\gamma)$ with vertex set $V^\gamma = \mathcal{X} \cup \mathcal{Y}$ (with partition $W_1=\mathcal{X}$ and $W_2=\mathcal{Y}$) and edge set \begin{align*} E^\gamma = \Big\{ [x,y] : \gamma(x,y)>0 \Big\}. \end{align*} Moreover, we consider the bipartite graph $G=(V,E)$ with vertex set $V= \mathcal{X}\cup\mathcal{Y}$ and edge set given by the union of all $E^\gamma$ with $\gamma$ optimal for \eqref{eq.OT}, that is \begin{equation}\label{eq.setE} E = \Big\{ [x,y] : \gamma(x,y)>0 \text{ for some primal optimizer $\gamma$}\Big\}. \end{equation} We call $G$ the graph associated to $\text{OT}(\mu,\nu,c)$.
\begin{remark}\label{rem.supp} Note that the set $E$ in \eqref{eq.setE} corresponds to the set described in Theorem~\ref{thm.510}-(v). Moreover, by Theorem~\ref{thm.510}-(iii),(v), $\gamma \in \Pi(\mu, \nu)$ is a primal optimizer if and only if \[ \gamma(x,y)=0 \quad \text{for all } [x,y] \notin E. \]
$\diamond$ \end{remark}
\begin{example} \label{ex1} Let us illustrate the definitions as well as the differences between the sets $G^\gamma$ and $G$ introduced above in a simple example. Consider $\mathcal{X} = \{1,2,3\}$ and $\mathcal{Y} = \{1,2,3,4\}$ and the measures \[ \mu = \tfrac{1}{4} \delta_{\{1\}} + \tfrac{1}{4} \delta_{\{2\}} + \tfrac{1}{2} \delta_{\{3\}} \quad \text{and} \quad \nu = \tfrac{1}{10} \delta_{\{1\}} + \tfrac{2}{5} \delta_{\{2\}} + \tfrac{1}{5} \delta_{\{3\}} + \tfrac{3}{10} \delta_{\{4\}}. \] Set $F=\left\{ (1,1), (1,2), (2,1), (2,2), (2,3), (3,2), (3,3), (3,4) \right\}$ and define $ c=1_{F^c}$. By definition of $c$, we have $\text{OT}(\mu, \nu,c)\ge 0$. Since $\gamma: \mathcal{X} \times \mathcal{Y} \rightarrow [0,1]$ with \[ \gamma(1,1) = \tfrac{1}{10}, \, \gamma(1,2) = \tfrac{3}{20}, \, \gamma(2,2) = \tfrac{1}{4}, \, \gamma(3,3) = \tfrac{1}{5}, \, \gamma(3,4) = \tfrac{3}{10} \] is feasible and has cost $0$, we have $\text{OT}(\mu, \nu, c) =0$. The graph $G^\gamma$ is depicted in Figure~\ref{fig:ex1_Ggamma}. It is clear that any feasible coupling $\gamma$ is optimal if and only if, for any $(x,y) \in \mathcal{X} \times \mathcal{Y}$, $\gamma(x,y) >0 \Rightarrow (x,y) \in F$. Moreover, one can easily see that there exists a coupling with support $F$, which implies $E=F$. The graph $G$ is depicted in Figure~\ref{fig:ex1_G}. Note that $G^\gamma$ is a proper subgraph of $G$, and that it is not connected although $G$ is.
\begin{figure}
\caption{The graphs $G^\gamma$ and $G$ from Example~\ref{ex1}}
\label{fig:ex1_Ggamma}
\label{fig:ex1_G}
\label{fig:ex1}
\end{figure}
$\diamond$ \end{example}
\begin{lemma} \label{lemma:GvsGgamma} There is a primal optimizer $\gamma^\ast$ such that $G=G^{\gamma^\ast}$. \end{lemma}
\begin{proof} Since the graph $G$ has finitely many vertices, it also has finitely many edges $e_1, \ldots, e_M$, with $e_m=[x_m,y_m]$, $x_m\in\mathcal{X}$ and $y_m\in\mathcal{Y}$, for each $m\in\{1,\ldots,M\}$. By construction of $G$, there is for each $m \in \{1, \ldots, M\}$ a primal optimizer $\gamma^m$ such that $\gamma^m(x_m,y_m)>0$. Since the set of all primal optimizers is convex, the coupling defined as \[ \gamma^\ast = \frac{1}{M} \sum_{m=1}^M \gamma^m \] is a primal optimizer as well. Moreover, it satisfies \[ \gamma^\ast(x_m,y_m) \ge \frac{1}{M} \gamma^m(x_m,y_m)>0\quad \text{for all } m \in \{1, \ldots, M\}. \] Hence, $G^{\gamma^\ast}=G$. \end{proof}
Next we show that the graph $G$ also characterizes dual optimizers.
\begin{prop} \label{prop:Gcharacterizes} Let $(\varphi, \psi)$ be feasible for the dual problem. Then it is a dual optimizer if and only if \begin{equation}\label{eq.ug_E} \varphi(x) + \psi(y) = c(x,y) \quad \text{for all } [x,y] \in E. \end{equation} \end{prop}
\begin{proof} Let $\gamma^\ast$ be the optimizer from Lemma~\ref{lemma:GvsGgamma}. Then $G=G^{\gamma^\ast}$ and the support of $\gamma^*$ is $E$, i.e. the union of the supports of all primal optimizers. Therefore, if $\varphi(x) + \psi(y) = c(x,y)$ for all $[x,y] \in E$, then $\varphi(x) + \psi(y) = c(x,y)$ $\gamma^\ast-$a.s. Hence, $\gamma^\ast$ and $(\varphi, \psi)$ are complementary, which by Theorem~\ref{thm.510} means that $(\varphi, \psi)$ is a dual optimizer.
Vice versa, if $(\varphi, \psi)$ is a dual optimizer, then, by Theorem~\ref{thm.510}, $\gamma^\ast$ and $(\varphi,\psi)$ have to be complementary. This exactly means that $\varphi(x) + \psi(y) = c(x,y)$ for all $[x,y] \in E$. \end{proof}
\subsection{Uniqueness.} In this section, we provide sufficient conditions for the dual optimizer to be unique (meaning uniqueness up to translation, see Remark~\ref{remark:uniqueness}).
\begin{prop} \label{prop:unique} Let $G$ be the graph associated to $\text{OT}(\mu,\nu,c)$. Then: \begin{itemize} \item[(i)] if for all non-empty subsets $X\subsetneq \mathcal{X}$ and $Y \subsetneq \mathcal{Y}$ we have $\mu(X)\neq \nu(Y)$, then the graph $G$ is connected; \item[(ii)] if the graph $G$ is connected, then the dual optimizer is unique. \end{itemize} In particular, the dual optimizer is unique whenever for all non-empty subsets $X\subsetneq \mathcal{X}$ and $Y \subsetneq \mathcal{Y}$ we have $\mu(X)\neq \nu(Y)$. \end{prop}
The last statement of the proposition has already been described by Staudt et al. \cite{StaudtNonUniqueness}. The sufficient condition in (ii) is new, and we will see later in Corollary~\ref{cor:UniqueMeansConnected} that is actually necessary.
\begin{proof} (i): We are going to show that, for any primal optimizer $\gamma$, the graph $G^\gamma$ is connected. This in turn implies the claim, by considering $\gamma^\ast$ from Lemma~\ref{lemma:GvsGgamma}. Assume that there is a primal optimizer $\gamma$ such that $G^\gamma$ is not connected. In this case there are two non-empty sets $U_1$ and $U_2$ such that $U_1 \cap U_2 = \emptyset$, $U_1 \cup U_2 = \mathcal{X} \cup \mathcal{Y}$ and that there is no edge from $U_1$ to $U_2$ in $E^\gamma$, i.e., for all $(e_1,e_2) \in E^\gamma$ we have that either $e_1, e_2 \in U_1$ or $e_1,e_2 \in U_2$. By construction of $G^\gamma$, we can write $U_1 = X_1 \cup Y_1$ and $U_2=X_2 \cup Y_2$ for some subsets $X_1, X_2 \subseteq \mathcal{X}$ and $Y_1, Y_2 \subseteq \mathcal{Y}$. Note that all subsets $X_1,X_2,Y_1,Y_2$ are non-empty. Indeed, suppose by contradiction that for example $Y_1=\emptyset$. Then we would have $X_1=U_1\neq\emptyset$. Since $\mu(x)>0$ for all $x\in\mathcal{X}$, then in particular for any $x\in X_1$ there is $y\in\mathcal{Y}$ such that $\gamma(x,y)>0$. By the above decomposition of $E^\gamma$ we would then need to have $y\in U_1$ as well, that is $y\in Y_1$, which leads to the desired contradiction. Similarly, we would find a contradiction by assuming any of the other sets to be empty.
Now note that, since there are no edges from $U_1$ to $U_2$, then \[ \gamma(x,y)=0 \text{ for all } (x,y) \in(X_1 \times Y_2) \cup (X_2 \times Y_1). \] Together with the fact that $\gamma\in\Pi(\mu,\nu)$, this yields \begin{align*} \mu(X_1) = \sum_{x \in X_1} \mu(x) = \sum_{x \in X_1} \sum_{y \in \mathcal{Y}} \gamma(x,y) = \sum_{x \in X_1} \sum_{y \in Y_1} \gamma(x,y) = \sum_{y \in Y_1} \sum_{x \in X_1} \gamma(x,y) = \sum_{y \in Y_1} \nu(y) = \nu(Y_1), \end{align*} which is a contradiction.
(ii): To prove uniqueness up to translation, we fix an arbitrary $x_0 \in \mathcal{X}$ and show that there is exactly one dual optimizer with $\varphi(x_0)=0$. By Proposition~\ref{lemma.ordering}, there is an ordering $v_1, v_2, \ldots, v_{n_\mathcal{X}+n_\mathcal{Y}}$ of the vertices of $G$, with $v_1=x_0$ and such that $G[\{v_1, \ldots, v_i\}]$ is connected for all $i \in \{2, \ldots, n_\mathcal{X}+n_\mathcal{Y}\}$. Consider any dual pair $(\varphi,\psi)$ with $\varphi(v_1)=0$. Since $G[\{v_1, v_2\}]$ is connected, then $[v_1,v_2]\in E$ and, by Proposition~\ref{prop:Gcharacterizes}, we have that $\psi(v_2)=c(v_1,v_2)-\varphi(v_1)$. In the same way, we argue that since $G[\{v_1, v_2, v_3\}]$ is connected, then either $[v_1,v_3]\in E$ or $[v_3,v_2]\in E$. In the first case, $v_3\in\mathcal{Y}$ and Proposition~\ref{prop:Gcharacterizes} yields $\psi(v_3)=c(v_1,v_3)-\varphi(v_1)$, while in the second case we have $v_3\in\mathcal{X}$ and $\varphi(v_3)=c(v_3,v_2)-\psi(v_2)$. By iterating this process, we see that all values $\varphi(x), x\in\mathcal{X}$, and $\psi(y), y\in\mathcal{Y}$, are uniquely identified. Hence, the dual optimizer $(\varphi, \psi)$ is unique up to translation. \end{proof}
Note that the proof of the second part of Proposition~\ref{prop:unique} also describes a way to determine the unique optimizer given the graph $G$. Namely, it suffices to find an ordering $v_1, \ldots, v_{n_\mathcal{X} + n_\mathcal{Y}}$ of the vertices of $G$ such that $v_1 = x_0$ and $G[\{v_1, \ldots, v_i\}]$ is connected for all $i \in \{2, \ldots, n_\mathcal{X}+n_\mathcal{Y}\}$, and thereafter successively set the values of $\varphi$ resp. $\psi$ according to \eqref{eq.ug_E}.
\begin{continueexample}{ex1} Since $G$ is connected, there is a unique dual optimizer. Note that $\mu(\{1,2\}) = \nu(\{1,2\})$, which means that the condition that $\mu(X)\neq\nu(Y)$ for all $X \subsetneq \mathcal{X}$ and $Y \subsetneq \mathcal{Y}$ is not necessary for uniqueness. Moreover, as explained before, we can easily compute the dual optimizer: an ordering satisfying the desired conditions is $1_\mathcal{X}, 1_\mathcal{Y}, 2_\mathcal{X}, 2_\mathcal{Y}, 3_\mathcal{X}, 3_\mathcal{Y}, 4_\mathcal{Y}$, where indexes here are to show the belonging set. Hence, following the method described in the proof of Proposition~\ref{prop:unique}-(ii) and starting with $\varphi(1)=0$, we find that $(\varphi,\psi)$ with $\varphi\equiv 0$ and $\psi\equiv 0$ is the unique dual optimizer.
$\diamond$ \end{continueexample}
For a connected graph $G$, the unique dual optimizer immediately characterizes the graph $G$. Later we will see that a weaker statement holds also for general (not necessarily connected) graphs. Namely, we can show that there are some dual optimizers that satisfy \eqref{eq:CharacterizationGConnected} below, while there are always some dual optimizers for which $E \subsetneq \{[x,y]: x\in\mathcal{X}, y\in\mathcal{Y}, c(x,y) = \varphi(x) + \psi(y)\}$ (see Corollary~\ref{cor:relationEandPhi}).
\begin{prop} \label{prop:ChracterizationGconnected} Assume that the graph $G$ associated to $\text{OT}(\mu,\nu,c)$ is connected and let $(\varphi, \psi)$ be the unique dual optimizer. Then \begin{equation} \label{eq:CharacterizationGConnected} E = \{[x,y]: x\in\mathcal{X}, y\in\mathcal{Y}, c(x,y) = \varphi(x) + \psi(y)\}. \end{equation} \end{prop}
\begin{proof} By Proposition~\ref{prop:Gcharacterizes}, we have that $E \subseteq \{[x,y]: x\in\mathcal{X}, y\in\mathcal{Y}, c(x,y) = \varphi(x) + \psi(y)\}$. To prove the other inclusion, assume by contradiction that there are $\hat x\in\mathcal{X}$ and $\hat y\in\mathcal{Y}$ such that $[\hat{x},\hat{y}] \notin E$ while $c(\hat{x}, \hat{y}) = \varphi(\hat{x}) + \psi(\hat{y})$. We are now going to construct a primal optimizer $\hat{\gamma}$ such that $\hat{\gamma}(\hat{x},\hat{y})>0$, which yields the desired contradiction. Since $G$ is connected and bipartite, there is a path $x_1 y_1 x_2 y_2 \ldots x_l y_l$ from $\hat{x}=x_1$ to $\hat{y}=y_l$ with $x_1,\ldots x_l\in\mathcal{X}$ and $y_1,\ldots,y_l\in\mathcal{Y}$. Now let $\gamma$ be a primal optimizer such that $G=G^\gamma$. Note that this implies $\gamma(x_j,y_j)>0$ for $j\in\{1,\ldots,l\}$, and $\gamma(x_j,y_{j-1})>0$ for $j\in\{2,\ldots,l\}$. Then we set \[ \delta = \min_{j \in \{1, \ldots, l\}} \gamma(x_j,y_j)\ \wedge \min_{j \in \{2, \ldots, l\}} \big(1- \gamma(x_{j},y_{j-1})\big). \] Note that $\delta \in(0,1)$. We define \[ \hat{\gamma}(x,y) =\begin{cases} \gamma(x,y) - \delta &\text{if } (x,y) = (x_j,y_j) \text{ for some } j \in \{1, \ldots, l\} \\ \gamma(x,y) + \delta &\text{if } (x,y) = (x_j, y_{j-1}) \text{ for some } j \in \{1, \ldots, l\} \\ \gamma(x,y) &\text{else}, \end{cases} \] with the convention $y_0:=y_l$.
Now we show that $\hat{\gamma} \in \Pi(\mu, \nu)$. First, note that $\hat{\gamma}(x,y) \in [0,1]$ for all $(x,y) \in \mathcal{X} \times \mathcal{Y}$ by choice of $\delta$. Now, let $x \in \mathcal{X} \setminus \{x_1, \ldots, x_l\}$. Then \[ \sum_{y \in \mathcal{Y}} \hat{\gamma}(x,y) = \sum_{y \in \mathcal{Y}} \gamma(x,y) = \mu(x). \] Now assume that $x= x_j$ for some $j \in \{1, \ldots, l\}$. Then we obtain \begin{align*} \sum_{y \in \mathcal{Y}} \hat{\gamma}(x_j,y) = \sum_{y \in \mathcal{Y} \setminus \{y_j,y_{j-1}\}}\gamma(x_j,y) + \gamma(x_j,y_j) - \delta + \gamma(x_j, y_{j-1}) + \delta = \sum_{y \in \mathcal{Y}} \gamma(x_j,y)=\mu(x_j). \end{align*} Analogous computations for $y \in \mathcal{Y}$ show that $\hat{\gamma} \in \Pi(\mu,\nu)$.
To show that $\hat{\gamma}$ and $(\varphi, \psi)$ are complementary, it suffices to note that, for all $(x,y) \neq (\hat{x},\hat{y})$, we have \[ \hat{\gamma}(x,y) >0 \Rightarrow \gamma(x,y) >0. \] Hence, the complementarity of $\gamma$ and $(\varphi, \psi)$, as well as the choice of $\hat{x}$ and $\hat{y}$, yield that $\hat{\gamma}$ and $(\varphi, \psi)$ are complementary. By Theorem~\ref{thm.510}-(iv) we therefore obtain that $\hat{\gamma}$ is a primal optimizer, which yields $[\hat{x}, \hat{y}] \in E$. This is the desired contradiction. \end{proof}
\subsection{Optimal Transport on Connected Components.} In this section we show that the optimal transport problem on $\mathcal{X} \times \mathcal{Y}$ is closely related to the optimal transport problems restricted to the components of the relevant graph. For this we introduce the following notation. Let $X\subseteq\mathcal{X}$ and $Y\subseteq\mathcal{Y}$ be such that $\mu(X) = \nu(Y)$. Then we denote the transport problem restricted to $X\times Y$ by \[
\text{OT}_{X,Y}(\mu,\nu,c):=\text{OT} \left( \frac{1}{\mu(X)} \mu|_{X}, \frac{1}{\nu(Y)} \nu|_{Y}, c|_{X \times Y} \right), \] and use the notation $\text{DOT}_{X,Y}(\mu,\nu,c)$ for its dual problem.
\begin{theorem} \label{thm:Components} Let $\gamma$ be a primal optimizer of $\text{OT}(\mu,\nu,c)$, and let $V_1, \ldots, V_N$ be the connected components of $G^\gamma$, with $V_n=X_n \cup Y_n$ for subsets $X_n \subseteq \mathcal{X}$ and $Y_n\subseteq \mathcal{Y}$ for $n \in \{1, \ldots, N\}$. Then: \begin{itemize} \item[(i)] for all $n,m \in \{1, \ldots, N\}$ with $n \neq m$, $\gamma(X_n\times Y_m) =0$;
\item[(ii)] for all $n \in \{1, \ldots, N\}$, $\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n}$ is a primal optimizer for $\text{OT}_{X_n,Y_n}(\mu,\nu,c)$; \item[(iii)] it holds that \begin{align*} \text{OT}(\mu,\nu, c) = \sum_{n=1}^N \mu(X_n) \text{OT}_{X_n,Y_n}(\mu,\nu,c); \end{align*}
\item[(iv)] if $(\varphi, \psi)$ is an optimizer for the dual problem $\text{DOT}(\mu,\nu, c)$, then, for all $n \in \{1, \ldots, N\}$, $(\varphi|_{X_n}, \psi|_{Y_n})$ is an optimizer for the dual problem $\text{DOT}_{X_n,Y_n}(\mu,\nu,c)$. \end{itemize} \end{theorem}
\begin{proof} (i): This is clear by definition of $G^\gamma$.
(ii): This is an immediate consequence of Theorem 4.6 in Villani~\cite{VillaniOldAndNew}, which asserts, in the current discrete setting, that if $\gamma$ is a primal optimizer and $\gamma'$ is a non-negative measure on $\mathcal{X}\times\mathcal{Y}$ such that $\gamma' \le \gamma$ with $\gamma'(\mathcal{X} \times \mathcal{Y})>0$, then $\gamma'/\gamma'(\mathcal{X} \times \mathcal{Y})$ is optimal for $\text{OT}(\mu',\nu',c)$ where $\mu'$ and $\nu'$ are the marginals of $\gamma'/\gamma'(\mathcal{X} \times \mathcal{Y})$. In our case, for any $n\in\{1,\ldots,N\}$, we choose \[ \gamma' (x,y) = \begin{cases} \gamma(x,y) &\text{if }x \in X_n, y \in Y_n \\ 0 &\text{otherwise} \end{cases} \] and note that, by (i), \[ \gamma'(\mathcal{X} \times \mathcal{Y}) = \gamma(X_n \times Y_n) = \sum_{x \in X_n, y \in Y_n} \gamma(x,y) = \sum_{x \in X_n, y \in \mathcal{Y}} \gamma(x,y) = \mu(X_n)>0. \] Finally, using (i) we have \begin{align*} \mu'(x) &= \frac{1}{\gamma'(\mathcal{X} \times \mathcal{Y})} \sum_{y \in Y_n} \gamma'(x,y) = \frac{1}{\mu(X_n)}\sum_{y \in \mathcal{Y}} \gamma(x,y) = \frac{1}{\mu(X_n)} \mu(x) \quad \text{for } x \in X_n, \\ \nu'(y) &= \frac{1}{\gamma'(\mathcal{X} \times \mathcal{Y})} \sum_{x \in X_n} \gamma'(x,y) = \frac{1}{\nu(Y_n)}\sum_{x \in \mathcal{X}} \gamma(x,y) = \frac{1}{\nu(Y_n)}\nu(y) \quad \text{for } y \in Y_n, \end{align*} which shows that
$\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n}$ is a primal optimizer for $\text{OT} \left( \frac{1}{\mu(X_n)} \mu|_{X_n}, \frac{1}{\nu(Y_n)} \nu|_{Y_n}, c|_{X_n \times Y_n} \right)$.
(iii): By optimality of $\gamma$ and from (i), we have \[
\text{OT}(\mu, \nu, c)=\int_{\mathcal{X} \times \mathcal{Y}} c\ d\gamma = \sum_{n=1}^N \int_{X_n\times Y_n} c|_{X_n \times Y_n} \ d\gamma|_{X_n \times Y_n}. \] Then by (ii), for all $n \in \{1, \ldots, N\}$, \[
\frac{1}{\mu(X_n)} \int_{X_n\times Y_n} c|_{X_n \times Y_n}\ d \gamma|_{X_n \times Y_n} = \text{OT}_{X_n,Y_n}(\mu,\nu,c). \] Combining the two equations gives the desired statement.
(iv): To show that $(\varphi|_{X_n}, \psi|_{Y_n})$ is an optimizer for $\text{DOT}_{X_n,Y_n}(\mu,\nu,c)$, by Theorem~\ref{thm.510}-(iv) it suffices to show that $\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n}$ and $(\varphi|_{X_n}, \psi|_{Y_n})$ are complementary. Note that, for all $x \in X_n$ and $y \in Y_n$, we have \[
\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n} (x,y)>0 \Leftrightarrow \gamma(x,y)>0. \] Since $\gamma$ and $(\varphi,\psi)$ are primal resp. dual optimizers, by Theorem~\ref{thm.510}-(iv) they are complementary, thus the statement follows. \end{proof}
Given that for any primal optimizer $\gamma$ we have $E^\gamma \subseteq E$, the above theorem immediately allows us to draw conclusions about the graph $G$, and to characterize all primal optimizers given $G$.
\begin{corollary} \label{cor:Components} Let $G$ be the graph associated to $\text{OT}(\mu,\nu,c)$ and $V_1, \ldots, V_N$ its connected components, with $V_n=X_n \cup Y_n$ for subsets $X_n \subseteq \mathcal{X}$ and $Y_n\subseteq \mathcal{Y}$ for $n \in \{1, \ldots, N\}$. Then: \begin{itemize} \item[(i)] any primal optimizer $\gamma$ satisfies $\gamma(X_n\times Y_m) = 0$ for all $n,m \in \{1, \ldots, N\}$ with $n \neq m$;
\item[(ii)] $\gamma \in \Pi(\mu, \nu)$ is a primal optimizer if and only if $\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n}$ is optimal for the problem $\text{OT}_{X_n,Y_n}(\mu,\nu,c)$ for all $n=1, \ldots, N$;
\item[(iii)] any dual optimizer $(\varphi, \psi)$ satisfies that, for all $n \in \{1, \ldots, N\}$, the vector $(\varphi|_{X_n}, \psi|_{Y_n})$ is an optimizer for $\text{DOT}_{X_n,Y_n}(\mu,\nu,c)$. \end{itemize} \end{corollary}
\begin{proof} (i) and (iii) are immediate consequences of Theorem~\ref{thm:Components}, and so is the ``only if" implication in (ii). To see the converse implication, let $\hat{\gamma}$ be a primal optimizer such that $G^{\hat{\gamma}}=G$, which exists by Lemma~\ref{lemma:GvsGgamma}. Then Theorem~\ref{thm:Components}-(iii) applied to $\hat\gamma$ gives \begin{align*} \text{OT}(\mu,\nu, c) = \sum_{n=1}^N \mu(X_n) \text{OT}_{X_n,Y_n}(\mu,\nu,c). \end{align*}
Now fix any $\gamma\in\Pi(\mu,\nu)$ such that $\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n}$ is optimal for $\text{OT}_{X_n,Y_n}(\mu,\nu,c)$ for all $n\in\{1,\ldots,N\}$. This implies in particular that $\gamma(x,y)=0$ whenever $x \in X_n$ and $y \in Y_m$ with $n \neq m$. Therefore \begin{align*}
\int_{\mathcal{X}\times\mathcal{Y}} c\ d\gamma = \sum_{n=1}^N \int_{X_n \times Y_n}c|_{X_n \times Y_n}\ d\gamma|_{X_n \times Y_n} = \sum_{n=1}^N \mu(X_n) \text{OT}_{X_n,Y_n}(\mu,\nu,c)= \text{OT}(\mu,\nu,c), \end{align*} which shows that $\gamma$ is indeed optimal. \end{proof}
\subsection{Characterization of all Dual Optimizers.} Relying on the results obtained in the previous sections, we can now characterize the set of all dual optimizers.
\begin{theorem} \label{thm:SetOfAllDuals} Let $\gamma$ be a primal optimizer for $\text{OT}(\mu,\nu,c)$, and let $V_1, \ldots, V_N$ be the connected components of $G^\gamma$, with $V_n=X_n \cup Y_n$ for subsets $X_n \subseteq \mathcal{X}$ and $Y_n\subseteq \mathcal{Y}$ for $n \in \{1, \ldots, N\}$. Let $(\varphi_n,\psi_n)$ be the unique optimizer for $\text{DOT}_{X_n,Y_n}(\mu, \nu, c)$ for $n \in \{1, \ldots, N\}$. Then a pair $(\varphi, \psi)$ is an optimizer for $\text{DOT}(\mu,\nu,c)$ if and only if there are constants $\alpha_1, \alpha_2, \ldots, \alpha_N \in \mathbb{R}$ such that: \begin{itemize} \item[(i)] for all $x \in X_n$, we have $\varphi(x) = \varphi_n(x) + \alpha_n + \alpha_1$ if $n\neq 1$, and $\varphi(x)=\varphi_1(x)+\alpha_1$ if $n=1$; \item[(ii)] for all $y \in Y_n$, we have $\psi(y) = \psi_n(y) - \alpha_n - \alpha_1$ if $n\neq 1$, and $\psi(y)=\psi_1(y)-\alpha_1$ if $n=1$; \item[(iii)] for all $n, m \in \{1 \ldots, N\}$ with $n \neq m$, we have \begin{equation}\label{eq.anm} - \min_{x \in X_m, y \in Y_n} c(x,y) -\varphi_m(x)- \psi_n(y)\le \alpha_n - \alpha_m \le \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x) -\psi_m(y). \end{equation} \end{itemize} \end{theorem}
\begin{proof}
We start by showing that, for any dual optimizer $(\varphi, \psi)$, constants $\alpha_1, \alpha_2, \ldots, \alpha_N$ satisfying (i)-(iii) exist. By Theorem~\ref{thm:Components}-(iii) we know that, for every $n\in\{1,\ldots,N\}$, $\varphi|_{X_n}$ and $\psi|_{Y_n}$ have to be optimizers of $\text{DOT}_{X_n,Y_n}(\mu, \nu,c)$. Since the optimizers for these problems are unique up to translation, there is exactly one set of constants $\alpha_1, \alpha_2,\ldots, \alpha_N$ such that the first two conditions are satisfied. Hence, it remains to prove that the constants $\alpha_1, \alpha_2, \ldots, \alpha_N$ given in this way satisfy (iii). For this, fix any $n,m \in \{1, \ldots, N\}$ with $n\neq m$. Let us first consider any $x \in X_n$ and $y \in Y_m$. Since $(\varphi, \psi)$ is feasible and $(\varphi, \psi)$ satisfies (i) and (ii), then \begin{align*} c(x,y) &\ge \varphi(x) + \psi(y) = \varphi_n(x) + \alpha_n + \psi_m(y) - \alpha_m . \end{align*} Hence, $\alpha_n - \alpha_m \le c(x,y) - \varphi_n(x) - \psi_m(y)$, which proves the second inequality in (iii).
Similarly, let us now consider $x \in X_m$ and $y \in Y_n$. Then \begin{align*} c(x,y) &\ge \varphi(x) + \psi(y) = \varphi_m(x) + \alpha_m + \psi_n(y) - \alpha_n, \end{align*} thus $\alpha_n - \alpha_m \ge - c(x,y) + \varphi_m(x) + \psi_n(y)$, which proves the first inequality in (iii).
We are left to show the converse implication, that is, that given constants $\alpha_1, \ldots, \alpha_N$ satisfying (iii), the pair $(\varphi,\psi)$ defined via (i) and (ii) is a dual optimizer. For this we first prove that $(\varphi, \psi)$ is feasible, and then that it satisfies $c(x,y) = \varphi(x) + \psi(y)$ $\gamma-$a.s.. This means that $\gamma$ and $(\varphi,\psi)$ are complementary, which by Theorem~\ref{thm.510} implies optimality of $(\varphi,\psi)$.
To show that $(\varphi, \psi)$ is feasible, we first note that for $n \in \{1, \ldots, N\}$, $x \in X_n$ and $y \in Y_n$ we have \[ \varphi(x) + \psi(y) = \varphi_n(x)+\psi_n(y) \le c(x,y), \] since $(\varphi_n,\psi_n)$ is an optimizer for $\text{DOT}_{X_n,Y_n}(\mu, \nu, c)$. Now let $n,m \in \{1, \ldots, N\}$ with $n \neq m$, and consider any $x \in X_n$ and $y \in Y_m$. By choice of $\alpha_n$ and $\alpha_m$ we have \[ \alpha_n - \alpha_m \le c(x,y) -\varphi_n (x) - \psi_m(y). \] Hence, \begin{align*} \varphi(x) + \psi(y) &= \varphi_n(x) + \alpha_n + \psi_m(y) -\alpha_m \\ &\le \varphi_n(x) + \psi_m(y) + c(x,y) - \varphi_n(x) - \psi_m(y) = c(x,y). \end{align*} Thus, all in all we proved that $(\varphi, \psi)$ is feasible. It remains to prove that $(\varphi, \psi)$ is complementary to $\gamma$. By Theorem~\ref{thm:Components}-(i) we have that $\gamma(X_n \times Y_m)=0$ for all $n,m \in \{1, \ldots, N\}$ with $n \neq m$. Hence, we only have to check that \[ \gamma(x,y)>0 \Rightarrow c(x,y) = \varphi(x)+\psi(y) \quad \text{for all } n \in \{1, \ldots, N\}, x \in X_n, y \in Y_n. \] By definition of $\varphi$ and $\psi$, this is equivalent to showing that \[ \gamma(x,y)>0 \Rightarrow c(x,y) = \varphi_n(x)+\psi_n(y) \quad \text{for all } n \in \{1, \ldots, N\}, x \in X_n, y \in Y_n. \]
This follows from the fact that $\frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n}$ is optimal for $\text{OT}_{X_n,Y_n}(\mu, \nu, c)$ by Theorem~\ref{thm:Components}-(ii), and $(\varphi_n, \psi_n)$ is optimal for $\text{DOT}_{X_n, Y_n} (\mu, \nu, c)$ by assumption, thus they are complementary, which yields \[
\gamma(x,y)>0 \Leftrightarrow \frac{1}{\mu(X_n)} \gamma|_{X_n \times Y_n} (x,y)>0\Rightarrow c(x,y) = \varphi_n(x)+\psi_n(y) \;\, \text{for all } n \in \{1, \ldots, N\}, x \in X_n, y \in Y_n. \] \end{proof}
\begin{example} \label{ex2} Consider $\mathcal{X} = \{1,2,3,4\}$ and $\mathcal{Y} = \{1,2,3,4,5\}$ and the following measures: \[ \mu = \tfrac{3}{20} \delta_{\{1\}} + \tfrac{3}{20} \delta_{\{2\}} + \tfrac{1}{5} \delta_{\{3\}} + \tfrac{1}{2} \delta_{\{4\}} \quad \text{and} \quad \nu = \tfrac{1}{5} \delta_{\{1\}} + \tfrac{1}{10} \delta_{\{2\}} + \tfrac{1}{5} \delta_{\{3\}} + \tfrac{1}{4} \delta_{\{4\}} + \tfrac{1}{4} \delta_{\{5\}}. \] Set $F_1 =\{ (2,1), (3,3), (3,4)\},\,
F_2 =\{ (1,1), (2,2), (4,3), (4,4), (4,5)\},\,
F_3 = \{(1,2)\}$, and define \[ c(x,y) = 1_{F_2}(x,y)+3\cdot 1_{F_3}(x,y)+2\cdot 1_{(F_1\cup F_2\cup F_3)^c}(x,y). \] Then one finds that \[ \gamma= \tfrac{3}{20}\cdot 1_{\{(1,1)\}}+ \tfrac{1}{20}\cdot 1_{\{(2,1)\}}+ \tfrac{1}{10}\cdot 1_{\{(2,2)\}}+ \tfrac{1}{5}\cdot 1_{\{(3,3)\}}+ \tfrac{1}{4}\cdot 1_{\{(4,4), (4,5)\}} \] is a primal optimizer for $\text{OT}(\mu, \nu, c)$. The associated graph $G^\gamma$ is depicted in Figure~\ref{fig:ex2} and has three components with $X_1 = \{1,2\}, Y_1 = \{1,2\}, X_2 = \{3\}, Y_2 = \{3\}, X_3= \{4\}, Y_3 = \{4,5\}$. \begin{figure}
\caption{The graph $G^\gamma$ from Example~\ref{ex2} and Example~\ref{ex3} }
\label{fig:ex2}
\end{figure} The dual optimizers for the connected components can be immediately computed as explained after Proposition~\ref{prop:unique}, and they read as \begin{align*} &\varphi_1(1)=0, \, \varphi_1(2)= -1, \, \psi_1(1)=1, \, \psi_1(2)=2 \\ &\varphi_2(3)=0, \, \psi_2(3)=0 \\ &\varphi_3(4)=0, \, \psi_3(4)=1, \, \psi_3(5)=1. \end{align*} Now the constraints \eqref{eq.anm} on $\alpha=(\alpha_1, \alpha_2, \alpha_3)$ are \[ 0 \le \alpha_1 - \alpha_2 \le 2, \; 0 \le \alpha_1 - \alpha_3 \le 1 \text{ and } -1 \le \alpha_2 - \alpha_3 \le -1. \] Hence, we obtain that any suitable $\alpha$ has to satisfy $\alpha_1 \in \mathbb{R}$, $\alpha_2 \in [\alpha_1 - 2, \alpha_1]$ and $\alpha_3 = \alpha_2 + 1$. By Theorem~\ref{thm:SetOfAllDuals}, these choices describe all dual optimizers.
$\diamond$ \end{example}
Given this result, we can formulate the announced statement regarding the relation of $E$ and the set $\{[x,y]: x\in\mathcal{X}, y\in\mathcal{Y}, c(x,y) = \varphi(x) + \psi(y)\}$ for not necessarily connected graphs.
\begin{corollary} \label{cor:relationEandPhi} Let $G$ be the graph associated to $\text{OT}(\mu,\nu,c)$ and $V_1, \ldots, V_N$ its connected components, with $V_n = X_n \cup Y_n$ for subsets $X_n \subseteq \mathcal{X}$ and $Y_n\subseteq \mathcal{Y}$ for $n \in \{1, \ldots, N\}$. Let $(\varphi_n,\psi_n)$ be the unique optimizer for $\text{DOT}_{X_n,Y_n}(\mu, \nu, c)$ for $n \in \{1, \ldots, N\}$. Moreover, let $(\varphi, \psi)$ be an optimizer for $\text{DOT}(\mu,\nu,c)$, and $\alpha =(\alpha_1, \ldots, \alpha_N)$ be the constants satisfying condition (i)-(iii) of Theorem~\ref{thm:SetOfAllDuals}. Then \[ E = \{[x,y]: x\in\mathcal{X}, y\in\mathcal{Y}, c(x,y) = \varphi(x) + \psi(y)\} \] if and only if, for all $n, m \in \{1, \ldots, N\}$ with $n \neq m$, \eqref{eq.anm} holds with strict inequalities. \end{corollary}
\begin{proof} By Corollary~\ref{cor:Components}-(ii), the graph $G[X_n \cup Y_n]$ is the graph associated to the subproblem $\text{OT}_{X_n, Y_n}(\mu, \nu, c)$. Moreover, $G[X_n \cup Y_n]$ is connected and $(\varphi_n, \psi_n )$ is the dual optimizer for $\text{OT}_{X_n, Y_n}(\mu, \nu, c)$. Hence, by Proposition~\ref{prop:ChracterizationGconnected}, we obtain for $x \in X_n$ and $y \in Y_n$ that $(x,y) \in E$ if and only if \[ c(x,y) = \varphi_n(x) + \psi_n(y) = \varphi_n(x) + \alpha_n + \psi_n(y) - \alpha_n = \varphi(x) + \psi(y). \] Now let $n, m \in \{1, \ldots, N\}$ with $n \neq m$. If $\alpha_n-\alpha_m$ satisfies \eqref{eq.anm} with strict inequalities, then for any $x \in X_n$ and $y \in Y_m$ we have that $(x,y) \notin E$ by Corollary~\ref{cor:Components}-(i), and moreover \[ \varphi(x) + \psi(y) = \varphi_n(x) + \alpha_n + \psi_m(y) - \alpha_m < \varphi_n(x) + \psi_m(y) + c(x,y) - \varphi_n(x) - \psi_m(y) = c(x,y). \] Analogously, we obtain for $x \in X_m$ and $y \in Y_n$ that $(x,y) \notin E$ and $c(x,y) > \varphi(x) + \psi(y)$. Hence, for any dual optimizer with $\alpha$ satisfying \eqref{eq.anm} with strict inequalities for all $n,m \in \{1, \ldots, N\}$ with $n \neq m$, we have that $(x,y) \in E$ if and only if $c(x,y) = \varphi(x)+\psi(y)$.
Now assume that, for some $n,m \in \{1, \ldots, N\}$ with $n \neq m$, $\alpha_n-\alpha_m$ equals one of the extremes in \eqref{eq.anm}. Assume first that $\alpha_n - \alpha_m = - \min_{x \in X_m, y \in Y_n} c(x,y) -\varphi_m(x)- \psi_n(y)$ and let $x_m \in X_m$ and $y_n \in Y_n$ be such that $\alpha_n - \alpha_m = - (c(x_m,y_n) -\varphi_m(x_m)- \psi_n(y_n))$. Then, again by Corollary~\ref{cor:Components}-(i), we have that $(x_m, y_n) \notin E$, and at the same time \[ c(x_m,y_n) = \varphi_m(x_m) + \alpha_m + \psi_n(y_n) - \alpha_n = \varphi(x_m) + \psi(y_n). \] Analogously for the other extreme. This concludes the proof. \end{proof}
We know that whenever the interval \[ \left[ - \min_{x \in X_m, y \in Y_n} c(x,y) -\varphi_m(x)- \psi_n(y), \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x) -\psi_m(y) \right] \] consists of more than one value, then there are multiple dual optimizers and hence $G[X_n \cup Y_n \cup X_m \cup Y_m]$ cannot be connected, by Proposition~\ref{prop:unique}-(ii). In the next lemma we show that if the interval consists of exactly one point, then $G[X_n \cup Y_n \cup X_m \cup Y_m]$ is indeed connected.
\begin{prop} \label{prop:MaximalComponents} In the setting of Theorem~\ref{thm:SetOfAllDuals}, if for some $n,m \in \{1, \ldots, N\}$ with $n\neq m$ we have \begin{equation}\label{eq.mm} \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x) -\psi_m(y) = - \min_{x \in X_m, y \in Y_n} c(x,y) -\varphi_m(x)- \psi_n(y) \end{equation} then $G[X_n\cup Y_n \cup X_m \cup Y_m]$ is connected. \end{prop}
\begin{proof} Let $n,m \in \{1, \ldots, N\}$, $n\neq m$ be such that the equality in \eqref{eq.mm} holds, and call $\beta$ the common value of the LHS and RHS of \eqref{eq.mm}. Let $(\phi, \psi)$ be a dual optimizer. Then by Theorem~\ref{thm:SetOfAllDuals} there are constants $\alpha_1, \alpha_2, \ldots, \alpha_N$ such that conditions (i) - (iii) of Theorem~\ref{thm:SetOfAllDuals} are met. In particular, $\alpha_n - \alpha_m = \beta$. Since $(\varphi, \psi)$ is, as a dual optimizer, feasible, we have \begin{align*} 0 &\le \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi(x) - \psi(y) = \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x) - \alpha_n - \psi_m(y) + \alpha_m\\ &= \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x) - \psi_m(y) - \beta = 0. \end{align*} Together with \eqref{eq.mm}, this implies that there are $x_n \in X_n$, $x_m \in X_m$, $y_n \in Y_n$ and $x_m \in X_m$ such that \begin{align} \label{eq:Comp_Binding} c(x_n,y_m) = \varphi(x_n) + \psi(y_m) \quad \text{and} \quad c(x_m,y_n) = \varphi(x_m) + \psi(y_n). \end{align}
Let $i \in \{n,m\}$. Since $G[X_i \cup Y_i]$ is connected and bipartite, there is a path $x_i^{(0)}y_i^{(0)}x_i^{(1)}\ldots x_i^{(l_i)} y_i^{(l_i)}$ from $x_i^{(0)}= x_i$ to $y_i^{(l_n)} = y_i$ in $X_i \cup Y_i$, with $x_i^{(j)} \in X_i$ and $y_i^{(j)} \in Y_i$ for all $j \in \{0, \ldots, l_i\}$. Note that there is no vertex that appears in both paths since $X_n \cup Y_n$ and $X_m \cup Y_m$ are disjoint.
Now set \begin{equation*} \delta = \min_{i\in\{n,m\}, j \in \{0, \ldots, l_i\}} \gamma( x_i^{(j)}, y_i^{(j)}) \wedge \min_{i\in\{n,m\}, j \in \{1, \ldots, l_{i}\}} (1- \gamma(x_i^{(j)}, y_i^{(j-1)})), \end{equation*} and define $\hat{\gamma}: \mathcal{X} \times \mathcal{Y} \rightarrow [0,1]$ by \[ \hat{\gamma}(x,y) = \begin{cases}
\gamma(x,y) - \delta &\text{if } (x,y) = (x_i^{(j)}, y_i^{(j)}) \text{ for some } i\in\{n,m\}, j \in \{0, \ldots, l_i\} \\
\gamma(x,y) + \delta &\text{if } (x,y) = (x_i^{(j)}, y_i^{(j-1)}) \text{ for some } i\in\{n,m\}, j \in \{1, \ldots, l_i\} \\
\gamma(x,y) + \delta &\text{if } (x,y) \in \{(x_n, y_m), (x_m, y_n)\} \\
\gamma(x,y) &\text{else.} \end{cases} \] The construction of $\hat{\gamma}$ is illustrated in Figure~\ref{fig:proofIllustration}.
\begin{figure}
\caption{The figure illustrates the construction of $\hat{\gamma}$. This shows on which edges $[x,y]$ we set $\hat{\gamma}(x,y) = \gamma(x,y) - \delta$ (solid edges) and on which edges $[x,y]$ we set $\hat{\gamma}(x,y) = \gamma(x,y) - \delta$ (dashed edges).}
\label{fig:proofIllustration}
\end{figure}
We are going to show that $\hat{\gamma} \in \Pi(\mu,\nu)$. Since $\gamma(x_n,y_m)=\gamma(x_m,y_n)=0$, by definition of $\delta$ we have that $\gamma(x,y) \in [0,1]$ for all $(x,y) \in \mathcal{X} \times \mathcal{Y}$. Moreover, for any $x \in \mathcal{X} \setminus \{x_n^{(0)}, \ldots, x_n^{(l_n)}, x_m^{(0)}, x_m^{(l_m)}\}$, we have \[ \sum_{y \in \mathcal{Y}} \hat{\gamma}(x,y) = \sum_{y \in \mathcal{Y}} \gamma(x,y) =\mu(x). \] On the other hand, for $i\in\{n,m\}$ and $j \in \{0,\ldots, l_n\}$, we obtain \begin{align*} \sum_{y \in \mathcal{Y}} \hat{\gamma}(x_i^{(j)},y) &= \sum_{y \in \mathcal{Y} \setminus \{y_i^{(j)}, y_i^{(j-1)}\}} \gamma(x_i^{(j)},y) + \gamma(x_i^{(j)},y_i^{(j)}) - \delta + \gamma(x_i^{(j)}, y_i^{(j-1)}) + \delta \\ &= \sum_{y \in \mathcal{Y}} \gamma(x_i^{(j)},y) = \mu(x_i^{(j)}), \end{align*} where we used the convention $y_i^{(-1)}=y_i^{(l_i)}$. Analogous computations for $y \in \mathcal{Y}$ show that $\hat{\gamma} \in \Pi(\mu, \nu).$
Now we prove that $\hat{\gamma}$ and $(\varphi, \psi)$ are complementary. We start by noticing that for $(x,y) \notin \{(x_n,y_m), (x_m,y_n) \}$ we have \[ \hat{\gamma}(x,y) >0 \Rightarrow \gamma(x,y)>0. \] Moreover, for $(x,y) \in \{(x_n, y_m), (x_m,y_n)\}$ we have \eqref{eq:Comp_Binding}. Hence, the complementarity of $\gamma$ and $(\varphi, \psi)$ implies that $\hat{\gamma}$ and $(\varphi, \psi)$ are complementary. By Theorem~\ref{thm.510}-(iv) this implies that the coupling $\hat{\gamma}$ is a primal optimizer. Thus, $[x_n,y_m], [x_m,y_n] \in E$, which yields that $G[X_n \cup Y_n \cup X_m \cup Y_m]$ is connected, as wanted. \end{proof}
\begin{continueexample}{ex2} We have seen that, for any admissible $\alpha$, the intervals $\alpha_1 - \alpha_2$ and $\alpha_1 - \alpha_3$ consist of more than one point, which means that $X_1 \cup Y_1$ and $X_2 \cup Y_2$ as well as $X_1 \cup Y_1$ and $X_3 \cup Y_3$ are not part of the same component of $G$. On the other hand, the interval $\alpha_2 - \alpha_3$ consists of exactly one point. Hence, in $G$ there are two components with $\tilde{X}_1 = \{1,2\}$, $\tilde{Y}_1 = \{1,2\}$, $\tilde{X}_2 = \{3,4\}$ and $\tilde{Y}_2 = \{3,4,5\}$.
$\diamond$ \end{continueexample}
\begin{remark} The results derived so far allow us to compute $G$ given one primal optimizer $\gamma$ and one dual optimizer $(\varphi, \psi)$. Namely, given $G^\gamma$, we first determine the components $V_1, \ldots, V_N$ of $G^\gamma$ and then check for which pairs $n,m \in \{1, \ldots, N\}$ condition \eqref{eq.mm} is satisfied. If this condition is satisfied, by Proposition~\ref{prop:MaximalComponents} we have that $V_n \cup V_m$ is part of one component of $G$. If this condition is not satisfied, then as noted before Proposition~\ref{prop:MaximalComponents} we have that $V_n$ and $V_m$ cannot be part of the same component. Hence, by checking the condition for all $n,m \in \{1, \ldots, N\}$, we obtain the connected components $\Tilde{V}_1, \ldots, \tilde{V}_M$ of $G$. Then we construct a subgraph $G' \subseteq G$ with the same components as $G$ in the following way. For each pair $(n,m)$ such that \eqref{eq.mm} is satisfied, we find pairs $(x_n,y_m)\in X_n\times Y_m$ and $(x_m,y_n)\in X_m \times Y_n$ that attain the minima in \eqref{eq.mm}. As argued in the proof of Proposition~\ref{prop:MaximalComponents}, we have that $(x_n,y_m), (x_m,y_n) \in E$. Now let $G'$ be the graph obtained by adding to $G^\gamma$ the edges $(x_n,y_m), (x_m,y_n)$ for any $(n,m)$ such that \eqref{eq.mm} holds. By construction, this graph has the same connected components as $G$. Finally, we compute the edges of $G$. By Proposition~\ref{prop:unique}-(ii), the dual optimizer of each component $\tilde{V}_n=\Tilde{X}_n\cup\Tilde{Y}_n$ is unique, therefore $(\varphi, \psi)$ restricted to $\Tilde{V}_n$ is the unique optimizer. Hence, by Proposition~\ref{prop:ChracterizationGconnected}, all edges of $G$ in this component are given by those edges $(x,y) \in \Tilde{X}_n \times \Tilde{Y}_n$ for which $c(x,y)=\varphi(x) + \psi(y)$. That these are all edges of $G$ finally follows from Corollary~\ref{cor:relationEandPhi}.
$\diamond$ \end{remark}
Finally, we can now show that the condition that $G$ is connected is also necessary for the uniqueness (up to translation) of the dual optimizer.
\begin{corollary}
\label{cor:UniqueMeansConnected}
The dual optimizer is unique if and only if the graph $G$ is connected. \end{corollary}
\begin{proof}
That $G$ being connected is sufficient for a unique dual optimizer has been proved in Proposition~\ref{prop:unique}. Hence, it remains to prove that the condition is also necessary for uniqueness.
Assume that there is a unique dual optimizer and, by contradiction, that the graph $G$ is not connected, i.e. that $G$ has $N \ge 2$ components. By Theorem~\ref{thm:SetOfAllDuals} we now have that for any $\alpha =(\alpha_1, \ldots, \alpha_N)$ satisfying condition \eqref{eq.anm} there is a dual optimizer. Since, by assumption, the dual optimizer is unique, $\alpha$ has to be uniquely determined, which in particular means that
\[
\alpha_1 - \alpha_2 = \min_{x \in X_1, y \in Y_2} c(x,y) - \varphi_1(x) - \psi_2(y) = -\min_{x \in X_2, y \in Y_1} c(x,y) - \varphi_2(x) - \psi_1(y).
\] By Proposition~\ref{prop:MaximalComponents} this implies that $G[X_1 \cup Y_1 \cup X_2 \cup Y_2]$ is connected, which is the desired contradiction. \end{proof}
\section{Entropic Transport Problems and Asymptotics.} \label{sec:entropic} In this section we turn to the entropic regularization of the optimal transport problem and provide the characterization of the limit of the dual entropic optimizers. We start by briefly describing the entropic regularization of the optimal transport problem. For more details in this discrete setting, we refer the reader to Chapter 4 in \cite{ComputationalOT}.
For any $\varepsilon>0$, the entropic regularization of $\text{OT}(\mu, \nu, c)$ reads as \begin{equation}\label{eq.eOT} \text{OT}_\varepsilon(\mu, \nu, c) = \inf_{\gamma \in \Pi(\mu, \nu)} \left\{\textstyle{\int c\ d\gamma + \varepsilon H(\gamma)}\right\}, \end{equation} where $H(\gamma)$ is the entropy associated to the coupling $\gamma$, and is given by \[ H(\gamma)=\sum_{(x,y) \in \mathcal{X} \times \mathcal{Y}} \gamma(x,y) (\ln (\gamma(x,y)) -1), \] with the usual convention $0\log(0)=0$. The corresponding dual is \begin{align}\label{eq.DeOT} \text{DOT}_\varepsilon (\mu, \nu, c) = \sup \Big\{\textstyle{ \int_\mathcal{X} \varphi d\mu + \int_\mathcal{Y}\psi d\nu} - \varepsilon \displaystyle{\sum_{(x,y) \in \mathcal{X} \times \mathcal{Y}}} \text{e}^{\frac{1}{\varepsilon} (-c(x,y)+\varphi(x)+\psi(y))} : \varphi:\mathcal{X}\to\mathbb{R}, \psi:\mathcal{Y}\to\mathbb{R}\Big\}. \end{align} As for the classical transport problem, duality $\text{OT}_\varepsilon (\mu, \nu, c)=\text{DOT}_\varepsilon (\mu, \nu, c)$ holds. One main advantage of the regularized problems is that both \eqref{eq.eOT} and \eqref{eq.DeOT} admit unique optimizers, that we call primal and dual entropic optimizers, respectively, and denote them by $\gamma^\varepsilon$ and $(\varphi^\varepsilon, \psi^\varepsilon)$.
\begin{prop}[\cite{ComputationalOT}, Proposition 4.1] For $\varepsilon \rightarrow 0$, the unique solution $\gamma^\varepsilon$ of $\text{OT}_\varepsilon(\mu, \nu, c)$ converges to a solution of $\text{OT}(\mu, \nu, c)$, and specifically to the one with minimal entropy, namely \begin{equation}\label{eq.gamma*} \gamma^\varepsilon\to\gamma^* :=\argmin \left\{H(\gamma) : \gamma \text{ optimizer of } \eqref{eq.OT} \right\}. \end{equation} As a consequence, $\text{OT}_\varepsilon(\mu, \nu, c)\to\text{OT}(\mu, \nu, c)$ for $\varepsilon \rightarrow 0$. \end{prop} Note that the minimizer $\gamma^*$ in \eqref{eq.gamma*} is unique because of strict convexity of relative entropy and convexity of the set of primal optimizers.
\begin{remark} By Remark~\ref{rem.supp}, we obtain the more convenient characterization \[ \gamma^*=\argmin \left\{\sum_{[x,y] \in E} \gamma(x,y) (\ln(\gamma(x,y))-1) : \gamma \in \Pi(\mu, \nu) \text{ s.t. supp$(\gamma)\subseteq E$}\right\}. \]
$\diamond$ \end{remark}
Clearly, the discrete optimal transport problem $\text{OT}(\mu,\nu,c)$ is a linear programming problem. Moreover, by fixing an arbitrary $x_0 \in \mathcal{X}$ and dropping the redundant constraint $\sum_{y \in \mathcal{Y}} \gamma(x_0,y)=\mu(x_0)$, we obtain a linear programming problem where the matrix describing the constraints has full rank; see \cite[Remark 3.1]{ComputationalOT}. The dual problem associated to this adjusted linear programming problem is problem $\text{DOT}(\mu, \nu, c)$ with the additional requirement that $\varphi(x_0)=0$. By Theorem~\ref{thm.510} there is a dual optimizer and by Theorem~\ref{thm:SetOfAllDuals} we moreover have that the set of all solutions to this problem is bounded. Hence, the adjusted dual problem satisfies the assumptions requested in \cite[Proposition~3.2]{CominettiSanMartin}, ensuring the existence of a dual optimizer $(\hat\varphi, \hat \psi)$, namely the centroid of the solution set, such that any sequence of dual entropic optimizers $(\varphi^{\varepsilon_n}, \psi^{\varepsilon_n})$ with $\varepsilon_n \rightarrow 0$ converges to $(\hat\varphi, \hat \psi)$. However, the construction of the centroid in \cite{CominettiSanMartin} requires the determination of the solution set for up to $n_\mathcal{X} \times n_\mathcal{Y}$ convex optimization problems. We now propose a simple algorithm to pinpoint $(\hat\varphi, \hat \psi)$, that relies only on elementary computations.
In what follows we denote by $V_1, \ldots, V_N$ the connected components of the graph $G$ associated to $\text{OT}(\mu,\nu,c)$, and we consider the usual decomposition $V_n=X_n \cup Y_n$ for $n \in \{1, \ldots, N\}$. Without loss of generality we assume that $x_0 \in X_1$. Recall that, for each $n$, the dual problem $\text{DOT}_{X_n,Y_n}(\mu,\nu,c)$ admits a solution that is unique up to translation by constants. We fix one of them and denote it by $(\varphi_n,\psi_n)$. For the first component, we choose this representative so that $\varphi_1(x_0)=0$.
\begin{figure}
\caption{Construction of the tree $T$}
\caption{By means of simple computations, Algorithm~\ref{Algorithm:ConstructionT} gives a recipe to construct a tree $T$ on the vertex set $\{1, \ldots, N\}$, which is a connected graph with exactly $N-1$ edges. This tree then allows us to describe the dual optimizer to which any sequence of dual entropic optimizers converges, as described in Theorem~\ref{thm:LimitEntropic}.}
\label{Algorithm:ConstructionT}
\end{figure}
\begin{theorem} \label{thm:LimitEntropic} Let $T=(V^T,E^T)$ be the spanning tree on $V^T=\{1,...,N\}$ given by Algorithm~\ref{Algorithm:ConstructionT}. Let $\alpha_1=0$ and set $\alpha_2,\ldots,\alpha_N$ to be constants such that, for all $\{n,m\} \in E^T$, \begin{equation}\label{eq:DualLimitcEqual} \alpha_n - \alpha_m = L_{n,m}:= \frac{1}{2} \min_{x \in X_n, y \in Y_m} \big(c(x,y) - \varphi_n(x) - \psi_m(y) \big) - \frac{1}{2} \min_{x \in X_m, y \in Y_n} \big(c(x,y) - \varphi_m(x) - \psi_n(y) \big). \end{equation} Define $\varphi^\ast:\mathcal{X}\to\mathbb{R}$ and $\psi^\ast:\mathcal{Y}\to\mathbb{R}$ by $\varphi^\ast(x) = \varphi_n(x) + \alpha_n$ and $\psi^\ast (y) = \psi_n(y) - \alpha_n$ for all $x \in X_n$ and $y \in Y_n$, $n\in\{1,\ldots,N\}$. Then any sequence $(\varphi^{\varepsilon_n}, \psi^{\varepsilon_n})$ with $\varepsilon_n \rightarrow 0$ converges to $(\varphi^\ast, \psi^\ast)$. \end{theorem}
By what said above, proving Theorem~\ref{thm:LimitEntropic} corresponds to showing that $(\varphi^\ast, \psi^\ast)$ is the centroid $(\hat\varphi, \hat \psi)$ found in \cite{CominettiSanMartin}.
\begin{proof} \emph{Step 1: There is exactly one set of constants $(\alpha_1, \ldots, \alpha_N)$, with $\alpha_1=0$, that satisfies \eqref{eq:DualLimitcEqual} for all $\{n,m\} \in E^T$.}\\ Note that the graph $T$ built in Algorithm~\ref{Algorithm:ConstructionT} is a tree, which in particular means that it is connected. Then, by Lemma~\ref{lemma.ordering} we find an ordering $v_1, \ldots, v_N$ of the vertices in $T$ such that $v_1=1$ and $T[v_1, \ldots, v_i]$ is connected for every $i \in \{2, \ldots, N\}$. As in the proof of Proposition~\ref{prop:unique}, we can then successively set $\alpha_{v_i}$ for any $i \in \{2, \ldots, N\}$. Since $T$ is a tree, for any $i \in \{2, \ldots, N\}$ there is exactly one edge $\{v_{i'}, v_i\}$ with $i'<i$ in $T$ and we choose $\alpha_{v_i}$ such that $\alpha_{v_i} - \alpha_{v_{i'}}$ satisfies \eqref{eq:DualLimitcEqual}. Proceeding this way, we uniquely determine $(\alpha_2, \ldots, \alpha_n)$. Moreover, we used one edge for each step $i \in \{2, \ldots, N\}$, and all these edges are distinct. Hence, we used all $N-1$ edges in $T$. Therefore, $(\alpha_1, \ldots, \alpha_N)$ satisfies \eqref{eq:DualLimitcEqual} for all $\{n,m\} \in E^T$.
\emph{Step 2: Description of the construction procedure for the centroid in Cominetti and San Mart\'{i}n~\cite{CominettiSanMartin}.}\\ We denote by $S_0$ the set of all solutions of the dual problem $\text{DOT}(\mu,\nu,c)$. Let $I_0 = \{(x,y) \in \mathcal{X} \times\mathcal{Y}: c(x,y) = \varphi(x)+\psi(y) \; \forall (\varphi, \psi) \in S_0\}$. By Proposition~\ref{prop:Gcharacterizes} we have that $E\subseteq I_0$. Since, by Corollary~\ref{cor:relationEandPhi}, there exists a dual optimizer $(\varphi, \psi)$ such that $E = \{ (x,y) \in \mathcal{X} \times \mathcal{Y}: c(x,y) = \varphi(x) + \psi(y)\}$, we obtain that $E=I_0$. Next, for each $n=0,...,n_\mathcal{X} \cdot n_\mathcal{Y}-1$, we define the continuous concave function \[ f_{n}(\varphi, \psi) = \min_{(x,y) \notin I_{n}} c(x,y) - \varphi(x) - \psi(y). \] For $n=1,...,n_\mathcal{X} \cdot n_\mathcal{Y}$, we consider the convex optimization problem \[ w_n=\max \{f_{n-1}(\varphi, \psi): (\varphi, \psi) \in S_{n-1}\}, \] and we denote by $S_n$ the set of its solutions.
Finally, we write \[ J_n = \{ (x,y) \notin I_{n-1}: c(x,y) - w_n = \varphi(x)+\psi(y)\ \forall (\varphi, \psi) \in S_n\} \] and set $I_n = I_{n-1} \cup J_n$. Note that we start this induction procedure with the set $S_0$ which is non-empty, bounded, closed and convex, and that by construction all sets $S_n$ preserve the same properties.
\emph{Step 3: The set $J_n$ is non-empty for all $n$ such that $I_{n-1}\subsetneq \mathcal{X}\times \mathcal{Y}$.}\\ We work by way of contradiction, and assume that $J_n$ is empty. This means that for any $(x,y) \notin I_{n-1}$ there is a pair $(\varphi^{(x,y)}, \psi^{(x,y)}) \in S_n$ such that \begin{equation} \label{eq:Jn-non-empty} c(x,y) - \varphi^{(x,y)}(x) - \psi^{(x,y)}(y) > w_n. \end{equation} Since $S_n$ is convex, we have that \[ (\bar\varphi, \bar\psi)
:= \Big( \tfrac{1}{n_\mathcal{X} \times n_\mathcal{Y} - |I_{n-1}|} \sum_{(x,y)\notin I_{n-1}} \varphi^{(x,y)} , \tfrac{1}{n_\mathcal{X} \times n_\mathcal{Y} - |I_{n-1}|} \sum_{(x,y)\notin I_{n-1}} \psi^{(x,y)} \Big) \in S_n. \] Let $(x',y') \notin I_{n-1}$ be arbitrary. Then we have
\[ c(x',y') - \bar\varphi(x') - \bar\psi(y') = \tfrac{1}{n_\mathcal{X} \times n_\mathcal{Y} - |I_{n-1}|} \sum_{(x,y)\notin I_{n-1}} \left( c(x',y')-\varphi^{(x,y)}(x') - \psi^{(x,y)}(y') \right). \] Note that $c(x',y')-\varphi^{(x,y)}(x') - \psi^{(x,y)}(y') \ge w_n$ for all $(x,y) \notin I_{n-1}$, since $(\varphi^{(x,y)}, \psi^{(x,y)}) \in S_n$. Moreover, $c(x',y')-\varphi^{(x',y')}(x') - \psi^{(x',y')}(y') > w_n$ holds by \eqref{eq:Jn-non-empty}. Hence, we have \[ c(x',y') - \bar\varphi(x') - \bar \psi(y') > w_n \quad \text{for all } (x',y') \notin I_{n-1}, \] which is a contradiction to $(\bar\varphi, \bar\psi) \in S_n$.
\emph{Step 4: Identification of the limiting point in \cite{CominettiSanMartin}.}\\ Note that Step 3 implies that the sequence $I_0 \subseteq I_1 \subseteq \ldots \subseteq I_n$ is strictly increasing, as long as $I_{n-1}\subsetneq \mathcal{X}\times \mathcal{Y}$. Because of strict monotonicity of the sets $I_n\subseteq \mathcal{X}\times\mathcal{Y}$, for some $M\leq n_\mathcal{X}\times n_\mathcal{Y}$ we have that $(\mathcal{X}\cup\mathcal{Y},I_M)$ is a connected graph. Let $M'$ be the first step where this happens. Then, by the same argument as in the proof of Proposition~\ref{prop:unique}-(ii), we can conclude that $S_{M'}=\{(\hat\varphi,\hat\psi)\}$.
Now, set $w_0=0$ and $J_0=I_0$. In \cite{CominettiSanMartin} it is shown that the polytopes $S_0 \supseteq S_1 \supseteq \ldots \supseteq S_{M'}$ satisfy \begin{equation} \label{eq:characSN} S_n = \left\{ (\varphi, \psi): \begin{array}{l l} c(x,y) - w_j = \varphi(x) + \psi(y), & \text{ for all } j \in \{0, \ldots, n\},(x,y) \in J_j \\ c(x,y) - w_n \ge \varphi(x) + \psi(y), & \text{ for all } (x,y) \notin I_n \end{array} \right\} \end{equation} for all $n = \{0,1, \ldots, M'\}$, and that the pair $(\hat\varphi,\hat\psi)$ in $S_{M'}$ is indeed the limit of any sequence $(\varphi^{\varepsilon_n}, \psi^{\varepsilon_n})$ with $\varepsilon_n \rightarrow 0$.
We are therefore left to show that $(\varphi^\ast, \psi^\ast)=(\hat\varphi,\hat\psi)$.
\emph{Step 5: Conclusion in the case of $G$ connected.}\\ If $G$ is connected (i.e. $N=1$), by Proposition~\ref{prop:unique}-(ii) we have that already $S_0=\{(\hat\varphi,\hat\psi)\}$ is a singleton, so the claim immediately follows. Hence, in the following we assume that $N>1$.
\emph{Step 6: Setting for the rest of the proof.}\\ We define $\varphi^\alpha:\mathcal{X}\to\mathbb{R}$ and $\psi^\alpha:\mathcal{Y}\to\mathbb{R}$ by $\varphi^\alpha(x) = \varphi_n(x) + \alpha_n$ and $\psi^\alpha (y) = \psi_n(y) - \alpha_n$ for all $x \in X_n$ and $y \in Y_n$, $n\in\{1,\ldots,N\}$, with $\alpha_1=0$ and constants $\alpha_2,\ldots,\alpha_N\in\mathbb{R}$ as in Theorem~\ref{thm:SetOfAllDuals}. We write $\delta_1 < \ldots < \delta_L$ for the values of $\delta$ in line 6 of Algorithm~\ref{Algorithm:ConstructionT} that occur while iterating the while-loop. Moreover, we denote by $T_l$ the graph obtained in the while-loop in line~5 of Algorithm~\ref{Algorithm:ConstructionT} for $\delta_l$, $l=1,...,L$. Finally, we set $\delta_0=0$, $\delta_{L+1}=+\infty$ and write $T_0$ for the graph on $\{1, \ldots, N\}$ with no edges.
\emph{Step 7: For any $j\in\{1,\ldots,M'\}$ and $l\in\{0,1,\ldots,L\}$, if $w_j \in [\delta_l, \delta_{l+1})$, then $S_j$ is the set all pairs $(\varphi^\alpha, \psi^\alpha)$ with $\alpha$ such that $\alpha_1=0$ and: \begin{itemize} \item[(i)] for all edges $\{n,m\} \in T_l$, \eqref{eq:DualLimitcEqual} holds; \item[(ii)] for all pairs $(n,m)\in \{1, \ldots, N\}^2$ such that there is no path in $T_l$ joining $n$ and $m$, then \begin{align*}\label{eq.int.a} \alpha_n - \alpha_m \in \left[- \min_{x \in X_m, y \in Y_n} c(x,y) - \varphi_m(x)-\psi_n(y) + w_j, \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x)- \psi_m(y) - w_j \right]. \end{align*} \end{itemize}} We first prove by induction on $j$ that any pair $(\varphi^\alpha, \psi^\alpha)$ with $\alpha$ as in the claim satisfies the constraints in \eqref{eq:characSN} for $n=j$. Afterwards, we will show that any pair $(\varphi^\alpha,\psi^\alpha)$ where $\alpha$ does not satisfy the conditions in the claim violates at least one constraint in \eqref{eq:characSN}.
To prove the first statement, we note that for $j=0$ the claim immediately follows from Theorem~\ref{thm:SetOfAllDuals}. Now let $j \in \{1, \ldots, M'\}$ and assume that the claim has been proved for $j-1$. Let $\alpha$ be as in the claim. Let us first consider $x \in X_n$ and $y \in Y_n$, for some $n \in \{1, \ldots, N\}$. Then \[ c(x,y) - \varphi^\alpha(i) - \psi^\alpha(y) = c(x,y) - \varphi_n(x) - \alpha_n - \psi_n(y) + \alpha_n = c(x,y) - \varphi_n(x) - \psi_n(y). \] Hence, the difference $c(x,y) - \varphi^\alpha(x) - \psi^\alpha(y)$ equals some value $L(x,y)$ independent of $\alpha$. If $L(x,y) < w_j$, then, by the induction hypothesis, there is $k<j$ such that $w_k = L(x,y)$ and $(x,y) \in J_k$. Hence,
the equality in the first constraint in \eqref{eq:characSN} is satisfied. If $L(x,y) = w_j$, then $(x,y) \in J_j$ and the equality in the first constraint in \eqref{eq:characSN} is satisfied. Finally, if $L(x,y) > w_j$, then we have \[ c(x,y) - w_j > c(x,y) - L(x,y) = \varphi^\alpha(x) + \psi^\alpha(y), \] hence $(x,y) \notin I_n$ and the second inequality in \eqref{eq:characSN} is satisfied.
Now consider $x \in X_n$ and $y \in Y_m$ for $n,m\in \{1, \ldots, N\}$ with $n \neq m$ and such that they are connected by a path $n=n_1 n_2 \ldots n_k=m$ in $T_l$. Then \begin{align*} c(x,y) - \varphi^\alpha(x) - \psi^\alpha(y) &= c(x,y) - \varphi_n(x) - \alpha_n - \psi_m(y) + \alpha_m \\ &= c(x,y) - \varphi_n(x) - \psi_m(y) - \alpha_{n_1} + \alpha_{n_2} - \alpha_{n_2}+ \alpha_{n_3} - \ldots -\alpha_{n_{k-1}} + \alpha_{n_k}. \end{align*} Since $\alpha$ satisfies the claim, $\alpha_{n_i} - \alpha_{n_{i-1}}$ is constant for all $i \in \{2, \ldots, k\}$. Therefore, the difference $c(x,y) - \varphi^\alpha(x) - \psi^\alpha(y)$ equals some value $L(x,y)$. If $L(x,y)<w_j$, then $L(x,y)\le w_{j-1}$ and hence, by the induction hypothesis, the first constraint in \eqref{eq:characSN} is satisfied. If $L(x,y)=w_j$, we obtain $(x,y) \in J_j$ and again the first constraint in \eqref{eq:characSN} is satisfied. If $L(x,y)>w_j$, then we have \[ c(x,y) - w_j > c(x,y)) - L(x,y) = \phi^\alpha (x) + \psi^\alpha(y), \] thus $(x,y) \notin I_n$ and the second constraint in \eqref{eq:characSN} is satisfied.
Finally, we consider $x \in X_n$ and $y \in Y_m$ such that $n$ and $m$ are not joined by a path in $T_l$. In this case, by construction of $T_l$ we have that $\delta_{n,m} \ge \delta_{l+1} > \delta_l$. In particular, $\delta_{n,m} > w_j$. Hence, the difference $c(x,y) - \varphi^\alpha(x) - \psi^\alpha(y)$ can take different values depending on $\alpha$, thus $(x,y) \notin I_n$. By choice of $\alpha$ we have \begin{align*} c(x,y) - \varphi^\alpha(x) - \psi^\alpha(y)&= c(x,y) - \varphi_n(x) - \alpha_n - \psi_m(y) + \alpha_m \\ &= c(x,y) - \varphi_n(x) - \psi_m(y) - (\alpha_n-\alpha_m) \\ &\ge c(x,y)- \varphi_n(x) - \psi_m(y) - \min_{x' \in X_n, y' \in Y_m} \left(c(x',y') - \varphi_n(x') - \psi_m(y') \right) + w_j \\ &\ge w_j, \end{align*} hence the second constraint in \eqref{eq:characSN} is satisfied. This concludes the proof of the fact that, for all $\alpha$ satisfying the claim, the pair $(\varphi^\alpha,\psi^\alpha)$ belongs to $S_j$.
In order to conclude the proof of the claim, it remains to prove that for any $\alpha$ not satisfying the constraints of the claim we have $(\varphi^\alpha, \psi^\alpha) \notin S_j$. Assume first that $(\varphi^\alpha, \psi^\alpha)$ is such that there are $n,m \in \{1, \ldots, N\}$ such that $(n,m) \in T_l$ and $ \alpha_n - \alpha_m > L_{n,m}$, with $L_{n,m}$ given in \eqref{eq:DualLimitcEqual}. Note that by construction of $T_l$ we have $\delta_{n,m} \le \delta_l \le w_l$. Let moreover $x' \in X_n$ and $y' \in Y_m$ be such that \[ c(x',y') - \varphi_n(x') - \psi_m(y') = \min_{x \in X_n, y \in Y_m} c(x,y) -\varphi_n(x)- \psi_m(y). \] Then \begin{align*} c(x',y') - \varphi^\alpha(x') - \psi^\alpha(y') &= c(x',y') - \varphi_n(x') - \psi_m(y') - \alpha_n + \alpha_m \\ &< \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x)- \psi_m(y) - \frac{1}{2} \min_{x \in X_n, y \in Y_m} \big( c(x,y) - \varphi_n(x) - \psi_m(y) \big) \\ &\quad + \frac{1}{2} \min_{x \in X_m, y \in Y_n} \big( c(x,y) - \varphi_m(x) - \psi_n(y) \big) \\ &= \delta_{n,m} \le w_l, \end{align*} thus the constraints in \eqref{eq:characSN} are not satisfied for $(x',y')$. One can proceed in an analogous way for $(\varphi^\alpha, \psi^\alpha)$ such that there are $n,m \in \{1, \ldots, N\}$ with $(n,m) \in T_l$ and $\alpha_n - \alpha_m < L_{n,m}$.
Now assume that $(\varphi^\alpha, \psi^\alpha)$ is such that there are $n,m \in \{1, \ldots, N\}$ with $n \neq m$ such that there is no path from $n$ to $m$ in $T_l$ and that \[ \alpha_n - \alpha_m > \min_{x \in X_n, y \in Y_m} c(x,y) - \varphi_n(x)- \psi_m(y) - w_j. \] Let $x' \in X_n$ and $y' \in Y_m$ be such that \[ c(x',y') - \varphi_n(x') - \psi_m(y') = \min_{x \in X_n, y \in Y_m} c(x,y) -\varphi_n(x)- \psi_m(y). \] Then \begin{align*} c(x',y') - \varphi^\alpha(x') - \psi^\alpha(y') &= c(x',y') - \varphi_n(x') - \psi_m(y') - \alpha_n + \alpha_m \\ &< \min_{x \in X_n, y \in Y_m} c(x,y)- \varphi_n(x)- \psi_m(y) - \min_{x \in X_n, y \in Y_m} c(x,y)- \varphi_n(x)- \psi_m(y) + w_j \\ &= w_j, \end{align*} hence the constraints in \eqref{eq:characSN} are not satisfied. Finally, we consider the case of $(\varphi^\alpha, \psi^\alpha)$ such that there are $n,m \in \{1, \ldots, N\}$, $n \neq m$, with no path from $n$ to $m$ in $T_l$ and \[ \alpha_n -\alpha_m < - \min_{x \in X_m, y \in Y_n} \big(c(x,y)- \varphi_m(x)-\psi_n(y)\big) + w_j. \] Note that this inequality is equivalent to \[ \alpha_m -\alpha_n > \min_{x \in X_m, y \in Y_n} \big(c(x,y)- \varphi_m(x)-\psi_n(y)\big) - w_j, \] thus we arrive again at a violation of the constraints in \eqref{eq:characSN}.
\emph{Step 8. Conclusion.} \\
By Step 7, we note that for any $j \in \{1, \ldots, M'\}$ such that $w_j \ge \delta_L$ we have $S_j = \{(\varphi^\ast, \psi^\ast)\}$. We will now show that for any $j \in \{1, \ldots, M'\}$ such that $w_j \in [\delta_{L-1}, \delta_L)$ we have $|S_j|>1$. From this, since the sets $(S_j)_{j \in \{1, \ldots, M'\}}$ are decreasing, and $S_{M'}=\{(\hat\varphi,\hat\psi)\}$, it will follow that $|S_j|>1$ for all $j \in \{1, \ldots, M'\}$ with $w_j < \delta_L$, while $w_{M'} = \delta_L$ with $S_{M'} = \{(\varphi^\ast, \psi^\ast)\}$, which then concludes the proof.
Hence, let us now consider $j \in \{1, \ldots, M'\}$ such that $w_j \in [\delta_{L-1}, \delta_L)$ and let us prove that the set $S_j$ consists of at least two elements. First, note that the set $S_j$ contains the point $(\varphi^\ast, \psi^\ast)$, again since the sets $(S_j)_{j \in \{1, \ldots, M'\}}$ are decreasing. This means that the unique constants $(\alpha_1, \alpha_2, \ldots, \alpha_N)$ from Step 1 satisfy (i)-(ii) of Step 7 for $w_j\in [\delta_{L-1}, \delta_L)$ and $T_{L-1}$.
Note also that the graph $T_{L-1}$ is not connected. Indeed, since the algorithm successively adds edges, we know that, as the algorithm has not stopped ($L-1<L$), the graph $T_{L-1}$ has less edges than $T_L$. Now, by \cite[Theorem 1.5.1]{DiestelGT}, any tree is minimally connected, which means that, whenever at least one edge is removed, the resulting subgraph is not connected. Hence, $T_{L-1}$ is not connected. Let us enumerate the vertices of $T_{L-1}$ as $v_1, \ldots, v_N$ such that there is $K \in \{1, \ldots, N-1\}$ for which $\{v_{K+1}, \ldots, v_N\}$ is a connected component of $T_{L-1}$. Note that the set $\{v_1, \ldots, v_K\}$ can contain one or more components. Since $T_{L-1}$ is a subgraph of a tree and $\{v_{K+1}, \ldots, v_N\}$ is connected, again by \cite[Theorem 1.5.1]{DiestelGT}, there is a unique path from $v_{n}$ to $v_m$ for all $n,m \ge K+1$. Let $v_n=v^{(1)}v^{(2)}\ldots v^{(l)}=v_m$ be this path. Then define \[
\hat{L}_{v_n,v_m} = L_{v^{(1)} v^{(2)}} + \ldots + L_{v^{(l-1)}v^{(l)}}. \] Any $\tilde{\alpha}=(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_N)$ with \[\tilde{\alpha}_v = \alpha_v \text{ for all }v \in \{v_1, \ldots, v_K\}, \quad \tilde{\alpha}_{v_{K+1}} = \beta, \quad \text{ and }\, \tilde{\alpha}_v = \beta + \hat{L}_{v_n,v_{K+1}} \text{ for all } v \in \{v_{K+2}, \ldots, v_N\}, \] for some \begin{align}
\label{eq:DualLimitConcl}
\begin{split}
\beta \in &\left[ \max_{m \le K, n \ge K+1} -
\left( \min_{x \in X_{v_m}, y \in Y_{v_n}} c(x,y) - \varphi_{v_m}(x) - \psi_{v_n}(y) \right) + w_j - \hat{L}_{v_n,v_{K+1}} + \alpha_{v_m}, \right. \\
&\quad \left. \min_{m \le K, n \ge K+1} \left( \min_{x \in X_{v_n}, y \in Y_{v_m}} c(x,y) - \varphi_{v_n}(x) - \psi_{v_m}(y)\right) -w_j - \hat{L}_{v_n, v_{K+1}} + \alpha_{v_m} \right],
\end{split} \end{align} satisfies the constraints in Step 7. Indeed, \eqref{eq:DualLimitConcl} is equivalent to the constraint (ii) in Step 7 for $v_n, v_m$ such that $m \le K$ and $n \ge K+1$. That the remaining constraints are satisfied (i.e. (i) for all $\{n,m \} \in T_{L-1}$ and (ii) for $v_n, v_m$ such that $n,m \le K$ and $v_n, v_m$ such that $n,m \ge K+1$) follows from the fact that $\alpha$ satisfies (ii) in Step 7 and since we have $\tilde{\alpha}_{v_n} - \tilde{\alpha}_{v_m} = \alpha_{v_n} - \alpha_{v_m}$ in all described cases.
We will now show that the interval in \eqref{eq:DualLimitConcl} has non-empty interior, which implies that $|S_j|>1$. For this we note that, since $S_j=\{(\phi^\ast, \psi^\ast)\}$ for $w_j = \delta_L$, then $\beta = \alpha_{v_{K+1}}$ satisfies the constraint \eqref{eq:DualLimitConcl} with $\delta_L$ instead of $w_j$. Hence, \begin{align*} &\max_{m \le K, n \ge K+1} - \left( \min_{x \in X_{v_m}, y \in Y_{v_n}} c(x,y) - \varphi_{v_m}(x) - \psi_{v_n}(y) \right) + w_j - \hat{L}_{v_n,v_{K+1}} + \alpha_{v_m}, \\ &<\max_{m \le K, n \ge K+1} - \left( \min_{x \in X_{v_m}, y \in Y_{v_n}} c(x,y) - \varphi_{v_m}(x) - \psi_{v_n}(y) \right) + \delta_L - \hat{L}_{v_n,v_{K+1}} + \alpha_{v_m}, \\ &\le \min_{m \le K, n \ge K+1} \left( \min_{x \in X_{v_n}, y \in Y_{v_m}} c(x,y) - \varphi_{v_n}(x) - \psi_{v_m}(y)\right) -\delta_L - \hat{L}_{v_n, v_{K+1}} + \alpha_{v_m} \\ &< \min_{m \le K, n \ge K+1} \left( \min_{x \in X_{v_n}, y \in Y_{v_m}} c(x,y) - \varphi_{v_n}(x) - \psi_{v_m}(y)\right) -w_j - \hat{L}_{v_n, v_{K+1}} + \alpha_{v_m} \end{align*}
which shows that the interval in \eqref{eq:DualLimitConcl} has a non-empty interior. Therefore, $|S_j|>1$ and the claim follows. \end{proof}
\begin{remark} The construction of the centroid in Cominetti and San Mart\'{i}n~\cite{CominettiSanMartin} is informally described as tightening all non-saturated constraints until some of them become binding. This idea is also the basis of our construction. The set of all dual optimizers is described by the set of all constants $\alpha_1, \ldots, \alpha_N$ satisfying the constraints in Theorem~\ref{thm:SetOfAllDuals}-(iii). These constraints require that $\alpha_n-\alpha_m$ lies in a particular interval. Indeed, the upper and lower bound for the difference $\alpha_n - \alpha_m$ is given by the value where for some pair in $X_m \times Y_n$ and in $X_n \times Y_m$, respectively, the constraint becomes binding. The term $2\delta_{n,m}$ now describes the difference of the upper and the lower bound, i.e. the width of the admissible interval. Hence, if $w_j=\delta_{n,m}$ the constraint will become binding.
In our algorithm, we successively add edges to a graph on $\{1, \ldots, N\}$. These edges (non-redudantly) describe that the difference $\alpha_n - \alpha_m$ is fixed to a certain value, namely, the one given by \eqref{eq:DualLimitcEqual}. The edges that we can add come from a candidate set, which is shrinking. At first, it is the set of all pairs $(n,m) \in [\{1, \ldots, N\}]^2$. Once we add an edge $(n,m)$ to the graph $T$, we delete all pairs $(n',m')$ where the value of $\alpha_{n'}-\alpha_{m'}$ is already fixed to a certain value, i.e. we delete all edges for which the pair $(n',m')$ does not impose an additional constraint on the solution set. These pairs are exactly those that are connected in $T_l$, since in this case $\alpha_{n'} - \alpha_{m'} = \alpha_{n'} - \alpha_{n_1} + \alpha_{n_1} - \alpha_{n_2} + \ldots - \alpha_{m'}$ is already fixed to a certain value, as $\{n^i,n^{i+1}\} \in T_l$ for all $i \in \{[0, \ldots, k-1\}$.
$\diamond$ \end{remark}
\begin{example} \label{ex3} We consider a slightly modified version of Example~\ref{ex2}. Namely, we set \[ \hat{c} (x,y) = c(x,y) +2 \cdot 1_{\{(3,4)\}}(x,y) + 1_{\{(4,3)\}}(x,y). \] The primal optimizer $\gamma$ from Example~\ref{ex2} is again an optimizer for the problem $\text{OT}(\mu, \nu, \hat{c})$, but now we have $G=G^\gamma$. Nonetheless, the functions $\varphi_n$ and $\psi_n$, $n \in \{1,2,3\}$, described in Example~\ref{ex2} are still the unique dual optimizers for the subproblems on the connected components. Any dual optimizer can be represented as in Theorem~\ref{thm:SetOfAllDuals} where the constant $\alpha=(\alpha_1, \alpha_2, \alpha_3)$ now satisfies \[ 0 \le \alpha_1 - \alpha_2 \le 2, \, 0 \le \alpha_1 - \alpha_3 \le 1 \text{ and } -2 \le \alpha_2 - \alpha_3 \le 1. \]
We now derive the limit $(\varphi^\ast, \psi^\ast)$ of the dual optimizers of the entropic optimal transport problems using Algorithm~\ref{Algorithm:ConstructionT} and Theorem~\ref{thm:LimitEntropic}. We first note that \[ \delta_{1,2} = 0.5 \cdot (2 + 0) = 1, \; \delta_{1,3} = 0.5 \cdot (1 + 0) = 0.5 \text{ and } \delta_{2,3} =0.5 \cdot (1 + 2) = 1.5. \] Hence, the algorithm selects $\{1,3\}$ as the first edge of $T$ and $\{1,2\}$ as the second edge. Then the constant $\alpha$ satisfies \[ \alpha_1 - \alpha_2 = 0.5 \cdot 2 - 0.5 \cdot 0 =1 \text{ and } \alpha_1 - \alpha_3 = 0.5 \cdot 1 - 0.5 \cdot 0 = 0.5, \] which, using the convention $\alpha_1=0$, yields $\alpha_2 = -1$ and $\alpha_3=-0.5$. Therefore, the limit $(\varphi^\ast, \psi^\ast)$ of the dual entropic optimizers reads as \begin{align*}
&\varphi^\ast(1) = 0, \; \varphi^\ast(2) = -1, \; \varphi^\ast(3) = -1, \; \varphi^\ast(4) = -0.5 \\
&\psi^\ast(1)=1, \; \psi^\ast(2)=2, \; \psi^\ast (3)=1,\; \psi^\ast(4) = 1.5, \; \psi^\ast(5) = 1.5. \end{align*}
$\diamond$ \end{example}
\section{Stackelberg-Cournot-Nash Equilibria.} \label{sec:SCNE} In this section we consider a game between a principal and a population of agents, that is a Stackelberg version of the problem considered by Blanchet and Carlier \cite{BlanchetCN}. We provide existence results together with a characterization of equilibria. We then study approximation by regularization and conclude with a numerical example.
\subsection{Problem Formulation.} We consider a continuum of agents (population), characterized by a finite number of types, that need to choose among a finite number of actions, and a principal who determines the additional costs an agent faces for choosing a certain action. To set this problem in the optimal transport setting of Section~\ref{sect.fOT}, we let $\mathcal{X}=\{x_1,\ldots,x_{n_\mathcal{X}}\}$ be the set of all types and $\mathcal{Y}=\{y_1,\ldots,y_{n_\mathcal{Y}}\}$ the set of all actions. A distribution of types, $\mu\in\mathcal{P}(\mathcal{X})$, is fixed, and the optimal distribution of actions will be found in equilibrium among all $\nu\in\mathcal{P}(\mathcal{Y})$. With an abuse of notation, we identify the distributions $\mu$ and $\nu$ with their respective probability vectors in the $n_\mathcal{X}$- and $n_\mathcal{Y}$-simplex, $\Delta_{n_{\mathcal{X}}}$ and $\Delta_{n_{\mathcal{Y}}}$ resp., i.e., $\mu=(\mu_1,\ldots,\mu_{n_\mathcal{X}})\equiv (\mu(x_1),\ldots,\mu(x_{n_\mathcal{X}}))\in\Delta_{n_{\mathcal{X}}}=\{\omega\in[0,1]^{n_\mathcal{X}}:\sum_{n=1}^{n_\mathcal{X}}\omega_n=1\}$, and analogously for $\nu$. For each $\nu\in\Delta_{n_{\mathcal{Y}}}$, a coupling $\gamma\in\Pi(\mu,\nu)\subseteq\mathcal{P}(\mathcal{X}\times\mathcal{Y})$ describes the strategy of the agents, with the interpretation that $\gamma_{ij}/\mu_i$ is the probability that an agent of type $x_i$ chooses action $y_j$. Again with an abuse of notation, we use $\gamma$ also to denote the matrix in the simplex $\Delta_{n_\mathcal{X}\times n_\mathcal{Y}}$ with entries $\gamma_{ij}=\gamma(x_i,y_j)$, for $i\in\{1,\ldots,n_\mathcal{X}\}, j\in\{1,\ldots,n_\mathcal{Y}\}$.
The cost of each agent does not only depend on its own type and action, but also on the actions of all other agents in a mean-field sense, i.e. it will not depend on single choices of other agents, but on the distribution $\nu$ of their actions (the continuum of agents allows us to consider as indistinguishable the distribution of actions of all agents and that of all agents except one). Specifically, for a distribution of actions $\nu\in\Delta_{n_{\mathcal{Y}}}$ and a vector of costs $k= (k_j)_{j\in\{1,\ldots,n_\mathcal{Y}\}}\in K$ chosen by the principal from a fixed subset $K \subseteq \mathbb{R}^{n_\mathcal{Y}}$, the cost of an agent of type $x_i$, $i\in\{1,\ldots,n_\mathcal{X}\}$, choosing action $y_j$, $j\in\{1,\ldots,n_\mathcal{Y}\}$, is given by \begin{equation}\label{eq.totcost} C_{ij}[\nu,k] := c_{ij} + k_j + f_j(\nu_j) + \sum_{a=1}^{n_\mathcal{Y}} \theta_{aj}\nu_a. \end{equation} Here: $c=(c_{ij})_{i\in\{1,\ldots,n_\mathcal{X}\}, j\in\{1,\ldots,n_\mathcal{Y}\}}\in\mathbb{R}^{n_\mathcal{X}\times n_\mathcal{Y}}$ takes care of the part of the cost depending on both type and action of such agent; $k_j$ is paid for action $j$ independently of the type and of other agents actions; and last two components consider the interactions with the other agents, with $f_j: [0,1] \rightarrow \mathbb{R}$ nondecreasing and continuous functions, reflecting the fact that choosing a more popular action (within agents of same type) is more costly, and $(\theta_{aj})_{a,j\in\{1,\ldots,n_\mathcal{Y}\}}\in\mathbb{R}^{n_\mathcal{Y}\times n_\mathcal{Y}}$ symmetric matrix so that the last term reflects the cost related to actions of agents of different type. Finally, the principal faces a cost $G(\nu,k)$ that depends on the chosen vector of costs $k$ and on the agents' actions through the distribution $\nu$. The function $G: \Delta_{n_\mathcal{X}\times n_\mathcal{Y}}\rightarrow \mathbb{R}$ is assumed to be continuous. A possible interpretation of the different role played by the terms in \eqref{eq.totcost} is illustrated by the following classical example. \begin{example} \label{ex:SCNE} Consider a big company with many employees for which vacation times have to be coordinated. The possible vacation times are $\mathcal{Y} = \{1, \ldots, n_{\mathcal{Y}}\}$, which we could interpret as weeks. The employees differ through their preferences for these time slots because they have kids in school, prefer travelling in summer or winter, etc. We assume that there are finitely many types $\mathcal{X} = \{1, \ldots, n_\mathcal{X}\}$. The cost of the individual agents is given by \[
C_{ij}[\nu,k] := c_{ij} + k_j + f_j(\nu_j) + \sum_{a=1}^{n_\mathcal{Y}} g(|a-j|) \nu_a, \]
where $g$ is a decreasing function and $f_j, j=1, \ldots, n_\mathcal{Y}$ are strictly increasing and continuous functions. The components are interpreted as follows: the first is the cost of taking vacation in week $j$ for an agent of type $i$; $k_j$ is an additional cost charged by the employer for agents that pick week $j$; the last two terms capture the effect that the workload is increasing the more agents are on holiday in the same week (captured by $f_j(\nu_j)$) or in the weeks that are close (captured by $g(|a-j|) \nu_a$, since $g(|a-j|)$ is smaller the larger the distance of $a$ and $j$ is). The principal's cost function reads as $G(\nu) = \sum_{j=1}^n \nu_j^2$, expressing the preference that not too many employees are on vacation at the same time. This cost function does not depend on $k$ reflecting the fact that employers can increase or reduce the costs of the agents, which are measured in utility, by measures which are not costly for themselves.
$\diamond$ \end{example}
In what follows we will use $p_1$ and $p_2$ for the projections into first and second marginal of a measure, so that $\gamma\in\Pi(\mu,\nu)$ satisfies $p_1\#\gamma=\mu$ and $p_2\#\gamma=\nu$. We will denote by $\Pi(\mu,\cdot)$ the set of measures $\gamma$ with $p_1\#\gamma=\mu$ and any second marginal, and by $\Pi(\cdot,\nu)$ the set of measures $\gamma$ with $p_2\#\gamma=\nu$ and any first marginal. Similar notation will be used when one of the marginal is not fixed in the $\text{OT}$ problem.
\begin{definition}[SCNE]\label{def.SCNE} For $k \in K$, a strategy $\gamma^k$ is said \emph{optimal for $k$}, or a \emph{Cournot-Nash equilibrium} (CNE) w.r.t. $k$, if it is optimal for the problem \begin{equation}\label{eq.CNE} \inf_\gamma\, C[\nu^k,k] \cdot \gamma = \inf_\gamma \sum_{i=1}^{n_\mathcal{X}}\sum_{j=1}^{n_\mathcal{Y}} C_{ij}[\nu^k,k]\gamma_{ij}, \end{equation} where $\nu^k=p_2\#\gamma^k$, and the minimization is run over all $\gamma\in \Pi(\mu,\cdot)$.
A pair $(\gamma^{k^*}, k^*)\in\Pi(\mu,.)\times K$ is a \emph{Stackelberg-Cournot-Nash equilibrium (SCNE)} if it satisfies the two following conditions: \begin{itemize} \item[(i)] $\gamma^{k^*}$ is a CNE w.r.t. $k^*$, \item[(ii)] $G(\nu^{k^*},k^*) \le G(\nu^k, k)$ for all $k\in K$ and $\gamma^k$ CNE w.r.t. $k$. \end{itemize} \end{definition} The fact that, for a fixed $k\in K$, the solution to problem \eqref{eq.CNE} is a Nash equilibrium for the agents, is easily seen considering that we are in a setting with a continuum of agents, so that a change of action by one of them would not change the distribution of actions; see \cite{acciaio2021cournot}. This gives condition (i) of the SCNE. Condition (ii) expresses the Stackelberg equilibrium in a principal-agent game, that is, the situation where the principal optimizes over a set of possible choices (here $k\in K$), knowing how agents would optimally act w.r.t. each choice (here $\gamma^k$).
\subsection{The Optimization Problem of the Agents.} \label{sec:game_continuum} In what follows we will relate the optimization problem for the agents to an equivalent variational problem related to optimal transport. Let us define the energy function $\mathcal{E}:\Delta_{n_{\mathcal{Y}}}\to\mathbb{R}$ as \[ \mathcal{E}[\nu] := \sum_{j=1}^{n_\mathcal{Y}} F_j(\nu_j) + \frac{1}{2} \sum_{a,j=1}^{n_\mathcal{Y}} \theta_{aj}\nu_a\nu_j, \] with $F_j(t):= \int_0^t f_j(s) \mathrm{d} s$. Then the variational problem of interest is given by \begin{equation} \label{eq:VariationalProblem} \inf_{\nu \in \mathcal{P}(\mathcal{Y})} \left\{ \text{OT}(\mu,\nu,c) + k \cdot \nu + \mathcal{E}[\nu]\right\}. \end{equation}
As for standard Cournot-Nash games, we can relate Cournot-Nash equilibria to the variational problem \eqref{eq:VariationalProblem}.
\begin{prop}[\cite{acciaio2021cournot},Theorem 3.4] \label{thm:PopulationGame} Assume that $\mathcal{E}$ is convex. Then $\gamma^k\in\Pi(\mu,\cdot)$ is a CNE w.r.t. $k$ if and only if $\nu^k=p_2\#\gamma^k$ solves \eqref{eq:VariationalProblem} and $\gamma^k$ is an optimizer for $\text{OT}(\mu, \nu^k,c)$. \end{prop}
As in \cite{BlanchetCNFinite}, we define \begin{align*} \overline{\OT}(\mu, \nu, c) := \begin{cases} \text{OT}(\mu,\nu,c), &\text{if } \nu \in \mathcal{P}(\mathcal{Y}) \\ \infty, &\text{else.} \end{cases} \end{align*} The reason for this is to have a function $\overline{\OT}(\mu, \cdot, c)$ defined on the whole $\mathbb{R}^{n_\mathcal{Y}}$, so that one can apply classical results from convex analysis.
A special role in our results will be played by the subdifferential of $F=\text{OT}(\mu, \cdot , c)$ or $F=\overline{\OT}(\mu, \cdot, c)$, which is defined by \[ \partial F(\nu)=\left\{ f\in \mathbb{R}^{n_\mathcal{Y}} : F(\nu) - f\cdot \nu \leq F(\eta) - f\cdot \eta,\quad \forall\ \eta\in\mathcal{P}(\mathcal{Y}) \right\}. \] Note that, for $\nu\notin\mathcal{P}(\mathcal{Y})$ we have $\partial_\nu \overline{\OT}(\mu,\nu, c)=\emptyset$, while for $\nu\in\mathcal{P}(\mathcal{Y})$ we have $\partial_\nu \overline{\OT}(\mu,\nu, c)=\partial_\nu \text{OT}(\mu,\nu, c)$. As an immediate consequence, we obtain the following result. \begin{prop} \label{prop:NecessarySufficient} Assume that $\mathcal{E}$ is convex. Then $\nu\in\mathcal{P}(\mathcal{Y})$ is a minimizer of \eqref{eq:VariationalProblem} if and only if \[ 0 \in \partial_\nu \text{OT}(\mu,\nu, c) + k + \nabla \mathcal{E}[\nu]. \] If $\mathcal{E}$ is strictly convex, then there is a unique optimizer of \eqref{eq:VariationalProblem}. \end{prop}
\subsection{The Optimization Problem of the Principal.} For any fixed vector of costs $k\in K$, the optimization problem of the population, that is \eqref{eq.CNE}, has been reduced to the variational problem \eqref{eq:VariationalProblem}. Let us write \[ \text{BR}: K \rightarrow 2^{\mathcal{P}(\mathcal{Y})} \] for the set-valued map that maps $k$ to the set of all optimizers of \eqref{eq:VariationalProblem}. By Proposition~\ref{prop:NecessarySufficient}, whenever $\mathcal{E}$ is strictly convex, the optimizer $\nu^k$ of \eqref{eq:VariationalProblem} is unique. Hence, in this case the map $\text{BR}$ is a function.
\begin{theorem} \label{thm:BRContinuous} Assume that $\mathcal{E}$ is convex. Then the map $\text{BR}$ has a closed graph. \end{theorem}
\begin{proof} In order to show that $\{(k, \nu): \nu \in \text{BR}(k)\}$ is closed, it suffices to prove that, for any sequence $(k^n)_n$ that converges to $k$ and any sequence $(\nu^n)_n$ that converges to $\nu$, with $\nu^n \in \text{BR}(k^n)$, we have that $\nu \in \text{BR}(k)$. By Proposition~\ref{prop:NecessarySufficient}, the values $m^n := -k^n - \nabla \mathcal{E}[\nu^n]$ satisfy $m^n \in \partial_\nu \overline{\OT}(\mu,\nu^n, c)$. Since $(k^n)_n$ and $(\nu^n)_n$ are converging sequences and $\mathcal{E} \in \mathcal{C}^1$, we obtain that $m^n$ converges towards $m:=-k-\nabla \mathcal{E}[\nu]$. Since the map $\text{OT}(\mu, \cdot, c)$ is lower semicontinuous (see \cite[p.13]{acciaio2021cournot}), the subdifferential $\partial_\nu \overline{\OT}(\mu, \cdot, c)$ is upper semicontinuous (see \cite[p. 55]{VillaniTopics}). Thus, $m \in \partial_\nu \overline{\OT}(\mu, \nu, c)$. This is equivalent to \[ 0 \in \partial_\nu \overline{\OT}(\mu, \nu, c) + k + \nabla \mathcal{E}[\nu], \] which shows that $\nu$ is indeed the optimal response of the population to the vector of costs $k$, i.e. $\nu \in \text{BR}(k)$. \end{proof}
\begin{prop} Let $\mathcal{E}$ be convex and $K$ be compact. Then a SCNE exists. \end{prop}
\begin{proof} The statement follows directly from Theorem~\ref{thm:BRContinuous}, since the latter implies that the infimum \[ \inf_{\{(k,\nu): k \in K, \nu \in \text{BR}(k)\}} G(\nu,k) \] is attained. \end{proof}
\begin{remark} We highlight that this result crucially relies on the link of optimal transport and the Cournot-Nash equilibria. Indeed, standard techniques from game theory yield only an existence result and only for the game without the principal. However, for our proof the characterization of the equilibria as solutions to a variational problem is a cornerstone of the analysis.
$\diamond$ \end{remark}
As a next step we provide a helpful reformulation of the optimization problem for the principal.
\begin{theorem} \label{thm:OptimizationPrincipal} Assume that $\mathcal{E}$ is convex. Then it holds that \[ \inf_{k \in K} \inf_{\nu \in \text{BR}(k)} G(\nu,k) = \inf_{\nu \in \mathcal{P}(\mathcal{Y})} \inf_{k \in \left(-\partial_\nu\text{OT}(\mu,\nu, c) - \nabla \mathcal{E}(\nu) \right) \cap K} G(\nu,k), \] with the convention that $\inf \emptyset = \infty$. \end{theorem}
\begin{proof} By \cite[Theorem 23.5]{Rockafellar}, $k \in -\partial_\nu \text{OT}(\mu, \nu, c) -\nabla \mathcal{E}[\nu]=-\partial_\nu \overline{\OT}(\mu, \nu, c) -\nabla \mathcal{E}[\nu]$ is equivalent to the fact that the function \[ \tilde{\nu} \mapsto k\cdot \tilde{\nu} + \overline{\OT}(\mu, \tilde{\nu}, c) + \mathcal{E}[\tilde{\nu}] \] achieves its infimum for $\tilde{\nu}=\nu$. \end{proof}
An important class of models are those where the reward of the principal reads $G(\nu,k)=G(\nu)$ for some function $G: \Delta_{n_{\mathcal{Y}}} \rightarrow \mathbb{R}$. This is reasonable if the costs of the agents are measured in terms of utility and the principal can influence these costs in such a way that it is costless for itself, see Example~\ref{ex:SCNE}.
For this class of models, we obtain an existence result that even yields the possibility to compute CNEs.
\begin{corollary} \label{cor:SCNEex} Assume that $\mathcal{E}$ is convex. Let $G(\nu,k)=G(\nu)$ for all $\nu \in \Delta_{n_{\mathcal{Y}}}$, $k \in K$. Assume that $\nu^\ast$ is a minimizer of $G$ and that $\gamma^\ast$ is an optimizer for $\text{OT}(\mu,\nu^\ast, c)$. Then, for any $k^\ast \in \left(-\partial_\nu\text{OT}(\mu,\nu^\ast,c) - \nabla \mathcal{E}(\nu^\ast) \right) \cap K$, the pair $(k^\ast, \gamma^\ast)$ is a SCNE. In particular, if $\nu^\ast \in \text{ri}(\Delta_{n_{\mathcal{Y}}})$ and $K=\mathbb{R}^{n_\mathcal{Y}}$, then a SCNE exists. \end{corollary}
\begin{proof} For the first part of the claim, by Theorem~\ref{thm:OptimizationPrincipal} it suffices to show that $(k^\ast, \gamma^\ast)$ is an optimizer of \[\inf_{\nu \in \mathcal{P}(\mathcal{Y})} \inf_{k \in \left(-\partial_\nu\text{OT}(\mu,\nu, c) - \nabla \mathcal{E}(\nu) \right) \cap K} G(\nu).\] Since the function $G$ does not depend on $k$, this means (recall the convention $\inf \emptyset =\infty$) that the optimizer of $G(\nu)$ over all $\nu \in \mathcal{P}(\mathcal{Y})$ such that $\left(-\partial_\nu\text{OT}(\mu,\nu, c) - \nabla \mathcal{E}(\nu) \right) \cap K$ is non-empty. By assumption, $\nu^\ast$ is such a minimizer and $k^\ast$ is one minimizer for the inner infimum. The second part of the claim follows since $\overline{\OT}$ is a proper closed convex function and such functions are subdifferentiable in the relative interior of the domain $\Delta_{n_{\mathcal{Y}}}$; see \cite[Theorem 23.4]{Rockafellar}. \end{proof}
\subsection{The Subdifferential of $\text{OT}$.} Given the reformulation of the principal's problem in Theorem~\ref{thm:OptimizationPrincipal}, understanding the subdifferential of the optimal transport problem turns out to be an essential step in order to solve the SCNE problem. We will see in Theorem~\ref{thm.subdiffOT} below that this is tightly related to the optimizers of the dual problem.
Recall that the so-called $c$-transform of the function $k$, denoted by $k^c$, is given by \[ k^c_i:=\min_{j\leq n_\mathcal{Y}}c_{ij}-k_j,\quad i=1,\ldots,n_\mathcal{X}. \] This clearly satisfies the constraint \[ k^c_i+k_j\leq c_{ij}\quad \forall i\leq n_\mathcal{X}, j\leq n_\mathcal{Y}, \] so that the pair $(k^c,k)$ is feasible for the dual problem $\text{DOT}(\mu,\nu,c)$.
\begin{theorem}\label{thm.subdiffOT} Let $\nu \in \Delta_{n_{\mathcal{Y}}}$. Then $k\in K$ satisfies $k \in \partial_\nu \text{OT}(\mu,\nu, c)$ if and only if $(k^c,k)$ is an optimizer for $\text{DOT}(\mu,\nu, c)$, i.e. \[ k^c\cdot\mu+k\cdot\nu=\max\left\{\varphi\cdot\mu+\psi\cdot\nu: \varphi\in\mathbb{R}^{n_\mathcal{X}},\psi\in\mathbb{R}^{n_\mathcal{Y}}, \varphi_i+\psi_j\leq c_{ij}\ \forall i\leq n_\mathcal{X}, j\leq n_\mathcal{Y}\right\}. \] \end{theorem} We remark that this result has been proved for general probability measures supported on a compact subset of $\mathbb{R}^d$ in \cite[Proposition 7.17]{santambrogio2015optimal}. Here we present a simpler proof for our discrete setting.
\begin{proof} Let us first fix $k \in \partial_\nu \text{OT}(\mu,\nu,c)$. Then we have for any $\eta \in \Delta_{n_{\mathcal{Y}}}$ and any $\gamma \in \Pi(\mu,\eta)$ that \begin{equation} \label{eq:SubgradientKantorovichProof} \text{OT}(\mu,\nu, c) - k \cdot \nu \le \text{OT}(\mu, \eta, c) - k\cdot \eta \le c \cdot\gamma - k\cdot \eta. \end{equation} Now, for any $i\leq n_\mathcal{X}$, choose $j^{(i)}$ such that \[ j^{(i)} \in \text{argmin}_{j\leq n_\mathcal{Y}} \{c_{ij}-k_j\} \] and set $\gamma\in\Pi(\mu,\cdot)$ via \[ \gamma_{ij} = \begin{cases} 0, & \text{for $j \neq j^{(i)}$} \\ \mu_i, & \text{for $j = j^{(i)}$}. \end{cases} \] Letting $\eta=p_2\#\gamma$, by \eqref{eq:SubgradientKantorovichProof} we get \begin{align*} \text{OT}(\mu,\nu, c) - k\cdot \nu &\le c \cdot\gamma - k\cdot \eta = \sum_{i \leq n_\mathcal{X}} c_{ij^{(i)}} \mu_i - \sum_{j \leq n_\mathcal{Y}} k_j \sum_{i \leq n_\mathcal{X}: j^{(i)}=j} \mu_i \\ &= \sum_{i \leq n_\mathcal{X}} c_{ij^{(i)}} \mu_i - \sum_{i \leq n_\mathcal{X}} k_{j^{(i)}} \mu_i = \sum_{i \leq n_\mathcal{X}} \left( c_{ij^{(i)}}-k_{j^{(i)}}\right) \mu_i \\ &= \sum_{i \leq n_\mathcal{X}} k^c_i \mu_i = k^c \cdot \mu. \end{align*} This implies $\text{OT}(\mu, \nu, c) \le k\cdot \nu + k^c\cdot \mu$. Since $\text{OT}(\mu, \nu, c) \ge k\cdot \nu + k^c\cdot \mu$ is true by the Kantorovich duality, equality follows, thus $(k^c,k)$ is an optimizer for $\text{DOT}(\mu,\nu, c)$.
To show the converse implication, we now assume that $(k^c,k)$ is an optimizer for $\text{DOT}(\mu,\nu, c)$. This yields $\text{OT}(\mu, \nu, c) -k\cdot \nu = k^c \cdot \mu$. On the other hand, for any $\eta \in \Delta_{n_{\mathcal{Y}}}$, we have $\text{OT}(\mu, \eta, c) \ge k\cdot \eta + k^c\cdot \mu$ by the Kantorovich duality. Hence, we obtain \[ \text{OT}(\mu, \eta, c) - k\cdot \eta \ge k^c\cdot \mu =\text{OT}(\mu, \nu, c) - k\cdot \nu,\quad \text{for any $\eta \in \Delta_{n_{\mathcal{Y}}}$}. \] Thus $k$ lies in the subdifferential $\partial_\nu \text{OT}(\mu,\nu,c)$. \end{proof}
\begin{remark}
Theorem~\ref{thm.subdiffOT} allows us to formulate a more general version of Corollary~\ref{cor:SCNEex}. Namely, we obtain that, when $\mathcal{E}$ is convex, $G(\nu,k) = G(\nu)$ for all $\nu \in \Delta_{n_{\mathcal{Y}}}$, $k \in K$, $\nu^\ast \in \text{ri}(\Delta_{n_\mathcal{Y}})$, and $K = \mathbb{R}{n_\mathcal{Y}}_+$, then a SCNE exists. Indeed, by \cite[Theorem 23.4]{Rockafellar}, an element $k \in -\partial_\nu\text{OT}(\mu,\nu^\ast, c) - \nabla \mathcal{E}(\nu^\ast)$ exists. Since $k + c\cdot 1 \in \partial_\nu\text{OT}(\mu,\nu^\ast, c) - \nabla \mathcal{E}(\nu^\ast)$ for all $c \in \mathbb{R}$, we can find $c\in \mathbb{R}$ such that $k + c \cdot 1 \in \mathbb{R}_+^{n_\mathcal{Y}}$. Hence, $(\nu^\ast, k+c \cdot 1)$ is indeed a SCNE by Theorem~\ref{thm:OptimizationPrincipal}. \end{remark}
\subsection{Approximation Result.} This section is devoted to the discussion of approximation results for the principal and the agents' problems. We will show that, under some assumptions, up to an error that can be made as small as wanted, both principal and agents can compute (very efficiently) regularized transport problems rather than the original ones; see Remark~\ref{rem:approx}.
\begin{prop}\label{prop:approx_pr} Let $K$ be closed and let $\mathcal{E}$ be strictly convex. Assume that $G(\nu, k) = G(\nu)$ is a continuous function. Let $\nu^\ast$ be a minimizer of $G$. Moreover, assume that $k^\varepsilon \in \left(-\partial_\nu\text{OT}^\varepsilon(\mu, \nu^\ast)- \nabla \mathcal{E}[\nu^\ast]\right) \cap K$ with $k^\varepsilon (x_0) = 0$ for all $\varepsilon>0$. Then there is $k^\ast\in K$ s.t. $\nu^\ast = \text{BR}(k^\ast)$ and $k^\varepsilon \rightarrow k^\ast$. Moreover, we have \[ G(\text{BR}(k^\varepsilon)) \rightarrow G(\text{BR}(k^\ast)). \] \end{prop}
\begin{proof} We first note that, since $\mathcal{E}$ is strictly convex, the map $\text{BR}$ is a function. By Theorem~\ref{thm:LimitEntropic} and Theorem~\ref{thm.subdiffOT}, $k^\varepsilon \rightarrow k^\ast$ with $k^\ast \in -\partial_\nu\text{OT}(\mu, \nu^\ast)- \nabla \mathcal{E}[\nu^\ast]$. Since $K$ is closed, $k^\ast\in K$. Moreover, $\nu^\ast = \text{BR}(k^\ast)$ by Proposition~\ref{prop:NecessarySufficient}. Finally, by Theorem~\ref{thm:BRContinuous} the map $\text{BR}$ has a closed graph, so the claim follows from continuity of $G$. \end{proof}
\begin{remark}\label{rem:approx} When the set $K$ of cost vectors and the principal's cost function $G$ are as in Proposition~\ref{prop:approx_pr}, the principal can look for a cost vector $k^\varepsilon$ in $\left(-\partial_\nu\text{OT}^\varepsilon(\mu, \nu^\ast)- \nabla \mathcal{E}[\nu^\ast]\right) \cap K$ rather than in $\left(-\partial_\nu\text{OT}(\mu, \nu^\ast)- \nabla \mathcal{E}[\nu^\ast]\right) \cap K$, for $\varepsilon$ small enough so that $G(\text{BR}(k^\varepsilon))$ is as close as wanted to the optimal value $G(\text{BR}(k^\ast))$. The reason for doing this is the fact that computing $\partial_\nu\text{OT}^\varepsilon(\mu, \nu^\ast)$ is more efficient than computing $\partial_\nu\text{OT}(\mu, \nu^\ast)$. Indeed, $\partial_\nu \text{OT}^\varepsilon(\mu,\nu^\ast)$ is the dual optimizer $\psi^\varepsilon$ for the regularized problem, that can be approximated via the Sinkhorn algorithm (the scaling variables satisfy $(u,v)=(e^{\varphi^\varepsilon/\varepsilon)}, e^{\psi^\varepsilon/\varepsilon})$, see \cite[Propositions 4.4 and 4.6]{ComputationalOT}).
Note also that, by offering $k^\varepsilon$ rather than $k^*$ to the agents, they also achieve a result as close as wanted to their optimal value, for $\varepsilon$ small enough. This follows by Proposition~\ref{thm:PopulationGame} and Theorem~\ref{thm:BRContinuous}, since the optimal transport problem is stable w.r.t. its marginals, see \cite{ghosal2022stability}. Furthermore, again for efficiency reasons, agents can as well decide to solve a regularized OT problem rather than the original one in Proposition~\ref{thm:PopulationGame}, and get as closed as wanted to their optimal value; see \cite{acciaio2021cournot}.
$\diamond$ \end{remark}
For general functions $G(\nu,k)$, i.e. depending on the second variable as well, it might happen that considering only the entropic regularization yields to larger costs than necessary, since it might happen that, in the set of all dual optimizers, better candidates can be found. An illustration of this phenomenon is given in the next example.
\begin{example} Consider $\mathcal{X}= \mathcal{Y} = \{1,2\}$, $\mu = \tfrac{1}{2} \delta_{\{1\}} + \tfrac{1}{2} \delta_{\{2\}}$, $c= 1_{\{(1,1), (2,2)\}} + 2 \cdot 1_{\{(1,2), (2,1)\}}$, $\mathcal{E} \equiv 0$, $K=\{(1,x):x \in \mathbb{R}\}$ and \[ G(\nu, k) = \nu_1^2 + \nu_2^2 + \left(\tfrac{1}{2} + k_2\right )^2,\quad \nu\in\Delta_{n_\mathcal{Y}}. \] For any $\nu\in\Delta_{n_\mathcal{Y}}$, we denote by $(\varphi^\ast(\nu),\psi^\ast(\nu))$ the optimizer for $\text{DOT}(\mu, \nu, c)$ that is the unique limit of the dual entropic optimizers. Note that, by Proposition~\ref{prop:unique}, for all $\nu\in\mathbb{R}^2$ except $\hat\nu = (\tfrac{1}{2}, \tfrac{1}{2})$ there is a unique dual optimizer, that clearly coincides with $(\varphi^\ast(\nu),\psi^\ast(\nu))$. This dual optimizer reads as $\varphi^\ast(\nu)=(0,1)$, $\psi^\ast(\nu)=(1,2)$ if $\nu_1<\tfrac{1}{2}$, and $\varphi^\ast(\nu)=(0,1)$, $\psi^\ast(\nu)=(1,0)$ if $\nu_1>\tfrac{1}{2}$. The set of all dual optimizers given $\hat\nu$ in $K$ is given by $\varphi = (0,x)$, $\psi = (1,1-x)$ with $x \in [-1,1]$. Hence, by Theorem~\ref{thm:LimitEntropic} we have that $\psi^\ast (\hat\nu) = (1,1)$. Since $\nu_1^2+\nu_2^2\ge \tfrac{1}{2}$, we now obtain that \begin{align*}
G(\nu, -\psi^\ast(\nu)) \ge \begin{cases}
\tfrac{1}{2} + \frac{9}{4}, &\text{if } \nu_1<\tfrac{1}{2} \\
\tfrac{1}{2} + \frac{1}{4}, &\text{if } \nu_1=\tfrac{1}{2} \\
\tfrac{1}{2} + \frac{1}{4}, &\text{if } \nu_1>\tfrac{1}{2}
\end{cases}. \end{align*} However, choosing $k=(-1, -\tfrac{1}{2})$ yields $G(\hat{\nu},k)=\tfrac{1}{2} < \tfrac{3}{4} \le G(\nu, \psi^\ast(\nu))$ for all $\nu \in\Delta_{n_\mathcal{Y}}$. Let us now assume that the principal chooses the vector of costs relying on regularized transport problems, i.e. chooses a pair $(\nu, -\psi^\varepsilon(\nu))$. Then, for $\varepsilon$ sufficiently small, the pair $(\nu,-\psi^\varepsilon(\nu))$ is close to $(\nu, -\psi^\ast(\nu))$. Since $G$ is continuous, also the cost $G(\nu,k)$ will be close to the cost $G(\nu, \psi^\ast(\nu)) \ge \frac{3}{4}$. Hence, the cost will be substantially larger than $\tfrac{1}{2}=G(\hat{\nu},k)$. All in all, this shows that relying on regularized transport problems to compute the optimal vector of costs can lead to choices that are not close to the optimal ones.
$\diamond$ \end{example}
\subsection{Numerics.}\label{sect.num} In this section we come back to Example~\ref{ex:SCNE} and illustrate how the presence of a principal affects the choice of the agents. Here, we choose $\mathcal{X}=\{1,2\}$, $\mathcal{Y} = \{1, \ldots, 10\}$, $f_j(x)=x^2$ for all $j \in \mathcal{Y}$, $(g(k))_{k \in \mathcal{Y}} = (2,1,0.5,0.25,0.1,0.05,0.02,0,0,0)$ and \[ c= \begin{pmatrix}
5&4&3&2&1&1&2&3 \\
5&1&5&5&1&1&1&5
\end{pmatrix}. \]
Since $f_j(\nu_j)$ is differentiable, $g$ is non-negative and $2\sum_{k=1}^{ \lfloor n_\mathcal{Y}/2 \rfloor} g(k)< g(0)$, which implies that $(g(|a-j|))_{a,j \in \{1, \ldots, n_\mathcal{Y}\}}$ is positive definite, $\mathcal{E}[\nu]$ is strictly convex. Hence, we can compute equilibria for the game with and without a principal by applying the results established in the previous subsections.
Let us first compute the Cournot-Nash equilibrium of the game without a principal, that is where the costs of the agents are given by \eqref{eq.totcost} with $k=0$. For this we note that, as described in Section~\ref{sec:game_continuum}, it suffices to find the minimizer $\nu^\ast$ of the convex optimization problem \[ \inf_{\nu \in \mathcal{P}(\mathcal{Y})} \{\text{OT}(\mu,\nu,c) + \mathcal{E}[\nu]\}, \] as the CNE is then given by a primal optimizer $\gamma^\ast$ of $\text{OT}(\mu,\nu^\ast,c)$. As established in \cite{acciaio2021cournot}, considering the regularized problem yields good approximations of $\gamma^\ast$.
For the game with a principal, we note that the minimizer of the cost function $G$ reads as $\nu^\ast = (\frac{1}{n_\mathcal{Y}}, \ldots, \frac{1}{n_\mathcal{Y}})$. Hence, by Corollary~\ref{cor:SCNEex}, a SCNE is given by $(\gamma^\ast, k^\ast)$, where $\gamma^\ast$ is an optimizer for $\text{OT} (\mu, \nu^\ast,c)$ and $k^\ast \in \left(- \partial_\nu \text{OT}(\mu,\nu^\ast, c) - \nabla \mathcal{E}[\nu^\ast]\right)$. By Proposition~\ref{prop:approx_pr} and Remark~\ref{rem:approx}, we obtain good approximations of $\gamma^\ast$ and $k^\ast$ by computing the optimizers $ \gamma^\varepsilon$ of $\text{OT}_\varepsilon (\mu, \nu^\ast,c)$ and $ k^\varepsilon \in \left( -\partial_\nu \text{OT}_\varepsilon(\mu,\nu^\ast, c) - \nabla \mathcal{E}[\nu^\ast]\right)$.
We report in Tables \ref{table:CNE} and \ref{table:SCNE}, respectively, the approximate equilibrium strategy $\gamma^\varepsilon$ and the associated approximate equilibrium distribution $\nu^\varepsilon = p_2\#\gamma^\varepsilon$, in the case without a principal (that is taking $k=0$) and with a principal (optimizing over $k$), respectively. Furthermore, for the game with a principal, we report the principal's choice of costs $k^\varepsilon$.
\begin{table}[h] \centering
\begin{tabular}{c|cccccccc}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
$\gamma^\varepsilon(1,y)$ & 0.0019 & 0.0000 & 0.0030 & 0.0030 & 0.1555 & 0.1336 & 0.0001 & 0.0030 \\
$\gamma^\varepsilon(2,y)$ & 0.0011 & 0.3115 & 0.0000 & 0.0000 & 0.0868 & 0.0746 & 0.2260 & 0.0000 \\
\hline
$\nu^\varepsilon(y)$ & 0.0030 & 0.3115 & 0.0030 & 0.0030 & 0.2423 & 0.2082 & 0.2261 & 0.0030 \\
\end{tabular}
\caption{Cournot-Nash equilibrium for the game without a principal}
\label{table:CNE} \end{table}
\begin{table}[h] \centering
\begin{tabular}{c|cccccccc}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
$k^\varepsilon(y)$ & -2.0357 & 1.8393 & -2.1003 & -1.1610 & 1.7543 & 1.7793 & 1.8393 & -1.9153 \\
\hline
$\gamma^\varepsilon(1,y)$ & 0.0000 & 0.0000 & 0.0875 & 0.1250 & 0.0000 & 0.0000 & 0.0000 & 0.0875 \\
$\gamma^\varepsilon(2,y)$ & 0.1250 & 0.1250 & 0.0375 & 0.0000 & 0.1250 & 0.1250 & 0.1250 & 0.0375 \\
\hline
$\nu^\varepsilon(y)$ & 0.1250 & 0.1250 & 0.1250 & 0.1250 & 0.1250 & 0.1250 & 0.1250 & 0.1250 \\
\end{tabular}
\caption{Stackelberg-Cournot-Nash equilibrium for the game with a principal}
\label{table:SCNE} \end{table} As expected, the presence of a principal in this game has a clear effect. To wit, in the game without a principal most agents choose one of the actions in $\mathcal{Y}_\text{good}:=\{2,5,6,7\}$, which yields $G(\nu^\varepsilon)=0.2502$, whereas in the game with a principal, the equilibrium distribution satisfies $G(\nu^\varepsilon)=0.125$. That the equilibrium distribution in this case is uniform over the actions is clearly due to the principal's choice of $k$, which means that an agent choosing an action from $\mathcal{Y}_\text{good}$ is charged an additional cost, whereas an agent choosing a non-preferred action from $\mathcal{Y}\setminus \mathcal{Y}_\text{good}$ faces reduced costs.
\end{document} | arXiv |
Music teachers challenge students to listen and participate.
English and History teachers invite students to journey in other worlds.
Art and Drama teachers offer students opportunities to explore.
What are we to offer students if they are to function mathematically?
What do you understand by 'functioning mathematically'?
What characteristic behaviours do your highly achieving mathematicians exhibit?
What do these behaviours look like in practice?
Many numbers can be expressed as the sum of two or more consecutive integers.
Look at numbers other than 15 and find out all you can about writing them as sums of consecutive whole numbers.
Odd numbers can be written as two consecutive numbers.
Multiples of $3$ can be written as three consecutive numbers.
Even numbers can be written as four consecutive numbers.
Multiples of $3$ can be written as the sum of three consecutive numbers.
Multiples of $5$ can be written as the sum of five consecutive numbers.
If you give me any multiple of three, I can tell you the three numbers by dividing by three and that will be the middle number.
If you give me three consecutive numbers I can always turn them into a multiple of three.
Same will apply to five, seven, nine and any odd number because you can pair off numbers on either side of the middle number.
Can anyone think of a counter example?
What might you try next?
Is there a way you could organise your findings?
Drawing attention to and valuing process as well as outcome.
How can you generate questions that promote these HOTS in a mathematical context?
If the area of a rectangle is $24$ cm² and the perimeter is $22$ cm, what are its dimensions?
What if the area of a rectangle (in cm²) is equal to the perimeter (in cm), what could its dimensions be?
Find a rectangle which has unit sides and a perimeter of $100$ .
How many answers are there and how do you know you've got them all?
Find the area and perimeter of a $3cm \times 8cm$ rectangle.
If the area of a rectangle is $24$cm² and the perimeter is $22$cm, what are its dimensions? How did you work this out?
"A teacher of mathematics has a great opportunity. If he fills his allotted time with drilling his students in routine operations he kills their interest, hampers their intellectual development, and misuses his opportunity. But if he challenges the curiosity of his students by setting them problems proportionate to their knowledge, and helps them to solve their problems with stimulating questions, he may give them a taste for, and some means of, independent thinking."
"I don't expect, and I don't want, all children to find mathematics an engrossing study, or one that they want to devote themselves to either in school or in their lives. Only a few will find mathematics seductive enough to sustain a long term engagement. But I would hope that all children could experience at a few moments in their careers...the power and excitement of mathematics...so that at the end of their formal education they at least know what it is like and whether it is an activity that has a place in their future."
This article was originally presented to the Council of Boards of School Education in India Conference,"Addressing Core Issues and Concerns in Science and Mathematics", in Rishikesh, India in April 2007. | CommonCrawl |
\begin{document}
\begin{abstract}
In this paper we give new proofs of the theorem of Ma\'{c}kowiak
and Tymchatyn that every metric continuum is a
weakly-confluent image of some one-dimensional hereditarily
indecomposable continuum of countable weight. The first is a
model-theoretic argument; the second is a topological proof
inspired by the first. \end{abstract} \title{On the Ma\'{c}kowiak-Tymchatyn theorem} \subjclass{Primary 54F15, Secondary 54F50, 54C10, 06D05, 03C98} \keywords{continuum, one-dimensional, hereditarily indecomposable, weakly confluent map, lattice, Wallman representation, inverse limit, model theory} \author{K. P. Hart} \author{B. J. van der Steeg} \address{Faculty of Information Technology and Systems\\
TU Delft\\
Postbus 5031\\
2600~GA{}Delft\\
the Netherlands}
\email[K. P. Hart]{[email protected]}
\email[B.J. van der Steeg]{[email protected]}
\urladdr[K. P.Hart]{http://aw.twi.tudelft.nl/\~{}hart}
\maketitle
\section{Introduction} In~\cite{MT} Ma\'{c}kowiak and Tymchatyn proved that every metric continuum is the continuous image of a one-dimensional hereditarily indecomposable continuum by a weakly confluent map. In~\cite{HvMP} this result was extended to general continua, with two proofs, one topological and one model-theoretic. Both proofs made essential use of the metric result.
The original purpose of this paper was to (re)prove the metric case by model-theoretic means. After we found this proof we realized that it could be combined with any standard proof of the completeness theorem of first-order logic (see e.g., Hodges~\cite{H}, 6.1]) to produce an inverse-limit proof of the general form of the Ma\'{c}kowiak-Tymchatyn result. We present both proofs. The model-theoretic argument occupies sections~\ref{modprelim} and~\ref{modproof}, and the inverse-limit approach appears in section~\ref{topproof}.
We want to take this opportunity to point out some connections with work of Bankston~\cite{B}, who dualized the model-theoretic notions of existentially closed structures and existential maps to that of co-existentially closed compacta and co-existential maps. He proves that co-existential maps are weakly confluent, that co-existentially closed continua are one-dimensional and hereditarily indecomposable, and that every continuum is the continuous image of a co-existentially closed one. The map can in general not be chosen co-existential, because co-existential maps preserve indecomposability and do not raise dimension.
\section{Preliminaries} \subsection{Ma\'{c}kowiak-Tymchatyn theorem} The theorem of Ma\'{c}kowiak and~Tymchatyn, we are dealing with in this paper states that every metric continuum is a weakly confluent image of a one-dimensional hereditarily indecomposable continuum of countable weight.
A continuum is \emph{decomposable} if it can be written as a union of two proper subcontinua, it is called \emph{indecomposable} if this is not the case. We call a continuum \emph{hereditarily indecomposable} if every subcontinuum is indecomposable. This is equivalent to saying that every two subcontinua that meet, one is contained in the other. As in~\cite{HvMP} we can extend this notion for arbitrary compact Hausdorff spaces. So a compact Hausdorff space is hereditarily indecomposable if for every two subcontinua that meet, one is contained in the other. We call a continuous mapping between two continua \emph{weakly confluent} if every subcontinuum in the range is the image of a subcontinuum in the domain.
\begin{theorem}[Ma\'{c}kowiak and Tymchatyn \cite{MT}]
\label{MaTym}
Every metric continuum is a weakly confluent
image of some one-dimensional hereditarily
indecomposable continuum of the same weight. \end{theorem}
In~\cite{HvMP} Hart, van Mill and Pol showed that the Ma\'{c}kowiak and Tymchatyn result above implies the theorem for the non-metric case using model-theoretic means.
\subsection{Wallman space} In the proof we will consider the lattice of closed sets of our metric continuum $X$ and try to find, through model-theoretic means, another lattice in which we can embed our lattice of closed sets of $X$. This new lattice will be a model for some sentences which will make sure that its Wallman representation is a continuum with certain properties. So at the base of the proof is Wallman's generalization, to the class of distributive lattices, of Stone's representation theorem for Boolean algebras. Wallmann's representation theorem is as follows.
\begin{theorem}[\cite{W}] \label{wallman} If $L$ is a distributive lattice, then there is a compact $T_1$ space $X$ with a base for its closed sets that is a homomorphic image of $L$. If $L$ is also disjunctive then we can find a base for its closed sets that is an isomorphic image of $L$. \end{theorem}
We call the space $X$ a Wallman space of $L$ or a Wallman representation of $L$, notation: $wL$.
A lattice $L$ is \emph{disjunctive} if it models the sentence
\begin{equation} \label{disjunctive} \forall a\,b\exists x[(a\sqcap b\not=a)\rightarrow ((a\sqcap x=x)\wedge (b\sqcap x={\bf 0}))]. \end{equation}
Furthermore the space $X$ in theorem~\ref{wallman} is Hausdorff if and only if the lattice $L$ is a normal lattice. We call a lattice normal if it models the sentence
\begin{equation} \label{normal} \forall a\,b\exists x\,y[(a\sqcap b={\bf 0})\rightarrow((a\sqcap x={\bf 0})\wedge (b\sqcap y={\bf 0})\wedge(x\sqcup y={\bf 1}))]. \end{equation}
Note that, if we start out with a compact Hausdorff space $X$ and look at a base for its closed subsets which is closed under finite unions and intersections, i.e., a (normal, disjunctive and distributive) lattice, then the Wallman space of this lattice is just the space $X$.
\begin{remark} From now on we refer to a base for the closed subsets of some topological space which is closed under finite unions and intersections as a lattice base for the closed sets of the space $X$. \end{remark}
The following theorem shows how to create an onto mapping from maps between lattices. In this theorem $2^X$ denotes the family of all closed subsets of the space $X$.
\begin{theorem}\cite{DH} \label{contimage}
Let $X$ and $Y$ be compact Hausdorff spaces and let
$\mathcal{C}$ be a base for the closed subsets of $Y$ that is
closed under finite unions and intersections. Then $Y$ is a
continuous image of $X$ if and only if there is a map
$\phi:\mathcal{C}\rightarrow 2^X$ such that
\begin{enumerate}
\item
$\phi(\emptyset)=\emptyset$, and if $F\not=\emptyset$ then
$\phi(F)\not=\emptyset$
\item
if $F\cup G=Y$ then $\phi(F)\cup\phi(G)=X$
\item
if $F_1\cap\cdots\cap F_n=\emptyset$ then
$\phi(F_1)\cap\cdots\cap\phi(F_n)=\emptyset$.
\end{enumerate} \end{theorem}
So $Y$ is certainly a continuous image of $X$ if there is an embedding of some lattice base of the closed sets of $Y$ into $2^X$.
\subsection{Translation of properties} Our model-theoretic proof of theorem~\ref{MaTym} will be as follows. Given a metric continuum $X$, we will construct a lattice $L$ such that some lattice base of $X$ is embedded into $L$, the Wallman representation $wL$ of $L$ is a one-dimensional hereditarily indecomposable continuum and that for every subcontinuum in $X$ there exists a subcontinuum of $wL$ that is mapped onto it.
For this we need to translate things like being hereditarily indecomposable, being of dimension less than or equal to one and being connected in terms of closed sets only.
To translate hereditarily indecomposability we use the following characterization, due to Krasinkiewicz and Minc.
\begin{theorem}[Krasinkiewicz and Minc]
A compact Hausdorff space is hereditarily indecomposable if and
only if it is crooked between every pair of disjoint closed
nonempty subsets. \end{theorem}
Which the authors translated in~\cite{HvMP} into terms of closed sets only as follows.
\begin{theorem}\cite{HvMP}\label{HerIndec}
A compact Hausdorff space $X$ is hereditarily indecomposable
if and only if whenever four closed sets $C$, $D$, $F$ and~$G$
in $X$ are given such that $C\cap D=C\cap G=F\cap D=\emptyset$
one can write $X$ as the union of three closed sets $X_0$,
$X_1$ and~$X_2$ such that $C\subset X_0$, $D\subset X_2$,
$X_0\cap X_1\cap G=\emptyset$, $X_0\cap X_2=\emptyset$ and
$X_1\cap X_2\cap F=\emptyset$. \end{theorem}
So a compact Hausdorff space is hereditary indecomposable if the lattice $2^X$ models the sentence
\begin{eqnarray}
\lefteqn{\forall a\,b\,c\,d\ \exists
x\,y\,z[((a\sqcap b={\bf 0})\wedge (a\sqcap c={\bf 0})
\wedge (b\sqcap d={\bf 0}))\rightarrow}\label{HI}\\
& & \rightarrow ((a\sqcap(y\sqcup z)={\bf 0})\wedge
(b\sqcap (x\sqcup y)={\bf 0})\wedge (x\sqcap y={\bf 0})\wedge\nonumber\\
& & \wedge (x\sqcap y\sqcap d={\bf 0})\wedge (y\sqcap z\sqcap
c={\bf 0})\wedge (x\sqcup y\sqcup z={\bf 1}))]\nonumber. \end{eqnarray}
A space $X$ is of dimension less than or equal to one if the lattice $2^X$ models the sentence \begin{eqnarray}
\lefteqn{\forall a\,b\,c\ \exists x\,y\,z[(a\sqcap b\sqcap
c={\bf 0})\rightarrow}\label{dim=1}\\
& \rightarrow &((a\sqcap x=a)\wedge (b\sqcap y=b)\wedge
(c\sqcap z=c)\wedge\nonumber\\
& & \wedge (x\sqcap y \sqcap z={\bf 0})\wedge (x\sqcup y\sqcup
z={\bf 1}))]\nonumber. \end{eqnarray}
A space $X$ is connected if the lattice $2^X$ models the sentence $\operatorname{conn}({\bf 1})$, where $\operatorname{conn}(a)$ is shorthand for the formula $\forall x\, y[((x\sqcap y={\bf 0})\wedge (x\sqcup y=a))\rightarrow(x=a)\vee (x={\bf 0}))]$.
\begin{remark} For the next two sections, section~\ref{modprelim} and section~\ref{modproof} we fix some metric continuum~$X$ and we will show there exists a hereditarily indecomposable one-dimensional continuum $Y$ of weight~$\operatorname{w}(X)$ such that $X$ is weakly confluent image of~$Y$. \end{remark}
\section{A continuous image of an hereditarily indecomposable one-dimensional continuum of the same weight}\label{modprelim}
Using theorem~\ref{wallman} and~\ref{contimage} of the previous section we see that to get a hereditarily indecomposable one-dimensional continuum of weight $w(X)$ that maps onto $X$ we must find a countable distributive, disjunctive normal lattice $L$ such that it is a model of the sentences~\ref{HI}, \ref{dim=1} and~$\operatorname{conn}({\bf 1})$, and furthermore some lattice base for the closed sets of $X$ is embedded into this lattice~$L$.
Fix a lattice base $\mathcal{B}$ for the closed sets of $X$.
For some countable set of constants $K$ we will construct a set of sentences $\Sigma$ in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$. We will make sure that $\Sigma$ is a consistent set of sentences such that, if we have a model $\mathfrak{A}=(A,\mathcal{I})$ for $\Sigma$ then
$$L(\mathfrak{A})=\mathcal{I}\beperk K$$
is the universe of some lattice model in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}$ which is normal, distributive and disjunctive and models the sentences~\ref{HI}, \ref{dim=1} and~$\operatorname{conn}({\bf 1})$. To make sure that $\mathcal{B}$ is embedded into $L(\mathfrak{A})$ we simply add the diagram of the lattice $\mathcal{B}$ to the set $\Sigma$ and make sure that there are constants in $K$ representing the elements of $\mathcal{B}$. The interpretations of $\sqcap$, $\sqcup$, ${\bf 0}$ and~${\bf 1}$ are given by there interpretations under $\mathcal{I}$ in the model $\mathfrak{A}$.
Let $K$ be the following countable set of constants
\begin{equation}
\label{K}
K=\bigcup_{-1\leq n<\omega}K_n=\bigcup_{-1\leq n<\omega}\{k_{n,m}:m<\omega\}. \end{equation}
We will define the sentences of $\Sigma$ in an $\omega$-recursion. So $\Sigma$ will be the set $\bigcup_{n<\omega}\Sigma_n$.
For definiteness we define $K_{-1}=\mathcal{B}$ and $\Sigma_0=\triangle_{\mathcal{B}}$, the diagram of $\mathcal{B}$.
\subsection{Construction of $\Sigma$ in $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$}
Suppose we already defined the sentences up to $\Sigma_{5n}$.
\begin{enumerate}
\item
\label{Sigmalattice}
$\Sigma_{5n+1}$ will be a set of sentences that will make sure
that the supremum and infimum of any pair of constants in
$\bigcup_{m\leq 5n}K_m$ are defined, using the new
constants from $K_{5n+1}$.
We will also make sure that the set
$\Sigma_{5n+1}$ makes sure that distributivity holds for
any triple of elements from $\bigcup_{m\leq 5n}K_m$.
And we will make sure that the family of sets of sentences
$\{\Sigma_{5n+1}:n<\omega\}$ will prevent the existence of a
counterexample for $\operatorname{conn}({\bf 1})$ in $\bigcup_{m\leq 5n}K_m$.
\item
\label{Sigmadisjunctive}
$\Sigma_{5n+2}$ will be a set of sentences
that will make sure that for every $a,b\in\bigcup_{m\leq 5n}K_m$ there
exists some $c\in\bigcup_{m\leq n}K_{5m+2}$ such that the
formula that is sentence~\ref{disjunctive} without
quantifiers will hold for these $a$, $b$ and $c$.
\item
\label{Sigmanormal}
$\Sigma_{5n+3}$ will be a set of sentences
that will make sure that for every $a,b\in\bigcup_{m\leq
5n}K_m$ there exist $c,d\in\bigcup_{m\leq n}K_{5m+3}$ such
that the formula that is sentence~\ref{normal} without
quantifiers will hold for these $a$, $b$, $c$ and~$d$.
\item
$\Sigma_{5n+4}$ will be a set of
sentences that will make sure that the according to the
elements of $K_{5n+4}\cup\bigcup_{m\leq 5n}K_m$ the
dimension of the Wallman space of $L(\mathfrak{A})$
for any model $\mathfrak{A}$ of $\Sigma$ will be
less than or equal to one.
\item
\label{SigmaHI}
$\Sigma_{5(n+1)}$ will be a set of
sentences that will make sure that for any
$a,b,c\in\bigcup_{m<5n}K_m$ there exist
$x,y,z\in K_{5(n+1)}$ such that the formula, which is the
sentence~\ref{HI} without quantifiers, holds for this
$a,b,c$ and $x,y,z$. \end{enumerate}
We now show how to define the sets of sentences of $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup\bigcup_{m<5n+4}K_m$ as described in~\ref{Sigmalattice}~-~\ref{SigmaHI}.
We have a natural order $\triangleleft$ on the set $K=\bigcup_m K_m$ defined by $$k_{n,m}\triangleleft k_{r,t}\leftrightarrow [(n<r)\vee ((n=r)\wedge (m<t))].$$
Let $\{p_l\}_{l<\omega}$ be an enumeration of $$\{p\in [\bigcup_{m\leq 5n} K_m]^2: p\setminus\bigcup_{m\leq 5(n-1)} K_m\not=\emptyset\}.$$
\begin{eqnarray*} \Sigma_{5n+1}^0 &=& \{\bigsqcap p_l=k_{5n+1,2l}:l<\omega\}\\ \Sigma_{5n+1}^1 &=& \{\bigsqcup p_l=k_{5n+1,2l+1}:l<\omega\}\\ \Sigma_{5n+1}^2 &=& \{a\sqcup a=a, a\sqcap a=a: a\in\bigcup_{m\leq
5n}K_m\}\\ \Sigma_{5n+1}^3 &=& \{a\sqcup(b\sqcup c)=(a\sqcup b)\sqcup c,
a\sqcap(b\sqcap c)=(a\sqcap b)\sqcap c: a,b,c\in\bigcup_{m\leq
5n}K_m\}\\ \Sigma_{5n+1}^4 &=& \{a\sqcup(b\sqcap c)=(a\sqcup b)\sqcap (a\sqcup c):
a,b,c\in\bigcup_{m\leq 5n}K_m\}\\ \Sigma_{5n+1}^5 &=& \{a\sqcup(a\sqcap b)=a, a\sqcap(a\sqcup b)=a:
a,b\in\bigcup_{m\leq 5n}K_m\}\\ \Sigma_{5n+1}^6 &=& \{[((a\sqcup b={\bf 1})\wedge (a\sqcap
b={\bf 0}))\rightarrow ((a={\bf 0})\vee (a={\bf 1}))]:
a,b\in\bigcup_{m\leq 5n}K_m\} \end{eqnarray*}
Define $\Sigma_{5n+1}$ by $$\Sigma_{5n+1}:=\bigcup_{i\leq 6}\Sigma_{5n+1}^i.$$ This set of sentences will make sure that any model of $\Sigma$ in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$ will be a distributive lattice and also a model of the sentence $\operatorname{conn}({\bf 1})$.
\begin{eqnarray*}
\Sigma_{5n+2} &=& \{[(\max_\triangleleft
p_l\sqcap\min_\triangleleft
p_l={\bf 0})\rightarrow
((\max_\triangleleft p_l\sqcap k_{5n+2,2l}={\bf 0})\wedge\\
& & \wedge (\min_\triangleleft p_l\sqcap k_{5n+2,2l+1}={\bf 0})\wedge
(k_{5n+2,2l}\sqcup
k_{5n+2,2l+1}={\bf 1}))]: l<\omega\} \end{eqnarray*}
This set of sentences will make sure that any (lattice) model of $\Sigma$ in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$ will be normal.
The following set of sentences makes sure that any model of $\Sigma$ in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$ which is also a lattice is a disjunctive lattice. \begin{eqnarray*}
\Sigma_{5n+3}^0 = \{[(\max_\triangleleft p_l\sqcap
\min_\triangleleft p_l\not=\max_\triangleleft p_l)
& \rightarrow & ((k_{5n+3,2l}\sqcap\max_\triangleleft
p_l=k_{5n+3,2l})\wedge\\
& & \wedge (k_{5n+3,2l}\sqcap\min_\triangleleft
p_l={\bf 0}))]:l<\omega\}\\
\Sigma_{5n+3}^1=\{[(\min_\triangleleft
p_l\sqcap\max_\triangleleft p_l\not=\min_\triangleleft p_l)
&\rightarrow & ((k_{5n+3,2l+1}\sqcap\min_\triangleleft
p_l=k_{5n+3,2l+1})\wedge\\
& & \wedge (k_{5n+3,2l+1}\sqcap\max_\triangleleft
p_l={\bf 0}))]:l<\omega\} \end{eqnarray*} And define $\Sigma_{5n+3}$ by $$\Sigma_{5n+3}=\Sigma_{5n+3}^0\cup\Sigma_{5n+3}^1.$$
Let $\zeta$ denote the following formula in $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}$ \begin{eqnarray*}
\zeta(a,b,c;x,y,z) = [(a\sqcap b\sqcap c={\bf 0}) &\rightarrow &
((a\sqcap x=a)\wedge (b\sqcap y=b)\wedge (c\sqcap z=c)\wedge\\
& & \wedge (x\sqcap y\sqcap z={\bf 0})\wedge (x\sqcup y\sqcup z={\bf 1}))] \end{eqnarray*} Let $\{q_l\}_{l<\omega}$ be an enumeration of the set
\begin{equation*}
\{q\in [\bigcup_{m\leq 5n}K_m]^3:q\setminus
\bigcup_{m\leq 5(n-1)}K_m\not=\emptyset\} \end{equation*}
For every $l<\omega$ write $q_l=\{q_l(0),q_l(1),q_l(2)\}$.
Now define $\Sigma_{5n+4}$ by \begin{equation*}
\Sigma_{5n+4}=\{\zeta(q_l(0),q_l(1),q_l(2);k_{5n+4,3l},
k_{5n+4,3l+1},k_{5n+4,3l+2}):l<\omega\}. \end{equation*} This will make sure that the Wallman space of any lattice model of $\Sigma$ will be at most one-dimensional.
For making sure that the Wallman space of any model of $\Sigma$ will be hereditarily indecomposable we introduce the following formulas in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}$: \begin{eqnarray}
\phi(a,b,c,d) & = & [(a\sqcap b={\bf 0})\wedge (a\sqcap d={\bf 0})\wedge
(b\sqcap c={\bf 0})]\nonumber\\
\psi(a,b,c,d;x,y,z) & = & [(x\sqcup y\sqcup z={\bf 1})\wedge
(x\sqcap z={\bf 0})\wedge\nonumber\\
& & \wedge (a\sqcap (y\sqcup z)={\bf 0})\wedge (b\sqcap(x\sqcup y)={\bf 0})
\wedge\nonumber\\
& & \wedge (x\sqcap y\sqcap d={\bf 0})\wedge (y\sqcap z\sqcap
c={\bf 0})]\nonumber\\
\theta(a,b,c,d;x,y,z) & = & \phi(a,b,c,d) \rightarrow
\psi(a,b,c,d;x,y,z) \label{theta} \end{eqnarray}
Let $\{r_l\}_{l<\omega}$ be an enumeration of the set $$\{r\in \powerinfront{4}{[\bigcup_{m\leq 5n}K_m]}: \operatorname{ran}(r)\setminus\bigcup_{m\leq 5(n-1)}K_m\not=\emptyset\}.$$
Let $\Sigma_{5(n+1)}$ be the set of sentences defined by: \begin{equation*}
\Sigma_{5(n+1)}=\{\theta(r_l(0),r_l(1),r_l(2),r_l(3);
k_{5(n+1),3l}, k_{5(n+1),3l+1},k_{5(n+1),3l+2}):l<\omega\} \end{equation*} Here the formula $\theta$ is as in equation~\ref{theta}.
\subsection{Consistency of $\Sigma$ in $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$}\label{CON(Sigma)}
In this section we show that $\Sigma$ is a consistent set of sentences.
We will find for $\Sigma'\in[\Sigma]^{<\omega}$ a metric space $X(\Sigma')$ and an interpretation function $\mathcal{I}:K\rightarrow 2^{X(\Sigma')}$ such that $(X(\Sigma'),\mathcal{I})\models \Sigma'\cup\triangle_\mathcal{B}$. The interpretations of $\sqcap$, $\sqcup$, ${\bf 0}$ and~${\bf 1}$ will always be $\cap$, $\cup$ (the normal set intersection and union), $\emptyset$ and~$X(\Sigma')$ respectively.
For $\Sigma'=\emptyset$ we let $X(\emptyset)=X$ and we interpret every constant from $K_{-1}$ as its corresponding base element in $\mathcal{B}$. Extend the interpretation function by assigning the empty set to all constants of $K\setminus K_{-1}$. It is obvious that $(2^{X(\emptyset)},\mathcal{I})\models\triangle_{\mathcal{B}}$.
\begin{remark} \label{wellorder} As the interpretation of $\sqcap$ and $\sqcup$ in the metric continuum $X(\Sigma')$ will always be the normal set intersection and set union, all the sentences in $\Sigma_{5n+1}^i$ for some $n<\omega$ and $i\in\{3,4,5,6\}$ are true in the model $(2^{X(\Sigma')},\mathcal{I})$. So we can ignore these sentences and for the remainder of this section concentrate on the remaining sentences of $\Sigma$. \end{remark}
We can define a well order $\sqsubset$ on the set $\Sigma\setminus\{\Sigma_{5n+1}^i:n<\omega\ \text{and}\ i\in\{3,4,5,6\}\}$ by stating that $\phi\sqsubset\psi$ if and only if there are $n<m<\omega$ such that $\phi\in\Sigma_n$ and $\psi\in\Sigma_m$ or there are $k<l<\omega$ and $n<\omega$ such that $\phi,\psi\in\Sigma_n$ and $\phi$ is a sentence that mentions $p_k$ ($q_k$ or $r_k$ respectively) and $\psi$ is a sentence that mentions $p_l$ ($q_l$ or $r_l$ respectively).
Suppose $\Sigma'$ is a finite subset of $\Sigma$ such that for all of its proper subsets $\Sigma''$ there exists a metric continuum $X(\Sigma'')$ and an interpretation function $\mathcal{I}:K\rightarrow 2^{X(\Sigma'')}$ such that $(X(\Sigma''),\mathcal{I})\models \Sigma''\cup\triangle_\mathcal{B}$.
Let $\theta$ be the $\sqsubset$-maximal sentence in $\Sigma'\setminus\{\Sigma_{5n+1}^i:n<\omega\ \text{and}\ i\in\{3,4,5,6\}\}$. We will show that there exists a metric space $X(\Sigma')$ and an interpretation function $\mathcal{I}:K\rightarrow 2^{X(\Sigma')}$ such that $(X(\Sigma'),\mathcal{I})\models\Sigma'\cup\triangle_\mathcal{B}$.
Let $\Sigma''=\Sigma'\setminus\{\theta\}$.
\subsubsection{$\theta\in\bigcup_{m<\omega}\{\Sigma_{5n+1}\cup
\Sigma_{5m+2}\cup\Sigma_{5m+3}\}$} \label{theta123} We can simply let $X(\Sigma')=X(\Sigma'')$ and either (re)interpret the new constant as the intersection or union of two closed sets in $X(\Sigma'')$ if $\theta$ is in some $\Sigma_{5n+1}$ or, if $\theta$ is an element of some $\Sigma_{5m+2}$ or $\Sigma_{5m+3}$, using the fact that the space $X(\Sigma'')$ is normal find (re)interpretations for the newly added constants, in an obvious way.
\subsubsection{$\theta\in\{\Sigma_{5m+4}:m<\omega\}$} \label{theta4} Suppose the preamble of $\theta$ is true in the model $(2^{X(\Sigma'')},\mathcal{I})$, where $\theta$ is the following sentence \begin{eqnarray*}
\theta &=& [(a\sqcap b\sqcap c={\bf 0})\rightarrow (a\sqcap x=
a)\wedge (b\sqcap y=b)\wedge \\
& &
\wedge (c\sqcap z=c)\wedge (x\sqcap y\sqcap z={\bf 0})\wedge (x\sqcup y\sqcup
z={\bf 1})]. \end{eqnarray*}
If $a$ has a zero interpretation then we can choose $x={\bf 0}$, $y={\bf 1}$ and $z={\bf 1}$, and this interpretation of $x$, $y$ and~$z$ makes sure that $\theta$ holds in the model $(2^{X(\Sigma'')},\mathcal{I})$. So we may assume that $a$, $b$ and~$c$ have non zero interpretations.
As the space $X(\Sigma'')$ is metric, we can assume that we have a metric $\rho$ on $X(\Sigma'')$. Moreover we can assume that $\rho$ is bounded by~$1$.
Consider the following function $f$ from $X(\Sigma'')$ to $\mathbb{R}^3$
$$f(x)=(\kappa_a(x),\kappa_b(x),\kappa_c(x)),$$
where $\kappa_a:X(\Sigma'')\rightarrow [0,1]$ is defined by
$$\kappa_a(x)=\frac{\rho(x,a)}{\rho(x,a)+\rho(x,b)+\rho(x,c)},$$
and $\kappa_b$ and $\kappa_c$ are like $\kappa_a$, but with $a$ interchanged with $b$ and $c$ respectively. Then $f[X(\Sigma'')]$ is a subset of the triangle $T=\{(t_1,t_2,t_3)\in\mathbb{R}^3:t_1+t_2+t_3=1\ \text{and}\ t_1,t_2,t_3\geq 0\}$.
The space $X(\Sigma'')$ is embedded in the space $X(\Sigma'')\times T$ by the graph of $f$ (in other words the embedding is defined by $x\mapsto (x,f(x))$). Let us denote this embedding by $g$.
Consider the space $\partial T\times [0,1]$, where $\partial T=T\setminus\operatorname{int}(T)$ in $\mathbb{R}^3$. Let $h$ be the map from $\partial T\times [0,1]$ onto $T$ defined by $$h((x,t))=x(1-t)+t(\frac{1}{3},\frac{1}{3},\frac{1}{3}).$$ The map $h$ restricted to $\partial T\times [0,1)$ is a homeomorphism between $\partial T\times [0,1)$ and $T\setminus\{(\frac{1}{3},\frac{1}{3},\frac{1}{3})\}$.
We define $X(\Sigma')$ as the space $$X(\Sigma')=(\operatorname{id}\times h)^{-1}[g[X(\Sigma'')]].$$
Let us (re)interpret the constants $k$ in $K$ in the following way: $$\mathcal{I}(k):=\mathcal{I}(k)\times(\partial T\times[0,1])\cap X(\Sigma')\ (=(\operatorname{id}\times h)^{-1}[g[\mathcal{I}(k)]]).$$
\begin{remark} \label{mono-cl} For future reference we note that, as the inverse images of points $(x,(t_1,t_2,t_3))$ under the map $\operatorname{id}\times h$ are points for $(x,(t_1,t_2,t_3))$ in $X(\Sigma'')\times T$ for which $(t_1,t_2,t_3)\not=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$ and equal to $\{x\}\times\partial T\times\{1\}$ for those $(x,(t_1,t_2,t_3))$ in $X(\Sigma'')\times T$ for which $(t_1,t_2,t_3)=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$, we have that the map $\operatorname{id}\times h:X(\Sigma'')\times(\partial T\times[0,1])\rightarrow X(\Sigma'')\times T$ is monotone. Furthermore it is also closed. \end{remark}
We did nothing to disturb the truth or falsity of the sentences $\Sigma''$ in the model $(2^{X(\Sigma'')},\mathcal{I})$ as $f^{-1}[A\cap B]=f^{-1}[A]\cap f^{-1}[B]$ and $f^{-1}[A\cup B]=f^{-1}[A]\cup f^{-1}[B]$ for any function $f$ and any sets $A$ and $B$.
So we have that $(2^{X(\Sigma')},\mathcal{I})$ is a model for $\Sigma''$.
Let $A$ be the line segments between $(0,1,0)$ and $(0,0,1)$, $B$ the line segment between $(1,0,0)$ and $(0,0,1)$ and $C$ the line segment between $(1,0,0)$ and $(0,1,0)$. Now we (re)interpret $x$, $y$ and~$z$ as follows
\begin{eqnarray*} \mathcal{I}(x) &:=& X(\Sigma'')\times(A\times [0,1])\cap X(\Sigma')\\ \mathcal{I}(y) &:=& X(\Sigma'')\times(B\times [0,1])\cap X(\Sigma')\\ \mathcal{I}(z) &:=& X(\Sigma'')\times(C\times [0,1])\cap X(\Sigma') \end{eqnarray*}
As is easily seen, this interpretation of the constants $x$, $y$ and~$z$ makes the sentence $\theta$ a true sentence in the model $(2^{X(\Sigma')},\mathcal{I})$. So $(2^{X(\Sigma')},\mathcal{I})\models\Sigma'$.
\subsubsection{$\theta\in\{\Sigma_{5(m+1)}:m<\omega\}$} \label{theta5} Suppose the preamble of $\theta$ is true in the model $(2^{X(\Sigma'')},\mathcal{I})$, where $\theta$ is the sentence $$\theta=\phi(a,b,c,d)\rightarrow\psi(a,b,c,d;x,y,z),$$ as in equation~\ref{theta}.
If the interpretation of $a$ is zero we can simply take $x=y={\bf 0}$ and $z={\bf 1}$ to make $(2^{X(\Sigma'')},\mathcal{I})$ a model of $\theta$. So again we may assume that the interpretations of $a$, $b$, $c$ and~$d$ are nonzero.
To show that $\Sigma'$ is a consistent set of sentences we are going to use an idea from~\cite{HvMP}.
With the aid of Urysohn's lemma we can find a continuous function $f:X(\Sigma'')\rightarrow [0,1]$ such that $f(\mathcal{I}(a))\subset\{0\}$, $f(\mathcal{I}(b))\subset\{1\}$, $f(\mathcal{I}(c))\subset [0,\frac{1}{2}]$ and $f(\mathcal{I}(d))\subset[\frac{1}{2},1]$.
Let $P$ denote the (closed and connected) subset of $[0,1]\times[0,1]$ given by $$P=\{\frac{1}{4}\}\times[0,\frac{2}{3}]\cup [\frac{1}{4},\frac{1}{2}]\times\{\frac{2}{3}\}\cup \{\frac{1}{2}\}\times[\frac{1}{3},\frac{2}{3}]\cup [\frac{1}{2},\frac{3}{4}]\times\{\frac{1}{3}\}\cup \{\frac{3}{4}\}\times[\frac{1}{3},1].$$
Let $X^+\subset [0,1]\times X(\Sigma'')$ denote the pre-image of the set $P$ under the function $\operatorname{id}\times f$:
$$X^+=\{(t,x)\in [0,1]\times X(\Sigma''):(t,f(x))\in P\}.$$
As $P$ is closed and $\operatorname{id}\times f$ is continuous we have that $X^+$ is a compact metric space. Define the (continuous) map $\pi:X^+\rightarrow X(\Sigma'')$ by $\pi((t,x))=x$ for every $(t,x)\in X^+$.
\begin{lemma}
There exists a unique component $C$ of $X^+$ such that $\pi[C]=X(\Sigma'')$.
\label{cpnt_in_X+} \end{lemma} \begin{proof}
Suppose we have closed sets $F$ and $G$ such that $X^+=F+G$.
Define subsets $A_i,B_i$ of $X$, where $i\in\{0,1,2\}$ , by
\begin{eqnarray*}
A_0 & = & \{x\in X(\Sigma''):(\frac{1}{4},x)\in F\},\
B_0 = \{x\in X(\Sigma''):(\frac{1}{4},x)\in G\}\\
A_1 & = & \{x\in X(\Sigma''):(\frac{1}{2},x)\in F\},\
B_1 = \{x\in X(\Sigma''):(\frac{1}{2},x)\in G\}\\
A_2 & = & \{x\in X(\Sigma''):(\frac{3}{4},x)\in F\},\
B_2 = \{x\in X(\Sigma''):(\frac{3}{4},x)\in G\}
\end{eqnarray*}
It is clear that $A_i\cap B_i=\emptyset$ for every
$i\in\{0,1,2\}$.
\begin{claim}
The following holds
\begin{enumerate}
\item
For every $x\in(A_0\cap B_1)\cup (B_0\cap A_1)$ we have
$f(x)<\frac{2}{3}$.
\item
For every $x\in(A_1\cap B_2)\cup (B_1\cap A_2)$ we have
$f(x)>\frac{1}{3}$.
\end{enumerate}
\end{claim}
\begin{proof}
As the proofs of the statements are very similar we will
only prove the first statement.
If $x\in A_0\cap B_1$ or $x\in B_0\cap A_1$ then
$f(x)\leq\frac{2}{3}$. As $f(x)=\frac{2}{3}$ is impossible,
we are done.
\end{proof}
Let us define the following closed sets $A^*$ and~$B^*$ of
$X(\Sigma'')$ by
\begin{eqnarray*}
A^* = &\bigcup&\{f^{-1}[0,\frac{1}{3}]\cap
A_0,f^{-1}[\frac{2}{3},1]\cap A_2,A_0\cap A_1\cap A_2,\\
& &A_0\cap B_1\cap B_2,B_0\cap B_1\cap A_2\}\\
B^*= &\bigcup&\{f^{-1}[0,\frac{1}{3}]\cap
B_0,f^{-1}[\frac{2}{3},1]\cap B_2,B_0\cap B_1\cap B_2,\\
& &B_0\cap A_1\cap A_2,A_0\cap A_1\cap B_2\}
\end{eqnarray*}
The sets $A^*$ and $B^*$ are disjoint closed subsets of $X(\Sigma'')$
and their union is the whole of $X(\Sigma'')$. As $X(\Sigma'')$ is connected
one of these sets must be empty. So without loss of generality
we can assume that $B^*=\emptyset$.
We see now that $\pi[F]=X(\Sigma'')$ and that
$\pi[G]\subset f^{-1}[\frac{1}{3},\frac{2}{3}]$. It follows
that whenever $C$ is a clopen subset of $X^+$ then
either $\pi[C]=X(\Sigma'')$ and $\pi[X^+\setminus
C]\subset f^{-1}[\frac{1}{3},\frac{2}{3}]$ or it is the other way
around. This shows that $\mathcal{F}=\{C:C\ \text{is clopen
and}\ \pi[C]=X(\Sigma'')\}$ is an ultrafilter in the family of
clopen subsets of $X^+$; its intersection
$\bigcap\mathcal{F}$ is the unique component of $X^+$
that is mapped onto $X(\Sigma'')$. We let $C$ be this
component. This ends the proof of lemma~\ref{cpnt_in_X+}. \end{proof}
Let $X(\Sigma')$ be the unique component $C$ of $X^+$ that is mapped onto $X(\Sigma'')$ by the map $\pi$ with the subspace topology.
In $2^{X(\Sigma')}$, the constants $x$, $y$ and~$z$ that will make the sentence $\theta$ true will have the following (re)interpretations:
\begin{eqnarray*}
\mathcal{I}(x) &=& \{(t,x)\in X(\Sigma'):t\in[0,\frac{3}{8}]\},\\
\mathcal{I}(y) &=& \{(t,x)\in X(\Sigma')
:t\in[\frac{3}{8},\frac{5}{8}]\}\ \text{and}\\
\mathcal{I}(z) &=& \{(t,x)\in X(\Sigma'):t\in[\frac{5}{8},1]\}. \end{eqnarray*}
The (re)interpretation of the constants in $K$ will be as follows. $$\mathcal{I}(k):=[0,1]\times\mathcal{I}(k)\cap X(\Sigma')\ (=\pi^{-1}[\mathcal{I}(k)]\cap X(\Sigma')).$$
As $\pi$ maps $C$ onto $X(\Sigma'')$ we have that $(2^{X(\Sigma')},\mathcal{I})$ is a model of $\Sigma'$, as the truth or falsity of sentences in $\Sigma''$ are not affected by the new interpretation of the constants.
\section{The Ma\'{c}kowiak-Tymchatyn theorem} \label{modproof} Apart from the weakly confluent property of the continuous map we have proven the Ma\'{c}kowiak-Tymchatyn theorem, theorem~\ref{MaTym}. To make sure that the continuous map following from the previous section is weakly confluent, we must consider all the subcontinua of the space $X$.
We let $\hat{K}$ be the following set
\begin{eqnarray*} \hat{K}=\bigcup_{-2\leq n<\omega}\hat{K}_n= \bigcup_{-2\leq n<\omega}\{k_{n,\alpha}:\alpha<\vert 2^X\vert\}. \end{eqnarray*}
We will construct a set
$$\hat{\Sigma}=\bigcup_{-1\leq n<\omega}\hat{\Sigma}_n$$
of sentences in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup\hat{K}$ similar as in the previous section such that given any model $\mathfrak{A}=(A,\mathcal{I})$ of $\hat{\Sigma}$, the set $L(\mathfrak{A})=\mathcal{I}\beperk\hat{K}$ will be the universe of some normal distributive and disjunctive lattice such that \begin{enumerate} \item
$L(\mathfrak{A})$ is a model of the sentences~\ref{HI},
\ref{dim=1} and~$\operatorname{conn}({\bf 1})$, \item
the lattice $2^X$ is embedded into $L(\mathfrak{A})$ so there
exists a continuous map $f$ from $wL(\mathfrak{A})$
onto $X$, \item
for every subcontinuum of $X$ there exists a subcontinuum of
$wL(\mathfrak{A})$ that is mapped onto it by $f$. \end{enumerate}
\subsection{A weakly confluent map}
We let $\hat{K}_{-1}=\{k_{-1,\alpha}<\vert 2^X\vert\}$ correspond to the set $2^X=\{x_\alpha:\alpha<\vert 2^X\vert\}$ in such a way that the set $\mathcal{C}(X)$ of all the subcontinua of $X$ corresponds to the set $\{x_\alpha:\alpha<\beta\}$ for some ordinal number $\beta<\vert 2^X\vert$. Let the set of sentences $\hat{\Sigma}_0$ in $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup\hat{K}_{-1}$ correspond to $\triangle_{2^X}$, the diagram of the lattice $2^X$.
We want to define a set of sentences $\Sigma_{-1}$ in $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K_{-2}\cup K_{-1}$ that will make sure that if $\mathfrak{A}$ is a model of $\hat{\Sigma}$ in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup\hat{K}$ then we have for every subcontinuum in $X$ a subcontinuum of $wL(\mathfrak{A})$ that will be mapped onto it by the continuous onto map we get by the fact that $2^X$ is embedded in the lattice $L(\mathfrak{A})$. \begin{eqnarray*}
\hat{\Sigma}^0_{-1}&=&
\{\operatorname{conn}(k_{-2,\alpha})\wedge (k_{-2,\alpha}\sqcap
k_{-1,\alpha}=k_{-2,\alpha})):\alpha<\beta\}\\
\hat{\Sigma}^1_{-1}&=&\{(\operatorname{conn}(k_{-2,\alpha})\wedge
(k_{-2,\alpha}\sqcap k_{-1,\gamma}=k_{-2,\alpha}))\rightarrow\\
& &\rightarrow (k_{-1,\alpha}\sqcap k_{-1,\gamma}=k_{-1,\alpha}):
\alpha<\beta,\ \gamma<\vert 2^X\vert\}\\
\hat{\Sigma}^2_{-1}&=&\{k_{-2,\gamma}={\bf 0}:\beta\leq\gamma<\vert
2^X\vert\}. \end{eqnarray*} And define the set of sentences $\Sigma_{-1}$ as \begin{equation}
\label{weakly-confluent}
\hat{\Sigma}_{-1}=\hat{\Sigma}^0_{-1}\cup\hat{\Sigma}^1_{-1}
\cup\hat{\Sigma}^2_{-1}. \end{equation}
Suppose $\mathfrak{A}$ is a model of $\hat{\Sigma}$. The set $\hat{\Sigma}^0_{-1}$ will make sure that for every subcontinuum $C$ of $X$ there is some subcontinuum $C'$ of $wL(\mathfrak{A})$ that is mapped into $C$ by the continuous onto map $f$ we get from theorem~\ref{contimage} and the fact that $2^X$ is embedded into $wL(\mathfrak{A})$. The set $\hat{\Sigma}^1_{-1}$ will then make sure that $C'$ is in fact mapped onto $C$ by the map $f$.
Let us further construct the sets $\hat{\Sigma_n}$ for $0<n<\omega$ in the same manner as we have constructed the set $\Sigma_n$ in the previous section. So that if we have a model $\mathfrak{A}$ of $\hat{\Sigma}$, the lattice $L(\mathfrak{A})$ will be a normal distributive and disjunctive lattice that models the sentences~\ref{HI}, \ref{dim=1} and~$\operatorname{conn}({\bf 1})$.
To prove the consistency of $\hat{\Sigma}$ it is enough to prove the following lemma.
\begin{lemma} For every finite $\Sigma'\in[\hat{\Sigma}]^{<\omega}$ there is a metric continuum $X(\Sigma')$, and an interpretation function $\mathcal{I}:\hat{K}\rightarrow 2^{X(\Sigma')}$ such that $(2^{X(\Sigma')},\mathcal{I})$ is a model for $\Sigma'$. \end{lemma}
\begin{proof} Suppose we have a metric continuum $X(\Sigma'')$ for every subset $\Sigma''$ of a given $\Sigma'\in[\Sigma]^{<\omega}$ such that there is an interpretation function $\mathcal{I}_{\Sigma''}:\hat{K}\rightarrow 2^{X(\Sigma'')}$ such that $(2^{X(\Sigma'')},\mathcal{I}_{\Sigma''})\models\Sigma''$. We want to show that there exists a metric continuum $X(\Sigma')$ and an interpretation function $\mathcal{I}_{\Sigma'}:\hat{K}\rightarrow 2^{X(\Sigma')}$ such that $(2^{X(\Sigma')},\mathcal{I}_{\Sigma'})\models \Sigma'$.
Let $\theta$ be an $\sqsubset$-maximal sentence in $\Sigma'$ that is of interest (see remark~\ref{wellorder}). If $\theta$ is an element of $\hat{\Sigma}_0$, $\hat{\Sigma}_{5n+1}$, $\hat{\Sigma}_{5n+2}$ or $\hat{\Sigma}_{5n+3}$ then we can choose $X(\Sigma')=X(\Sigma'')$ and redefine the interpretation function $\mathcal{I}$ in a natural way to obtain the wanted result. So let us suppose that $\theta$ is an element of $\hat{\Sigma}_{-1}$, $\hat{\Sigma}_{5n+4}$ or $\hat{\Sigma}_{5(n+1)}$ for some $n<\omega$.
If $\theta$ is an element of $\hat{\Sigma}_{-1}$. Then, as $\theta$ is the $\sqsubset$-maximal sentence in $\Sigma'$ of interest, no $\phi\in\Sigma'$ is an element of $\hat{\Sigma}_{5n+4}$ or $\hat{\Sigma}_{5(n+1)}$ for any $n<\omega$. As $2^X$ is a normal, distributive and disjunctive lattice and as $X$ is a continuum, we have that the lattice $2^X$ with an obvious interpretation function $\mathcal{I}$ is even a model for $\triangle_{2^X}\cup\hat{\Sigma}_{-1}\cup\Sigma'$.
Suppose now that $\theta$ is an element of $\hat{\Sigma}_{5n+4}$ for some $n<\omega$. If we look at the construction in subsection~\ref{theta4} we know that the function $(id\times h)$ is a closed monotone map from $X(\Sigma'')$ onto $X(\Sigma')$. So inverse images of connected sets are connected and all the sentences of $\hat{\Sigma}_{-1}$ in $\Sigma''$ that were true (false) in the model $(2^{X(\Sigma'')},\mathcal{I}_{\Sigma''})$, stay true (resp. false) in the model $(2^{X(\Sigma')},\mathcal{I})$ as we get from subsection~\ref{theta4}.
Finally, suppose that $\theta$ is an element of some $\hat{\Sigma}_{5(n+1)}$. Lets take a look at the construction of $X(\Sigma')$ in subsection~\ref{theta5}. Let $\pi$ be the map of $X(\Sigma')$ onto $X(\Sigma'')$ as given in subsection~\ref{theta5}. Consider the following lemma.
\begin{claim} \label{preimconn}
For every connected subset $A$ of $X(\Sigma'')$ there exists a
connected set $C(A)\subset C=X(\Sigma')$ such that $\pi[C(A)]=A$. \end{claim} \begin{proof}
Suppose we have $A\subset X(\Sigma'')$ connected. If we look at the
image of $A$ under the function $f$ there are a number of
possibilities:
\begin{enumerate}
\item
\label{case1}
$f[A]\subset [0,\frac{2}{3}]$ and
$f[A]\cap[0,\frac{1}{3})\not=\emptyset$ ($f[A]\subset
[\frac{1}{3},1]$ and
$f[A]\cap(\frac{2}{3},1]\not=\emptyset$),
\item
\label{case2}
$f[A]\subset[\frac{1}{3},\frac{2}{3}]$,
\item
\label{case3}
$f[A]\setminus[0,\frac{2}{3}] \not=\emptyset \not=
f[A]\setminus[\frac{1}{3},1]$
\end{enumerate}
In case~\ref{case1} we have $\{\frac{1}{4}\}\times A$
($\{\frac{3}{4}\}\times A$) is a connected subset of $X^+$
which must intersect the component $C$, as every other
component is mapped onto some subset of $X(\Sigma'')$, which is
mapped into $[\frac{1}{3},\frac{2}{3}]$ by the function $f$.
In case~\ref{case2} we have that the component $C$ must
intersect at least one of the connected subsets
$\{\frac{1}{4}\}\times A$, $\{\frac{1}{2}\}\times A$ or
$\{\frac{3}{4}\}\times A$, as $C$ is mapped onto $X(\Sigma'')$,
$X(\Sigma'')$ is connected and $f$ is continuous.
In case~\ref{case3} we can, as above, assuming
$A^+(=\pi^{-1}[A])=F+G$, construct closed and disjoint subsets
$A^*$ and $B^*$ of $A$ which cover it. Again the image under
$\pi$ is either all of $A$ or a proper subset of $A$. The
(unique) piece
that maps onto the whole of $A$ must intersect the set $C$,
and so is contained in it.
This ends the proof of the claim. \end{proof}
We have that $(2^{X(\Sigma')},\mathcal{I}_{\Sigma'})\models\Sigma'\setminus\hat{\Sigma}_{-1}$, we now define a new interpretation function $\mathcal{I}$ on $\hat{K}$ to $2^{X(\Sigma')}$ such that $(2^{X(\Sigma')},\mathcal{I})$ will be a model for $\Sigma'$. Note that the set of constants that are mentioned in the set $\Sigma'$ is a finite subset of $\hat{K}$, and let $\hat{K}(\Sigma')$ denote this finite subset. We will define the interpretation under $\mathcal{I}$ of the constants in $\hat{K}(\Sigma')$ 'from the bottom up'.
By claim~\ref{preimconn} for every $k_{-2,\alpha}\in\hat{K}(\Sigma')$ such that $C=\mathcal{I}_{\Sigma''}(k_{-2,\alpha})$ is a connected subset of $X(\Sigma'')$ we can find a connected subset $C'$ of $X(\Sigma')$ that maps onto $C$ by the map $\pi$. Let the $\mathcal{I}$ interpretation of the constant $k_{-2,\alpha}$ be this connected set $C'$ in $X(\Sigma')$.
For all those $k$ in $\hat{K}(\Sigma')\cap\hat{K}_{-2}$ that have no connected interpretation in $X(\Sigma')$ and for all the constants $k$ in $\hat{K}(\Sigma')\cap\hat{K}_{-1}$ the interpretation under $\mathcal{I}$ will be the same as the interpretation under $\mathcal{I}_{\Sigma'}$. So for those $k\in\hat{K}(\Sigma')$ we have
$$\mathcal{I}(k_{-1,\alpha})=\mathcal{I}_{\Sigma'}(k_{-1,\alpha})= \pi^{-1}[\mathcal{I}_{\Sigma''}(k_{-1,\alpha})]\cap X(\Sigma').$$
The interpretations of the rest of the constants in $\hat{K}(\Sigma')$ will follow from the interpretations of the constants we have just defined, because the interpretation of every constant depends on just a finite set of other constants and we just have to make sure that we define their interpretation in the right order.
As $\mathcal{I}(k)\subset\mathcal{I}_{\Sigma'}(k)$ for all $k\in\hat{K}(\Sigma')$ and for all $k\in\hat{K}(\Sigma')\cap\hat{K}_{-2}$ such that $\mathcal{I}_{\Sigma'}(k)$ is connected $\mathcal{I}(k)$ is also connected we have that all the sentences of $\hat{\Sigma}_{-1}$ true (false) in $(2^{X(\Sigma'')},\mathcal{I}_{\Sigma''})$ are true (false) in $(2^{X(\Sigma'')},\mathcal{I})$. The true or falseness of the other sentences in $\Sigma'$ have not been affected by the new interpretation function $\mathcal{I}$, and we have completed the proof. \end{proof}
\subsection{The Ma\'{c}kowiak-Tymchatyn theorem} As we have seen in the previous section the set of sentences $\hat{\Sigma}$ in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup\hat{K}$ is consistent. Let $\mathfrak{A}$ be a model for $\hat{\Sigma}$. This model gives us a normal distributive and disjunctive lattice $L(\mathfrak{A})$ which models the sentences~\ref{HI}, \ref{dim=1} and~$\operatorname{conn}({\bf 1})$. There also exists an embedding of $2^X$ into this lattice $L(\mathfrak{A})$ (remember that we showed that, with $K_{-2}$ an enumeration of the lattice $2^X$ the set $\Sigma$ is a consistent set of sentences in the language $\{\sqcap,\sqcup,{\bf 0},{\bf 1}\}\cup K$). All this implies that the Wallman space $wL(\mathfrak{A})$, is a one-dimensional hereditarily indecomposable continuum which admits a weakly confluent surjection onto the metric continuum $X$.
Now we only have to make sure that there exists such a space that is of countable weight to complete the proof of the Ma\'{c}kowiak Tymchatyn theorem. \begin{theorem}\cite{HvMP}
Let $f:Y\rightarrow X$ be a continuous surjection between
compact Hausdorff spaces. Then $f$ can be factored as $h\circ
g$, where $Y\stackrel{g}{\rightarrow}
Z\stackrel{h}{\rightarrow}X$ and~$Z$ has the same weight as
$X$ and shares many properties with $Y$ (for instance, if $Y$
is one-dimensional so is $X$ or if $Y$ is hereditarily
indecomposable, so is $X$). \end{theorem} \begin{proof}
Let $\mathcal{B}$ a minimal sized lattice-base for the closed
sets of $X$, and identify it with its copy
$\{f^{-1}[B]:B\in\mathcal{B}\}$ in $2^Y$. By the
L\"{o}wenheim-Skolem theorem there is an elementary sublattice
of $2^Y$, of the same cardinality as $\mathcal{B}$
such that $\mathcal{B}\subset D\prec 2^Y$. The space
$wD$ is as required. \end{proof} Applying this theorem to the space $wL(\mathfrak{A})$ and the weakly confluent map $f:wL(\mathfrak{A})\rightarrow X$ we get a one-dimensional hereditarily indecomposable continuum $wD$ which admits a weakly confluent map onto the space $X$ and moreover the weight of the space $wD$ equals the weight of the space $X$. This is exactly what we were looking for.
\section{A topological proof of the Ma\'{c}kowiak-Tymchatyn theorem}\label{topproof}
\subsection{the Ma\'{c}kowiak-Tymchatyn theorem} After the above proof was found we realized that it could be transformed into a purely topological proof, which we shall now describe.
Let $X$ be a metric continuum. We are going to define a inverse sequence of metric continua with onto bonding maps $\{\langle X_n,f_n\rangle:n<\omega\}$,
\begin{equation*} X=X_0\stackrel{f_1}{\longleftarrow} X_1\stackrel{f_2}{\longleftarrow} \cdots\stackrel{f_n}{\longleftarrow} X_n\stackrel{f_{n+1}}{\longleftarrow}\cdots, \end{equation*}
in such a way that the inverse limit space $X_\omega$
$$X_\omega=\lim_\leftarrow\{\langle X_n,f_n\rangle:n<\omega\}$$
is a hereditarily indecomposable one-dimensional continuum of weight $w(X)$ such that $\pi_0:X_\omega\rightarrow X$ is a weakly confluent and onto. Here, for every $n<\omega$ the continuous function $\pi_n$ is defined by $\pi_n=\operatorname{proj}_n\beperk X_\omega:X_\omega\rightarrow X_n$, where $\operatorname{proj}_n:\Pi_{n<\omega}X_n\rightarrow X_n$ is the projection.
Let us furthermore define maps $f^n_m:X_n\rightarrow X_m$ for $m<n$ as
\begin{equation*} f^n_m=\left\{
\begin{array}{l}
f_{m+1}\circ f_{m+2}\circ\cdots\circ f_n,\ \text{if}\ m+1<n\\
f_{m+1}\ \text{if}\ m+1=n.
\end{array} \right. \end{equation*}
The following lemma is well known. \begin{lemma} The family of all sets of the form $\pi_n^{-1}(F)$, where $F$ is an closed subset of the space $X_n$ and $n$ runs over a subset $N$ cofinal in $\omega$, is a base for the closed sets of the limit of the inverse sequence $\{\langle X_n,f_n\rangle:n<\omega\}$. Moreover, if for every $n<\omega$ a base $\mathcal{B}_n$ for the closed sets of space $X_n$ is fixed, then the subfamily of those $\pi_n^{-1}(F)$ for which $F\in\mathcal{B}_n$, also is a base for the closed sets of $X_\omega$. \end{lemma}
To make sure that the space $X_\omega$ is one-dimensional, it is sufficient to show that $\{\pi_k^{-1}(F):F\in\mathcal{B}_k\ \text{and}\ k<\omega\}$ is a model of sentence~\ref{dim=1}.
Let $s:\omega\rightarrow\omega\times\omega$ be an onto map in such a way that for every $n,m<\omega$ we have $s^{-1}(\langle n,m\rangle)\geq\max\{n,m\}$. For instance, we take an onto map $g:\omega\rightarrow\omega\times\omega\times\omega$ and given $g(n)=\langle p,q,r\rangle$ we define $s(n)$ by \begin{equation*} s(n)=\left\{
\begin{array}{l}
\langle p,q\rangle\ \text{if}\ n\geq\max\{p,q\},\\
\langle 0,0\rangle\ \text{otherwise}.
\end{array} \right. \end{equation*}
Let $X_0=X$ and suppose we have defined the pairs $\langle X_m,f_m\rangle$ and the bases $\mathcal{B}_m$ for every $m<n$. And suppose that we have also defined an enumeration of all the triples of $\mathcal{B}_m$ that have empty intersection for every $m<n$. Let $\{G_k^m:k<\omega\}$ be an enumeration of the set $\{G\in[\mathcal{B}_m]^3:\bigcap G=\emptyset\}$ for $m<n$, write $G_k^m=\{a_k^m,b_k^m,c_k^m\}$.
The way we now define the space $X_n$ and the onto map $f_n:X_n\rightarrow X_{n-1}$ will be as follows.
Suppose $s(n)=\langle k,m\rangle$, we consider the closed sets $(f^{n-1}_k)^{-1}(a_m^k)$, $(f^{n-1}_k)^{-1}(b_m^k)$ and $(f^{n-1}_k)^{-1}(c_m^k)$ of $X_{n-1}$. They have empty intersection.
If there exist sets $x$, $y$ and~$z$ in $2^{X_{n-1}}$ such that
\begin{eqnarray*} (f^{n-1}_k)^{-1}(a_m^k)\subset x, (f^{n-1}_k)^{-1}(b_m^k)\subset y\ \text{and}\ (f^{n-1}_k)^{-1}(c_m^k)\subset z,\\ x\cap y\cap z=\emptyset\ \text{and}\ x\cup y\cup z=X_{n-1}, \end{eqnarray*}
then we let $X_n=X_{n-1}$, $f_n=\operatorname{id}_{X_n}$ and we choose a countable base $\mathcal{B}_n$ for the closed sets of $X_n$ such that $\mathcal{B}_{n-1}\cup\{x,y,z\}\subset\mathcal{B}_n$.
If there do not exist such sets $x$, $y$ and~$z$ in $2^{X_n}$ then we use the construction in subsection~\ref{theta4} to find a (metric) continuum $X_n$ and a continuous onto map $f_n:X_n\rightarrow X_{n-1}$, such that in $X_n$ there are closed sets $x$, $y$ and~$z$ in $X_n$ such that
\begin{equation*}
\begin{array}{l}
(f^n_k)^{-1}(a^k_m)\subset x,\
(f^n_k)^{-1}(b^k_m)\subset y,\
(f^n_k)^{-1}(c^k_m)\subset z,\\
x\cap y\cap z=\emptyset\ \text{and}\ x\cup y\cup z=X_n
\end{array} \end{equation*}
Let $\mathcal{B}_n$ be some countable base for the closed sets of $X_n$ such that $\{(f_n)^{-1}(F):F\in\mathcal{B}_{n-1}\}\subset\mathcal{B}_n$ and $x,y,z\in\mathcal{B}_n$.
After we have chosen the base $\mathcal{B}_n$ we can choose some enumeration of all the triples of $\mathcal{B}_n$ that have empty intersection.
We do not get into trouble by considering base elements of some base for the closed sets of $X_\omega$ which have not yet been defined, because this will not happen by the way the function $s$ is defined and the bases $\mathcal{B}_n$ are chosen.
The limit $X_\omega(s)$ of the inverse sequence $\{\langle X_n,f_n\rangle:n<\omega\}$ is a continuum, as all the spaces $X_n$ are continua, moreover, as the base $\{\pi_n^{-1}(F^n_k):k,n<\omega\}$ of the space $X_\omega(s)$ models the sentence~\ref{dim=1} we have that $X_\omega(s)$ is one-dimensional. As all the spaces $X_n$ are compact and all the bonding maps $f_n$ are onto, we have that $\pi_0:X_\omega(s)\rightarrow X$ is a continuous onto map.
In a similar way we can construct a function $t:\omega\rightarrow\omega\times\omega\times\omega$, an onto map in such a way that for all $k,l,m<\omega$ we have $t^{-1}(\langle k,l,m\rangle)\geq\max\{k,l,m\}$, and use it together with the construction in subsection~\ref{theta5} to define, given $X_0=X$, $X_n$ and $f_n$ so that $X_\omega(t)$, the inverse limit of the sequence $\{\langle X_n,f_n\rangle:n<\omega\}$ is a hereditarily indecomposable continuum which admits a continuous onto map, $\pi_0$ onto the space $X$.
We can combine these two constructions by defining the function $r$ by letting $r(2n)$ equal $s(n)$ and $r(2n+1)$ equal $t(n)$ for every $n<\omega$. Define $X_0=X$ and use the construction in subsection~\ref{theta4} if $n$ is even and the construction in subsection~\ref{theta5} if $n$ is odd to construct $X_n$ and $f_n$.
Let $X_\omega(r)$ be the inverse limit of the inverse sequence $\{\langle X_n,f_n\rangle\}$ we have constructed with the aid of the function $r$ as described in the previous paragraph.
As $\mathcal{B}=\{\pi_n^{-1}(F^n_k):n,k<\omega\}$ is a base for the closed sets of $X_\omega(r)$ we see that $w(X_\omega(r))=w(X)=\aleph_0$. The space $X_\omega(r)$ is a one-dimensional hereditarily indecomposable continuum as, by construction $\mathcal{B}$ is a model of the sentences~\ref{dim=1} and~\ref{HI}. So by the following claim we have proven the Ma\'{c}kowiak-Tymchatyn theorem.
\begin{claim} The map $\pi_0$ is a weakly confluent map from $X_\omega(r)$ onto $X$. \end{claim}
\begin{proof} Suppose we have a subcontinuum $C$ of the space $X$, we want to find a subcontinuum $C'$ of $X_\omega(r)$ such that $\pi_0[C']=C$. As, by construction, the $f_{2n}$'s are monotone closed maps from $X_{2n}$ onto $X_{2n-1}$ and the $f_{2n+1}$'s are weakly confluent, we can define an inverse sequence $\{\langle Y_n, g_n\rangle:n<\omega\}$ such that $Y_0=C$ and $Y_n$ is, for every $n$, some subcontinuum of $f_n^{-1}(Y_{n-1})$ that is mapped onto $Y_{n-1}$ by the map $f_n$ and the map $g_n$ is the restriction of the map $f_n$ to the subspace $Y_n$ of $X_n$. Let $C'$ be the inverse limit of the inverse sequence $\{\langle Y_n,g_n\rangle:n<\omega\}$, so
$$C'=\lim_\leftarrow\{\langle Y_n,g_n\rangle:n<\omega\}.$$
We have that $C'$ is a closed subspace of the space $X_\omega(r)$. Furthermore it is a continuum as it is an inverse limit of continua, so it is a subcontinuum of $X_\omega(r)$. As $\pi_n$ maps $C'$ onto the $Y_n$'s we have proven the claim. \end{proof}
\subsection{The extended Ma\'{c}kowiak-Tymchatyn theorem} Given a continuum $X$ we will construct an inverse sequence $\{\langle X_\alpha,f_\alpha\rangle:\alpha<w(X)\}$ such that the inverse limit space $Y$ is a hereditarily indecomposable continuum of weight $w(X)$ and $\operatorname{dim}(Y)=1$ and there exists a weakly confluent map of the space $Y$ onto~$X$. This is a somewhat different proof than is given in the paper of Hart, Van Mill and Pol (see~\cite{HvMP}).
\begin{equation*} X=X_0\stackrel{f_1}{\longleftarrow} X_1\stackrel{f_2}{\longleftarrow} \cdots\stackrel{f_\alpha}{\longleftarrow} X_{\alpha}\stackrel{f_{\alpha+1}}{\longleftarrow}\cdots\quad(\alpha<w(X)). \end{equation*}
We are going to make sure that every $X_\alpha$ is a continuum of weight $w(X)$ and that there exist some base $\mathcal{B}_\alpha$ for the closed sets of $X_\alpha$ of cardinality $w(X)$ such that the base $\{\pi^{-1}(B):B\in\mathcal{B}_\alpha,\alpha<w(X)\}$ for the closed sets of the space $Y$ will show that $Y$ is the desired continuum.
For $\beta<w(X)$ a limit ordinal we let $X_\beta$ be the inverse limit of the sequence $\{\langle X_\gamma,f_\gamma\rangle:\gamma<\beta\}$, and we let $\mathcal{B}_\beta$ be the set $\{(\pi_\alpha^\beta)^{-1}(B):B\in\mathcal{B}_\alpha,\ \alpha<\beta\}$. This is a base for the closed sets of $X_\beta$ and $\vert\mathcal{B}_\beta\vert\leq w(X)$. Furthermore $X_\beta$ is a continuum as it is an inverse limit of continua.
Suppose we have defined the continua $X_\beta$ for $\beta\leq\alpha$ for some $\alpha<w(X)$, as well as the bases $\mathcal{B}_\beta$ for the closed sets of these spaces and for every $\beta\leq\alpha$ we also have defined an enumeration $\{G_\tau^\beta:\tau<\Gamma_\beta\}$ of the triples of elements of $\mathcal{B}_\beta\setminus\{\emptyset\}$ that have empty intersection, we write $G_\tau^\beta=\{a_\tau^\beta,b_\tau^\beta,c_\tau^\beta\}$. Here $\Gamma_\beta$ is some ordinal number less than or equal to $w(X)$.
As in the previous section we can find a function $s:w(X)\rightarrow w(X)\times w(X)$ such that for every $\alpha,\beta< w(X)$ we have $s^{-1}(\langle \alpha,\beta\rangle)\geq\max\{\alpha,\beta\}$. To find $X_{\alpha+1}$ and $f_\alpha$ we do almost the same thing as we have done in the previous section. If $s(\alpha)=\langle \beta,\gamma\rangle$ we consider the closed sets $a=f_\beta^\alpha(a_\gamma^\beta)$, $b=f_\beta^\alpha(b_\gamma^\beta)$ and $c=f_\beta^\alpha(c_\gamma^\beta)$ of the space $X_\alpha$.
If there exist $x$, $y$ and~$z$ in $2^{X_\alpha}$ such that $a\subset x$, $b\subset y$, $c\subset z$, $x\cap y\cap z=\emptyset$ and $x\cup y\cup z=X_\alpha$ then we let $X_{\alpha+1}=X_\alpha$ and $f_{\alpha+1}=\operatorname{id}_{X_{\alpha+1}}$.
If there are no such $x$,$y$ and~$z$ in $2^{X_\alpha}$ then we will do as in subsection~\ref{theta4}, but as in that section we used a metric for $X$ we have to slightly alter the proof there. As $X_\alpha$ is normal and $a\cap b\cap c=\emptyset$ we can find a continuous function $f_a:X_\alpha\rightarrow [0,1]$ such that $f_a(a)\subset\{0\}$ and $f_a(b\cap c)\subset\{1\}$. Now, as $f_a^{-1}(\{0\})\cap b\cap c=\emptyset$ we can find a continuous function $f_b:X_\alpha\rightarrow [0,1]$ such that $f_b(b)\subset\{0\}$ and $f_b(f_a^{-1}(\{0\})\cap c)\subset\{1\}$. Finally, since $f_a^{-1}(\{0\})\cap f_b^{-1}(\{0\})\cap c=\emptyset$ we can find a continuous function $f_c:X_\alpha\rightarrow [0,1]$ such that $f_c(c)\subset\{0\}$ and $f_c(f_a^{-1}(\{0\})\cap f_b^{-1}(\{0\}))\subset\{1\}$. Now define the function $f:X_\alpha\rightarrow\mathbb{R}^3$ by
$$f(x)=(\kappa_a(x),\kappa_b(x),\kappa_c(x)),$$
where $\kappa_a:X_\alpha\rightarrow [0,1]$ is defined by
$$\kappa_a(x)=\frac{f_a(x)}{f_a(x)+f_b(x)+f_c(x)},$$
and $\kappa_b$ and $\kappa_c$ are likewise defined. The function $f$ maps $X_\alpha$ into the triangle that is the convex hull of the points $\{(0,0,1),(0,1,0),(1,0,0)\}$ in $\mathbb(R)^3$ just as in subsection~\ref{theta4} and from this point on we can follow the method in subsection~\ref{theta4} to find a continuum $X_{\alpha+1}$ and a continuous onto map $f_{\alpha+1}:X_{\alpha+1}\rightarrow X_\alpha$ such that there exist $x,y,z\in 2^{X_{\alpha+1}}$ such that $f_{\alpha+1}^{-1}(a)\subset x$, $f_{\alpha+1}^{-1}(b)\subset y$ and $f_{\alpha+1}^{-1}(c)\subset z$, $x\cap y\cap z=\emptyset$ and $x\cup y\cup z= X_{\alpha+1}$. Now let $\mathcal{B}_{\alpha+1}$ be a base for the closed sets of $X_{\alpha+1}$ such that $\{(f_{\alpha+1})^{-1}(B):B\in\mathcal{B}_\alpha\}\cup \{x,y,z\}\subset\mathcal{B}_{\alpha+1}$ and $\vert\mathcal{B}_{\alpha+1}\vert\leq w(X)$. Enumerate the set of triples of $\mathcal{B}_{\alpha+1}\setminus\{\emptyset\}$ with empty intersection as $\{G_\tau^{\alpha+1}:\tau<\Gamma_{\alpha+1}\}$, where $\Gamma_{\alpha+1}$ is some ordinal number less than or equal to $w(X)$.
In a similar way we can find an (transfinite) inverse sequence such that the inverse limit is a hereditarily indecomposable continuum of the same weight as $X$ and for which the map $\pi_0$ is a continuous onto map between the limit and the space $X$.
As in the previous section we can combine these two (we take care of the hereditary indecomposability at even ordinal stages and we take care that the dimension of the limit space will not exceed one at the odd ordinal stages), to find a transfinite inverse sequence such that the inverse limit is a one-dimensional hereditarily indecomposable continuum that admits a continuous map onto the space $X$. After some thought, as in the previous section we see that this continuous map is in fact a weakly confluent map.
\end{document} | arXiv |
\begin{document}
\title[Cohomology of modular form connections on complex curves]
{Cohomology of modular form connections on complex curves}
\author{A. Zuevsky}
\address{Institute of Mathematics \\ Czech Academy of Sciences\\ Zitna 25, Prague \\ Czech Republic}
\email{[email protected]}
\begin{abstract}
We consider reduction cohomology of modular functions defined on complex curves
via generalizations of holomorphic connections.
The cohomology is explicitly found in terms of higher genus counterparts of
elliptic functions as analytic continuations of solutions for functional equations.
Examples of modular functions on various genera are provided.
\end{abstract}
\keywords{Cohomology; Complex curves; Modular functions; Elliptic functions; Quasi-Jacobi forms}
\maketitle
\section{Introduction}
The natural problem of computation of continuous cohomologies
for non-commutative structures
on manifolds
has proven to be a subject of great geometrical interest \cite{BS, Kaw, PT, Fei, Fuks, Wag}.
As it was demonstrated in \cite{Fei, Wag}, the ordinary Gelfand-Fuks cohomology of the Lie algebra of
holomorphic vector fields on complex manifolds
turns to be not the most effective and general one.
For Riemann surfaces, and even for higher dimension complex manifolds, the classical cohomology of
vector fields becomes trivial \cite{Kaw}.
The Lie algebra of
holomorphic vector fields does not always work for cohomology.
For example, it is zero for a
compact Riemann surface of genus greater than one.
In \cite{Fei} Feigin obtained various results concerning (co)-homology of
the Lie algebra cosimplicial objects of holomorphic vector fields $Lie(M)$.
Inspite of results in previous approaches, it is desirable to
find a way to enrich cohomological structures. This motivates
constructions of more refined cohomology description for non-commutative
algebraic structures.
In \cite{BS}, it have been
proven that the Gelfand-Fuks cohomology $H^*(Vect(M))$ of vector fields on a smooth compact manifold $M$
is isomorphic to the singular cohomology of the space of continuous cross sections of a certain
fibre bundle over $M$.
The main aim of this paper is to introduce and compute the reduction cohomology of modular functions
on complex curves
\cite{FK, Bo, Gu, A}.
Due to structure of modular forms \cite{FMS, BKT, Fo} and reduction relations
\cite{Y, Zhu, MTZ, GT, TW} among them,
one can form chain complexes of $n$-point modular forms that
are fine enough to describe local geometry
of complex curves.
In contrast to more geometrical methods, e.g., of ordinary cosimplicial cohomology for
Lie algebras \cite{Fei, Wag},
the reduction cohomology pays more attention to the analytical and modular structure of elements
of chain complex spaces.
Computational methods involving reduction formulas proved their effectiveness in
conformal field theory, geometrical descriptions of intertwined modules for
Lie algebras,
and differential geometry of integrable models.
In section \ref{chain} we give the definition of the reduction cohomology as well as lemma
relating it to the cohomology of generalized connections on $M$.
The main proposition explicitly expressing the reduction cohomology in terms of
spaces of generalized elliptic functions on $M$ is proven.
In Appendix \ref{examples} we provide examples of reduction formulas for various modular functions.
Results of this paper are useful for cosimplisial cohomology theory of smooth manifolds,
generalizations of the Bott-Segal theorem, and
have their consequences in conformal field theory
\cite{Fei, Wag}, deformation theory, non-commutative geometry, modular forms,
and the theory of foliations.
\section{The chain complex and cohomology}
\label{chain}
\subsection{Chain complex spaces of $n$-variable modular forms}
In this section we introduce the chain complex spaces for modular functions
on complex curves \cite{EZ, Zag, Zhu, Miy, Miy1, MTZ, GT, TW}.
Mark $n$ points ${\bf p}_n=(p_1, \ldots, p_n)$ on a compact complex curve $M$ of genus $g$.
Denote by ${\bf z}_n=(z_1, \ldots, z_n)$ local coordinates around ${\bf p}_n \in M$.
On genus $g$ complex curves an $n$-point modular function
$\mathcal Z \left({\bf z}_n, \mu\right)$ has
certain specific form depending on $g$, $M$ (cf. \cite{Y}) and kind of modular form.
In addition to that, it depends on
a set of moduli parameters $\mu \in {\mathcal M}$
where we denote by $\mathcal M$ a subset of
the moduli space of genus $g$ complex curve $M$.
\begin{definition}
On a complex curve $M$ of genus $g$,
we consider the spaces of $n$-point modular forms with moduli parameters $\mu$.
\begin{equation}
\label{cnmu}
C^n(\mu)= \left\{
\mathcal Z \left({\bf z}_n, \mu\right), n \ge 0
\right\},
\end{equation}
that possess reduction formulas.
\end{definition}
The co-boundary operator $\delta^n({\bf z}_{n+1})$ on $C^n(\mu)$-space is defined
according to the reduction formulas for $\mu$-modular functions
(cf. particular examples in Appendix \ref{examples}, \cite{Fo, Zhu, MTZ, GT, TW}).
\begin{definition}
For $n \ge 0$, and any $z_{n+1} \in \mathbb{C}$, define
\begin{eqnarray}
\label{delta_operator}
\delta^n: C^n(\mu)&{\rightarrow }& C^{n+1}(\mu),
\end{eqnarray}
given by
\begin{eqnarray}
\label{poros}
&& \delta^n({\bf z}_{n+1}) \; \mathcal Z \left( {\bf z}_n, \mu \right) =
\sum\limits_{l=1}^{l(g)} \sum\limits_{k=0}^{n} \sum\limits_{m \ge 0}
f_{k,l, m} \left( {\bf z}_{n+1}, l, \mu \right) \; T_{l, k, m} (\mu).\mathcal Z_{n} \left( {\bf z}_n, \mu \right),
\end{eqnarray}
where $l(g) \ge 0$ is a constant depending on $g$, and the meaning of indexes $1 \le k \le n$,
$1 \le l \le l(g)$, $m \ge 0$ explained below.
\end{definition}
For each particular genus $g\ge 0$ of $M$ and type of modular form defined by the moduli parameter $\mu$,
known operator-valued functions
$f_{k,l,m} ({\bf z}_{n+1}, \mu ) T_{k, l, m}(\mu).$
change the $k$-argument of $\mathcal Z \left( {\bf z}_n, \mu \right)$
by changing $\mu$.
The reduction formulas have the form:
\begin{equation}
\label{reduction}
\mathcal Z \left( {\bf z}_{n+1} , \mu \right)= \delta^n ({\bf z}_{n+1}).
\mathcal Z\left( {\bf z}_n, \mu \right).
\end{equation}
For $n \ge 0$,
let us denote by ${\mathfrak Z}_n$ the domain of all ${\bf z}_{n} \in \mathbb{C}^n$, such that
the chain condition
\begin{equation}
\label{torba}
\delta^{n+1}({\bf z}_{n+1})\; \delta({\bf z}_{n}). \mathcal Z \left({\bf z}_n, \mu \right)=0,
\end{equation}
for the coboundary operators \eqref{poros} for spaces $C^n(\mu)$
is satisfied.
Explicitly,
the chain condition \eqref{torba}
leads to an infinite $n \ge 0$ set of equations involving functions
$f_{k, l, m} \left({\bf z}_{n+1}, \mu \right)$ and
$\mathcal Z\left({\bf z}_n, \mu \right)$:
\begin{eqnarray}
\label{conditions}
\sum\limits_{l'=1 \atop l=1}^{l(g)}
\sum\limits_{k'\atop k=1}^{n+1 \atop n } \sum\limits_{m'\atop m \ge 0}
f_{k',l',m'}\left({\bf z}_{n+2}, \mu \right)
f_{k, m, l}\left({\bf z}_{n+1}, \mu \right)
T_{k', l', m'}(\mu) T_{k,l,m} (\mu).\mathcal Z\left( {\bf z}_n, \mu \right)=0. &&
\end{eqnarray}
\begin{definition}
The spaces with conditions \eqref{conditions} constitute a chain complex
\begin{equation}
\label{buzova}
0 \longrightarrow
C^0 \stackrel {\delta^{0}} {\longrightarrow} C^1
\stackrel {\delta^{1}} {\longrightarrow}
\ldots \stackrel{\delta^{n-2}} {\longrightarrow} C^{n-1} \stackrel{\delta^{n-1}} {\longrightarrow} C^n \longrightarrow \ldots.
\end{equation}
For $n \ge 1$,
we call corresponding cohomology
\begin{equation}
\label{pusto}
H^n(\mu)={\rm Ker}\; \delta^{n}({\bf z}_{n+1})/{\rm Im}\; \delta^{n-1}({\bf z}_n),
\end{equation}
the $n$-th reduction cohomology of $\mu$-modular forms
on a complex curve $M$.
\end{definition}
\begin{remark}
Note that the reduction cohomology can be defined as soon as for a type of modular functions
there exist reduction formulas \eqref{reduction}.
\end{remark}
Operators $T_{k, l, m}(\mu)$, $0 \le l \le l(g)$, $m\ge 0$, $1 \le k \le n$,
form a set of generators of an infinite-dimensional continual Lie algebra $\mathfrak g(\mu)$
endowed with a natural grading indexed $l$, $m$.
Indeed, we set the space of functions $\mathcal Z({\bf z}_n, \mu)$
as the base algebra \cite{Sav, SV1, SV2, V} for the continual Lie algebra $\mathfrak g(\mu)$, and
the generators as
\begin{eqnarray}
X_{k, l, m} \left(\mathcal Z\left({\bf z}_n, \mu \right)\right) &=&
T_{k, l, m }(\mu ). \mathcal Z \left({\bf z}_n, \mu \right),
\end{eqnarray}
for $0 \le l \le l(g)$, $m \ge 0$, $1 \le k \le n$.
Then the commutation relations for non-commutative operators $T_{k, l, m}.$
$1 \le k \le n$ inside $\mathcal Z({\bf z}_n, \mu)$ represent the
commutation relations of the continual Lie algebra $\mathfrak g(\mu)$.
Jacobi identities for $\mathfrak g(\mu)$ follow from Jacobi identities of the Lie algebra of
operators $T_{k, l, m}$.
\subsection{Geometrical meaning of reduction formulas and conditions \eqref{conditions}}
In this section we show that the reduction formulas have the form of multipoint connections
generalizing ordinary holomorphic connections on complex curves \cite{Gu}.
Let us define the notion of a multipoint connection which will be useful for identifying
reduction cohomology in section \ref{cohomology}.
Motivated by the definition of a holomorphic connection
for a holomorphic bundle \cite{Gu} over
a smooth complex curve $M$, we introduce the definition of
a multiple point connection over $M$.
\begin{definition}
\label{mpconnection}
Let $\mathcal{V}$ be a holomorphic vector bundle on $M$, and $M_0 \subset M$ be its subdomain.
Denote by ${\mathcal S \mathcal V}$ the space of sections of $\mathcal{V}$.
A multi-point
connection $\mathcal G$ on $\mathcal{V}$
is a $\mathbb{C}$-multi-linear map
\[
\mathcal G: M^{ n} \to \mathbb{C},
\]
such that for any holomorphic function $f$, and two sections $\phi(p)$ and $\psi(p')$ of $\mathcal{V}$ at points
$p$ and $p'$ on $M_0 \subset M$ correspondingly, we have
\begin{equation}
\label{locus}
\sum\limits_{q, q' \in M_0 \subset M} \mathcal G\left( f(\psi(q)).\phi(q') \right) = f(\psi(p')) \;
\mathcal G \left(\phi(p) \right) + f(\phi(p)) \; \mathcal G\left(\psi(p') \right),
\end{equation}
where the summation on left hand side is performed over locuses of points $q$, $q'$ on $M_0$.
We denote by ${\mathcal Con}_{n}$ the space of $n$-point connections defined over $M$.
\end{definition}
Geometrically, for a vector bundle $\mathcal{V}$ defined over $M$,
a multi-point connection \eqref{locus} relates two sections $\phi$ and $\psi$ at points $p$ and $p'$
with a number of sections on $M_0 \subset M$.
\begin{definition}
We call
\begin{equation}
\label{gform}
G(\phi, \psi) = f(\phi(p)) \; \mathcal G\left(\psi(p') \right) + f(\psi(p')) \; \mathcal G \left(\phi(p) \right)
- \sum\limits_{q, q' \in M_0 \subset \mathcal X} \mathcal G\left( f(\psi(q')).\phi(q) \right),
\end{equation}
the form of a $n$-point connection $\mathcal G$.
The space of $n$-point connection forms will be denoted by $G^n$.
\end{definition}
Here we prove the following
\begin{lemma}
\label{pisaka}
$n$-point modular functions of the space $\left\{ \mathcal Z\left({\bf z}_n, \mu \right), n \ge 0 \right\}$
form $n$-point connections.
For $n\ge 0$, the reduction cohomology of a compact complex curve of genus $g$ is
\begin{equation}
\label{chek}
H^n(\mu) = {\mathcal Con}^{n}/G^{n-1}.
\end{equation}
\end{lemma}
\begin{proof}
For non-vanishing $f(\phi(p))$ let us write set
\begin{eqnarray}
\label{identifications}
\mathcal G &=&\mathcal Z \left({\bf z}_n, \mu \right),
\\
\psi(p')&=&\left({\bf z}_{n+1}, \mu \right),
\nonumber \\
\phi(p)&=&\left({\bf z_n}, \mu \right),
\nonumber \\
\mathcal G\left( f(\psi(q)).\phi(q') \right) &=& T^{(g)}_{k, l, m}(\mu). \mathcal Z\left( {\bf z}_n, \mu \right),
\nonumber \\
- \frac{f(\psi(p'))}{ f(\phi(p))} \;
\mathcal G \left(\phi(p) \right)&=&
\sum\limits_{l=1}^{l(g)}
f_{0, l, m}\left({\bf z}_{n+1}, \mu \right) \; T_{0, l, m}. \mathcal Z \left( {\bf z}_n, \mu \right),
\nonumber \\
\frac{1}{f(\phi(p)) } \sum\limits_{{q}_n, { q'}_n \in \atop {\mathcal X}_0 \subset M}
\mathcal G\left( f(\psi(q)).\phi(q') \right)
&=&
\sum\limits_{k=1}^{n} \sum\limits_{m \ge 0}
f_{k,l,m}\left( {\bf z}_{n+1}, \mu \right)
\nonumber
T_{k, l, m}(\mu). \mathcal Z \left( {\bf z}_n, \mu \right).
\end{eqnarray}
Thus, the formula \eqref{identifications} gives \eqref{reduction}.
\end{proof}
The geometrical meaning of \eqref{conditions} consists in the following.
Due to modular properties of $n$-point functions $\mathcal Z({\bf z}_n, \mu)$,
\eqref{conditions} is also interpreted as relations among modular forms.
The condition \eqref{reduction}
defines a complex variety in ${\bf z}_n \in \mathbb{C}^{n}$.
As most identities
(e.g., trisecant identity \cite{Fa, Mu} and triple product identity \cite{MTZ})
for $n$-point functions \eqref{conditions} has its algebraic-geometrical meaning.
The condition \eqref{conditions} relates finite series of modular functions on $M$
with rational function coefficients (at genus $g=0$) \cite{Zhu}, or (deformed) elliptic functions (at genus $g=1$)
\cite{Zhu, MTZ}, or
generalizations of classical elliptic functions (at genus $g \ge 2$) \cite{GT, TW}.
\subsection{Cohomology}
\label{cohomology}
In this section we compute the reduction cohomology defined by \eqref{buzova}--\eqref{pusto}.
The main result of this paper is the following
\begin{proposition}
The $n$-th reduction cohomology of the spaces $C^n(\mu)$ \eqref{cnmu}
of modular forms $\mathcal Z({\bf z}_n, \mu)$
is the space of recursively generated (by reduction formulas \eqref{reduction})
functions
with $z_i \notin {\mathfrak Z}_{i}$,
for $1 \le i \le n$,
satisfying the condition
\begin{eqnarray}
\label{poroserieroj}
&& \sum\limits_{l=1}^{l(g)} \sum\limits_{k=1}^{n} \sum\limits_{m \ge 0}
f_{k,l,m} \left( {\bf z}_{n+1}, \mu \right) \; T_{l, k, m}.
\mathcal Z\left( {\bf z}_{n}, \mu \right)=0.
\end{eqnarray}
\end{proposition}
\begin{remark}
The first cohomology
is given by
the space of transversal (i.e., with vanishing sum over $q$, $q'$)
one-point connections $\mathcal Z \left(x_1, \mu \right)$ provided by
coefficients in terms of series of special functions.
The second cohomology is given by a space of
generalized higher genus
complex kernels corresponding to $M$.
\end{remark}
\begin{proof}
By definition \eqref{pusto},
the $n$-th reduction cohomology is defined by the subspace of $C^{n}(\mu)$ of functions
$\mathcal Z\left({\bf z}_n, \mu \right)$ satisfying \eqref{poroserieroj}
modulo the subspace of $C^{n}(\mu)$ $n$-point modular functions
$\mathcal Z\left({\bf z}'_n, \mu \right)$ resulting from:
\begin{eqnarray}
\label{poroserieroj_2}
\mathcal Z \left( {\bf z}'_n, \mu \right) &=& \sum\limits_{l=1}^{l(g)}
\sum\limits_{k=1}^{n-1} \sum\limits_{m \ge 0}
f_{k, l, m} \left( {\bf z}'_n, \mu \right) \; T_{k, l, m}.
\; \mathcal Z\left( {\bf z}'_{n-1}, \mu \right).
\end{eqnarray}
We assume that, subject to other fixed $\mu$-parameters, $n$-point modular functions are completely
determined by all choices ${\bf z}_n \in \mathbb{C}^n$.
Thus, the reduction cohomology can be treated as depending on set of ${\bf z}_n$ only
with appropriate action of endomorphisms generated by $z_{n+1}$.
Consider a non-vanishing solution $\mathcal Z \left({\bf z}_n, \mu \right)$
to \eqref{poroserieroj} for some ${\bf z}_n$.
Let us use the reduction formulas \eqref{reduction} recursively for each $z_i$, $1 \le i \le n$ of ${\bf z}_n$
in order to express
$\mathcal Z \left({\bf z}_n, \mu \right)$ in terms of nul-point modular form
$\mathcal Z\left( \mu \right)$, i.e., we obtain
\begin{equation}
\label{topaz}
\mathcal Z \left({\bf z}_n, \mu \right)= {\mathcal D}({\bf z}_n, \mu) \mathcal Z \left( \mu\right),
\end{equation}
as in \cite{MTZ}.
It is clear that $z_i \notin {\mathfrak Z}_{i}$ for $1 \le i \le n$,
i.e., at each stage of the recursion procedure towards
\eqref{topaz}, otherwise $\mathcal Z\left({\bf z}_n, \mu \right)$ would be zero.
Thus, $\mathcal Z \left({\bf z}_n, \mu \right)$ is explicitly known and
is repsented as a series of auxiliary functions ${\mathcal D}({\bf z}_n)$
depending on moduli space parameters $\mu$.
Consider now $\mathcal Z\left({\bf z}'_n \right)$ given by \eqref{poroserieroj_2}.
It is either vanishes when $z_{n-i} \in {\mathfrak Z}_{n-i}$, $ 2 \le i \le n$, or
given by \eqref{topaz} with ${\bf z}'_n$ arguments.
The general idea of deriving reduction formulas is to consider the double integration of
$\mathcal Z \left({\bf z}_n \right)$ along small circles around two auxiliary variables
with the action of reproduction kernels inserted.
Then, these procedure leads to recursion formulas relating $\mathcal Z({\bf z}_{n+1}, \mu)$ and
$\mathcal Z({\bf z}_n, \mu)$ with
functional coefficients depending on the nature of corresponding modular functions, and $M$.
In \cite{Y, MTZ} formulas to $n$-point modular functions in various specific examples
were explicitely and recursively obtained.
In terms of $z_{n+1}$, we are able to transfer in \eqref{poroserieroj} the action
of
$T_{k,l,m}$-operators
into an analytical continuation of
$\mathcal Z \left({\bf z}_n, \mu \right)$
multi-valued holomorphic functions to domains $D_{n} \subset M$ with
$z_{i} \neq z_{j}$ for
$i\ne j$.
Namely, in \eqref{poroserieroj},
the operators $T_{k, l, m}$ shift the
formal parameters ${\bf z}_n$ by $z_{n+1}$, i.e., ${\bf z}'_n= {\bf z}_n + z_{n+1}$.
Thus,
the $n$-th reduction cohomology is given by the space of
analytical continuations of $n$-point modular functions
$\mathcal Z \left({\bf z}_n, \mu \right)$ with ${\bf z}_{n-1} \notin {\mathfrak Z}_{n-1}$
that are solutions of
\eqref{poroserieroj}.
\end{proof}
\section*{Acknowledgments}
The author would like to thank
H. V. L\^e and A. Lytchak
for related discussions.
Research of the author was supported by the GACR project 18-00496S and RVO: 67985840.
\section{Appendix: Examples}
\label{examples}
The reduction cohomology depends on the kind of modular forms (via moduli parameters which we denote $\mu$) and
genus of $M$.
Modular functions we consider in this section satisfy certain modular properties with respect to corresponding groups
\cite{Zhu, MTZ, GT, TW}.
As it was shown in \cite{Miy, KMI, KMII}, existence of reduction formulas is related in some sense to modularity.
\subsection{Rational case}
\label{sphere}
In (cf., e.g., \cite{Zhu}) we find
for the rational case $n$-point functions,
the reduction formulas
\begin{eqnarray}
\label{zhu_reduction_genus_zero_1}
\mathcal Z({\bf z}_{n+1}, \mu) = \sum\limits_{k=0}^{n} \sum\limits_{m \ge 0}
f_{k, m}(z_{n+1}, z_k) \; T_{k, m}. \; \mathcal Z({\bf z}_{n}, \mu),
\end{eqnarray}
where
$f_{k, m} (z, w)$ is a rational function defined by
\[
f_{n,m}(z,w) =
\frac{z^{-n}}{m!}\left(\frac{d}{dw}\right)^m \frac{w^n}{z-w},
\]
\[
\iota_{z,w}f_{n,m}(z,w) = \sum\limits_{j\in {\mathbb N}}\left( { n+j \atop m}\right) z^{-n-j-1}w^{n+j-1}.
\]
Let us take $z_{n+1}$ as the variable of expansion.
Then the $n$-th reduction cohomology $H^n(\mu)$ is given by the space of rational functions
recursively generated by \eqref{reduction} with ${\bf z}_n \notin {\mathfrak Z}_n$, satisfying
\eqref{poroserieroj}
with rational function coefficients $f_{k, m}(z_{n+1}, z_k)$,
and
modulo the space of $n$-point functions obtained by the recursion procedure,
not given by $\delta^{n-1} \mathcal Z({\bf z}_{n-1}, \mu)$.
It is possible to rewrite \eqref{poroserieroj}, in the form
\begin{eqnarray}
\label{perdolo1}
\left( \partial_{z_{n+1}}
+ \sum\limits_{k=1}^{n}
\widetilde{f}^{(0)}_{k, m} (z_{n+1}, z_k) \right) \; \mathcal Z ({\bf z}_n + (z_{n+1})_k, \mu )=0,
\end{eqnarray}
which is an equation for an analytical continuation of
$\mathcal Z ({\bf z}_n + (z_{n+1})_k, \mu)$ with different functions $\widetilde{f}_{k,m}$.
Using the reduction formulas \eqref{reduction} we obtain
\[
\mathcal Z({\bf z}_n + (z_{n+1})_k, \mu)= {\mathcal D} ({\bf z}_{n+1}, \mu),
\]
where ${\mathcal D}({\bf z}_{n+1}, \mu)$ is given by the series of
rational-valued functions in ${\bf z}_{n+1} \notin {\mathfrak Z}_n$ resulting from the recursive procedure
starting from $n$-point function to the partition function.
Thus, in this example, the $n$-th cohomology
is the space of analytic extensions of rational function solutions to
the equation \eqref{poroserieroj} with rational function coefficients.
\subsection{Modular and elliptic functions}
For a variable $x$, set
$D_x = \frac{1}{2\pi i} \partial_x$,
and $q_x = e^{2\pi i x}$.
Define for
$m\in\mathbb{N}=\{\ell\in \mathbb{Z}: \ell>0\}$,
the elliptic Weierstrass
functions
\begin{eqnarray}
P_{1}(w,\tau) =&-\sum_{n\in\mathbb{Z}\backslash \{0\}}\frac{q_w^n}{1-q^{n}}-\frac{1}{2},
\\ P_{m+1}(w,\tau) =&\frac{\left(-1\right)^m}{m!} D_w^m \left(P_{1}(w,\tau)\right) =\frac{(-1)^{m+1}}{m!}\sum_{n\in\mathbb{Z}\backslash \{0\}}\frac{n^m q_w^n}{1-q^{n}}.
\label{eq:Pm}
\end{eqnarray}
Next, we have
\begin{definition}
The modular Eisenstein series $E_{k}(\tau)$, defined by
$E_{k}=0$ for $k$
for odd and
$k\ge 2$ even
\begin{align}
E_{k}(\tau)&=-\frac{ B_{k}}{k!}+\frac{2}{(k-1)!}
\sum\limits_{n\geq 1}\frac{n^{k-1}q^{n}}{1-q^{n}}, \notag
\label{eq:Eisen}
\end{align}
where $B_{k}$ is the $k$-th Bernoulli number
defined by
\[
(e^z-1)^{-1} = \displaystyle{\sum\limits_{k\geq 0}\frac{B_{k}}{k!}z^{k-1}}.
\]
\end{definition}
It is convenient to define
$E_{0}=-1$.
$E_{k}$ is a modular form for $k>2$ and a quasi-modular form for $k=2$.
Therefore,
\begin{align}
E_{k}(\gamma \tau )=(c\tau +d)^{k} E_{k}(\tau )-\delta_{k,2} \frac{ c(c\tau +d)}{2\pi i}.
\notag
\label{eq:Engam}
\end{align}
\begin{definition}
For $w$, $z\in\mathbb{C}$, and $\tau \in \mathbb{H}$ let us define
\begin{align*}
\widetilde{P}_1(w, z,\tau) =-\sum_{n\in\mathbb{Z}}\frac{q_w^n}{1-q_z q^n}.
\end{align*}
\end{definition}
We also have
\begin{definition}
\begin{equation}
\widetilde{P}_{m+1}(w,z,\tau) =\frac{(-1)^{m}}{m!} D_w^m \left(\widetilde{P}_1(w,z,\tau)\right) =\frac{(-1)^{m+1} }{m!} \sum_{n\in\mathbb{Z}}\frac{n^m q_w^n}{1-q_zq^n}.
\label{eq:Pmtilde}
\end{equation}
\end{definition}
It is thus useful to give
\begin{definition}
For $m\in\mathbb{N}_0$, let
\begin{eqnarray}
\label{eq:PellPm}
P_{m+1, \lambda}\left(w,\tau\right) &=&
\frac{(-1)^{m+1}}{m!}\sum_{n\in \mathbb{Z}\backslash \{-\lambda\}}\frac{n^mq_w^n}{1-q^{n+\lambda}}.
\end{eqnarray}
\end{definition}
On notes that
\[
P_{1,\lambda}\left(w,\tau\right)=q_w^{-\lambda}(P_1(w,\tau)+1/2),
\]
with
\begin{align} P_{m+1,\lambda}\left(w,\tau\right)&=\frac{(-1)^m}{m!} D_w^m \left(P_{1,\lambda}\left(w,\tau\right)\right). \notag \end{align}
We also consider the expansion
\begin{align} \notag
P_{1,\lambda}(w,\tau)=\frac{1}{2\pi i w}-\sum_{k\ge 1} E_{k,\lambda}(\tau )(2\pi i w)^{k-1},
\end{align}
where we find \cite{Zag}
\begin{align}
E_{k,\lambda}(\tau )&=\sum_{j=0}^{k}\frac{\lambda^j}{j!} E_{k-j}(\tau).
\label{eq:Gkl} \end{align}
\begin{definition}
We define another generating set $\widetilde{E}_k(z,\tau)$
for $k\ge 1$ together with $E_2(\tau)$ given by
\cite{Ob}
\begin{align}
\widetilde{P}_1(w,z,\tau) = \frac{1}{2\pi i w}-\sum_{k\ge 1}\widetilde{E}_k(z,\tau) (2\pi i w)^{k-1},
\label{eq:P1Gn}
\end{align}
where we find that for $k\ge 1$,
\begin{equation} \label{eq:Gktild} \begin{aligned}
\widetilde{E}_k(z,\tau) =&
-\delta_{k,1}\frac{q_z}{q_z-1} -\dfrac{B_{k}}{k!} +\frac{1}{(k-1)!} \sum_{m,n\ge 1}\left(n^{k-1} q_z^{m}+(-1)^{k}n^{k-1}q_z^{-m} \right)q^{mn},
\end{aligned}
\end{equation}
and $\widetilde{E}_0(z,\tau)=-1$.
\end{definition}
\subsection{Elliptic case}
\label{torus}
Let $q=e^{2\pi i \tau}$, $q_{i}=e^{z_{i}}$, where $\tau$ is the torus modular parameter.
Then the genus one Zhu recursion formula is given by the following \cite{Zhu}
\begin{eqnarray}
\mathcal Z({\bf z}_{n+1}, \mu, \tau) = \mathcal Z \left( {\bf z}_n, \mu_{0}, \tau \right)
+ \sum\limits_{k=1}^{n} \sum\limits_{m \geq 0}
P_{m+1}
(z_{n+1}-z_{k},\tau )\;
\mathcal Z({\bf z}_n, \mu_{k,m}, \tau).
\label{zhu_reduction_genus_one}
\end{eqnarray}
Here $P_{m}(z,\tau)$ denote higher Weierstrass functions defined by
\[
P_{m}
(z,\tau )=\frac{(-1)^{m}}{(m-1)!}\sum\limits_{n\in {\mathbb Z}_{\neq 0}
} \frac{n^{m-1}q_{z}^{n}}{1-q^{n}}.
\]
\subsection{Case of deformed elliptic functions}
Let $w_{n+1} \in \mathbb{R}$ and define
$\phi \in U(1)$ by
\begin{equation}
\phi =\exp (2\pi i \; w_{n+1} ). \label{phi0}
\end{equation}
For some $\theta \in U(1)$,
we obtain the following generalization of Zhu's Proposition 4.3.2 \cite{Zhu} for the $n$-point function \cite{MTZ}:
\begin{theorem}
\label{Theorem_npt_rec0}
Let $\theta $ and $\phi $\ be as as above. Then for
any ${\bf z}_{n} \in C^{n}$ we have
\begin{eqnarray}
\mathcal Z \left({\bf x_{n+1}}, \mu, \tau \right) &=&
\delta _{\theta, 1} \delta _{\phi, 1} \mathcal Z \left({\bf x_{n}}, \mu_{0}, \tau \right)
\notag \\ &+& \sum\limits_{k=1 \atop m \geq 0}^{n} p(n,k) \; P_{m+1}\left[ \begin{array}{c} \theta \\ \phi \end{array} \right] (z_{n+1}-z_{k},\tau) \;
\mathcal Z ( {\bf z}_{n}; \mu_{k, m}, \tau). \label{nptrec0}
\end{eqnarray}
\end{theorem}
The deformed Weierstrass function is defined as follows \cite{MTZ}.
Let $(\theta ,\phi )\in U(1)\times U(1)$ denote a pair of modulus one
complex parameters with $\phi =\exp (2\pi i\lambda )$ for $0\leq \lambda <1$.
For $z\in \mathbb{C}$ and $\tau \in \mathbb{H}$ we define deformed Weierstrass functions for $k\geq 1$, \begin{equation*} P_{k}\left[ \begin{array}{c} \theta \\ \phi \end{array} \right] (z,\tau )=\frac{(-1)^{k}}{(k-1)!}\sum\limits_{n\in \mathbb{Z} +\lambda }^{\prime }\frac{n^{k-1}q_{z}^{n}}{1-\theta ^{-1}q^{n}}, \label{Pkuv} \end{equation*} for $q=q_{2\pi i\tau }$ where $\sum\limits^{\prime }$ means we omit $n=0$ if $(\theta ,\phi )=(1,1)$.
\subsection{Reduction formulas for Jacobi $n$-point functions}
\label{redya}
In this subsection we recall the reduction formulas derived in \cite{MTZ, BKT}.
For $\alpha \in \mathbb{C}$,
we now provide the following reduction formula for formal Jacobi $n$-point functions.
\begin{proposition}
\label{prop:Zhured}
Let ${\bf z}_{n+1}\in \mathbb{C}^{n+1}$,
$\alpha \in \mathbb{C}$.
For $\alpha z\notin {\mathbb{Z}\tau} +\mathbb{Z}$, we have
\begin{align}
\mathcal Z \left ({\bf z}_{n+1}, \mu, \tau \right)
=\sum_{k=1}^{n}\sum_{m\ge 0}
\widetilde{P}_{m+1} \left(\frac{z_{n+1}-z_k}{2\pi i}, \alpha z, \tau \right)
\mathcal Z \left ( {\bf z}_n, \mu_{k,m}, \tau \right).
\label{eq:ZhuRed}
\end{align}
\end{proposition}
Recall the definition of $\widetilde P$.
\begin{proposition}
\label{prop:Zhured0}
For $\alpha z=\lambda\tau+\mu\in {\mathbb{Z}\tau+\mathbb{Z}}$, we have
\begin{align}
&\mathcal Z \left( {\bf z}_{n+1}, \mu, \tau \right)\notag \\ &\quad =
e^{-z_{n+1} \lambda} \mathcal Z \left( {\bf z}_n, \mu_{0, \lambda}, \tau\right)
+\sum_{k=1}^{n} \sum_{m\ge 0}P_{m+1,\lambda} \left( \frac{z_{n+1}-z_k}{2\pi i}, \tau\right) \;
\mathcal Z \left( {\bf z}_n, \mu_{k, m}, \tau \right),
\label{eq:ZhuRed0}
\end{align}
with $P_{m+1,\lambda}\left(w ,\tau\right)$ defined in \eqref{eq:PellPm}.
\end{proposition}
Next we provide the reduction formula for Jacobi $n$-point functions.
\begin{proposition}
\label{prop:apnpt}
For $l\ge 1$ and $\alpha z
\notin {\mathbb{Z}\tau}+\mathbb{Z} $, we have
\begin{align}
&\mathcal Z \left ( {\bf z}_{n+1}), \mu_{1, -l}, \tau \right)
\notag \\ &\quad =
\sum_{m\ge 0}(-1)^{m+1}\binom{m+l-1}{m}\widetilde{G}_{m+l}(\alpha z,\tau)
\mathcal Z \left ( {\bf z}_{n}; \mu_{1, m} \tau \right)
\nonumber \\ & \quad +
\sum_{k=2}^{n}\sum_{m\ge 0}
(-1)^{l+1}
\binom{m+l-1}{m}
\widetilde{P}_{m+l} \left(\frac{z_1-z_k}{2\pi i}, \alpha z, \tau \right)
\mathcal Z\left( {\bf z}_n, \mu_{k, m}, \tau \right).
\label{eq:2ZhuRed} \end{align}
\end{proposition}
Propositions~\ref{prop:Zhured0} and \ref{prop:apnpt} imply the next result \cite{BKT}:
\begin{proposition}
\label{prop:apnpt0}
For $l \geq 1$ and $\alpha z = \lambda\tau+\mu\in {\mathbb{Z}\tau}+\mathbb{Z}$, we have
\begin{align}
&\mathcal Z\left(
{\bf z}_{n+1}, \mu_{1, -l}); B \right) \notag \\ &\quad =
(-1)^{l+1}\frac{\lambda^{l-1}}{(l-1)!}
\mathcal Z \left( {\bf z}_{n+1}, \mu_{0, -1}, \tau \right)
\notag \\ &\quad \quad
+\sum\limits_{m\ge 0}(-1)^{m+1}\binom{m+l-1}{m}
{E}_{m+l,\lambda}(\tau) \;
\mathcal Z\left (
{\bf z}_{n}, \mu_{1, m}, \tau \right)\notag \\ &\quad \quad +
\sum\limits_{k=2}^{n}\sum_{m\ge 0}
(-1)^{l+1}
\binom{m+l-1}{m}
P_{m+l,\lambda} \left(\frac{x_1-x_{k}} {2\pi i},\tau \right)\;
\mathcal Z\left( {\bf z}_n, \mu_{k, m}, \tau \right),
\notag
\label{eq:2ZhuRed0}
\end{align}
for ${E}_{k, \lambda}$ given in \eqref{eq:Gkl}.
\end{proposition}
\subsection{Multiparameter Jacobi forms}
For multiparameter Jacobi forms \cite{EZ, Zag, KMI, KMII, BKT}, the
reduction formulas are found using an analysis that is similar to that in \cite{Zhu, MTZ}.
The following two lemmas reduce any $n$-point multiparameter Jacobi function to a linear combination of
$(n-1)$-point Jacobi functions with modular coefficients.
\begin{lemma}
For each $1\leq j\leq m$
we have
\begin{align}
&\mathcal Z( {\bf z}_{n+1}, \mu, \tau) \notag \\
&=\delta_{ {\bf z}_n \cdot (\alpha)_n,
\mathbb{Z}}\; \mathcal Z ({\bf z}_n, (\alpha)_n, \mu(m) ) \\
& +\sum_{s=1}^n \sum_{k\geq 0} \tilde{P}_{k+1} (z_s -z, {\bf z}_n \cdot (\alpha_n), \tau ) \;
\mathcal Z ({\bf z}_n, \mu_{s, k} \tau),
\notag
\end{align}
where $\delta_{{\bf z}\cdot (\mu)_n, \mathbb{Z}}$ is $1$ if
${\bf z}_n\cdot (\mu)_n \in \mathbb{Z}$ and is $0$ otherwise.
\end{lemma}
\begin{lemma} \label{LemmaRecursion}
Let the assumptions be the same as in the previous lemma. Then for $p\geq 1$,
\begin{align}
&\mathcal Z( {\bf z}_{n+1}, \mu_{1, -p}, \tau) \notag \\
&= \delta_{{\bf z}_n \cdot (\alpha)_n, \mathbb{Z} } \; \delta_{p,1} \;
\mathcal Z({\bf z}_n, \mu_{0}, \tau )
\notag \\
& +(-1)^{p+1}\sum_{k\geq 0} \binom{k+p-1}{p-1} \tilde{E}_{k+p}
(\tau, {\bf z}_n \cdot (\alpha)_n ) \;
\mathcal Z ({\bf z}_n, \mu_{1, k}, \tau )
\notag \\
& +(-1)^{p+1} \sum_{s=2}^n \sum_{k\geq 0} \binom{k+p-1}{p-1}
\tilde{P}_{k+p}(z_s -z_1 ,\tau ,{\bf z}_n \cdot (\alpha)_n) \;
\mathcal Z ({\bf z}_n \mu_{s, k}, \tau).
\notag
\end{align}
\end{lemma}
\begin{remark}
The difference of a minus sign between these equations and those found in \cite{MTZ}
can be attributed to the minus sign difference in our definitions of the functions
$P_k \left[\begin{smallmatrix} \zeta \\ 1 \end{smallmatrix}\right] (w,\tau)$ and
the action of $\text{SL}_2 (\mathbb{Z})$.
\end{remark}
\subsection{Genus two counterparts of Weierstrass functions}
\label{derivation}
In this subsection we recall the definition of genus two Weierstrass functions \cite{GT}.
For $m$, $n\ge 1$,
we first define a number of infinite matrices and row and column vectors:
\begin{align}
\Gamma(m,n) &= \delta_{m, -n+2p-2},
\nonumber \\
\Delta(m,n) &= \delta_{m, n+2p-2}.
\label{eq:GamDelTh}
\end{align}
We also define the projection matrix
\begin{eqnarray}
\Pi =\Gamma^2=
\begin{bmatrix} \mathbbm{1}_{2p-1} & 0 \\ 0 & \ddots
\end{bmatrix} , \label{eq:PiK} \end{eqnarray}
where ${\rm Id}_{2p-3}$ denotes the $2p-3$ dimensional identity matrix and
${\rm Id }_{-1}=0$.
Let $\Lambda_a $ for
$a\in \{1,2\}$
be the matrix with components
\begin{eqnarray}
\Lambda_a (m,n;\tau_a,\epsilon) =
\epsilon^{(m+n)/2}(-1)^{n+1} \binom{m+n-1}{n}E_{m+n}(\tau_a).
\label{eq:Lambda}
\end{eqnarray}
Note that
\begin{align} \Lambda_a =SA_{a}S^{-1},
\label{eq:LamA}
\end{align}
for $A_a$ given by
\begin{align*}
A_{a}= A_{a}(k,l,\tau _{a},\epsilon )
= \frac{ (-1)^{k+1}\epsilon ^{(k+l)/2}}{\sqrt{kl}} \frac{(k+l-1)!}{(k-1)!(l-1)!}E_{k+l}(\tau_a).
\end{align*}
introduce the infinite dimensional matrices
for $S$ a diagonal matrix with components
\begin{align}
S(m,n)=\sqrt{m}\delta_{mn}. \label{eq:Sdef}
\end{align}
Let $\mathbb{R}(x)$ for $x$ on the torus be the row vector with components
\begin{align}
\mathbb{R}(x;m) = \epsilon^{\frac{m}{2}} P_{m+1} (x, \tau_a),
\label{eq:Rdef}
\end{align}
for $a\in \{1,2\}$.
Let $\mathbb{X}_a$ be the column vector with components
\begin{eqnarray}
\label{eq:Xadef}
\mathbb{X}_1(m) &=& \mathbb{X}_1 \left(m; z_{n+1},
{\bf z}_n; \mu \right)
\nonumber \\
&=& \epsilon^{-m/2} \sum_{u\in V}
\mathcal Z \left(
{\bf z}_k, \mu_{k,m}, \tau_1 \right) \;
\mathcal Z \left(
{\bf x}_{k+1, n}, \mu',
\tau_2 \right), \nonumber \\
\mathbb{X}_2(m) &=&
\mathbb{X}_2\left( m; z_{n+1},
{\bf z}_n; \mu \right)
\nonumber \\
&=& \epsilon^{-m/2} \sum_{u\in V}
\mathcal Z \left(
{\bf x}_k, \mu,
\tau_1\right)
\mathcal Z\left(
{\bf x}_{n-k},
\mu_{n-k, m}, \tau_2\right).
\end{eqnarray}
Introduce also
$\mathbb{Q}(p; x)$
an infinite row vector defined by
\begin{equation}
\mathbb{Q}(p; x) = \mathbb{R}(x) \Delta
\left( \mathbbm{1} - \widetilde{\Lambda}_{\overline{a}}\widetilde{\Lambda}_a \right)^{-1},
\label{eq:Qdef}
\end{equation}
for $x$ on the torus.
Notice that
\[
\widetilde{\Lambda}_a=\Lambda_a\Delta.
\]
One introduces
\[
\mathbb{P}_{j+1}(x)=\frac{(-1)^j}{j!}\mathbb{P}_{1}(x),
\]
and $j\ge 0$, is the column
with components
\begin{align}
\mathbb{P}_{j+1}(x;m)=\epsilon^{\frac{m}{2}}\binom{m+j-1}{j}
\left( P_{j+m}(x,\tau_a) - \delta_{j 0}E_m(\tau_a)\right).
\label{eq:P1jdef} \end{align}
\begin{definition}
One defines
\[
\mathcal{P}_{1}(p; x,y)=
\mathcal{P}_{1}(p; x,y; \tau_1, \tau_2, \epsilon),
\]
for $p \ge 1$ by
\begin{eqnarray*}
\mathcal{P}_{1} (p; x,y)
&=& P_1(x-y,\tau_a)- P_1(x,\tau_a)
\nonumber \\
&-& \mathbb{Q}(p; x) \widetilde{\Lambda}_{\overline{a}} \, \mathbb{P}_{1} (y)
-
(1-\delta_{p1})
\left(
\mathbb{Q}(p; x)\Lambda_{\overline{a}}\right) (2p-2),
\end{eqnarray*}
for $x$, $y$ on the torus, and
\begin{eqnarray*}
\mathcal{P}_{1}(p; x,y) &=& (-1)^{p+1} \Big[
\mathbb{Q}(p; x) \mathbb{P}_{1} (y)
+
(1-\delta_{p1}) \epsilon^{p-1}P_{2p-1}(x)
\nonumber \\
&+&
(1-\delta_{p1}) \left(
\mathbb{Q}(p; x) \widetilde{\Lambda}_{\overline{a}}\Lambda_a \right)(2p-2) \Big],
\end{eqnarray*}
for $x$ and
$y$ on two torai.
\end{definition}
For $j> 0$, define
\begin{eqnarray}
\mathcal{P}_{ j+1}(p; x,y)
&=& \frac{1}{j!} \partial_y^j \left(
\mathcal{P}_{1}(p; x,y) \right),
\nonumber \\
\mathcal{P}_{j+1}(p; x, y) &=&
\delta_{a, \bar{a} }P_{j+1}(x-y)+ (-1)^{j+1}.
\mathbb{Q}(p; x) \left(\widetilde{\Lambda}_{\overline{a}} \right)^{\delta_{a,\bar{a} } }\; \mathbb{P}_{j+1}(y).\;
\label{eq:PN21j}
\end{eqnarray}
\begin{definition}
One calls
$
\mathcal{P}_{j+1}(p; x,y)$
the genus two generalized Weierstrass functions.
\end{definition}
\subsection{Genus two case}
\label{corfu}
In this subsection we recall \cite{GT} the construction and reduction formulas for
modular funcitons defined on genus two complex curve.
In particular, we use the geometric construction developed in
\cite{Y}.
\begin{definition}
For a complex parameter $\epsilon= z_1 z_2$,
the null-point modular form is defined on a genus two complex curve by
\begin{equation}
\label{eq:Zdef}
\mathcal Z \left(\mu \right) = \sum_{r \geq 0} \epsilon^r
\mathcal Z (z_1, \mu_1 \tau_1) \; \mathcal Z (z_2, \mu_2, \tau_2),
\end{equation}
where parameters $\mu_1$ and $\mu_2$ are related.
\end{definition}
We then recall \cite{GT} the formal genus two reduction formulas for
$n$-point modular functions.
\begin{definition}
Let $x_{n+1}$,
${\bf y}_k$ and
${\bf y}'_l$ be inserted on two torai.
We consider the genus two $n$-point modular function
\begin{equation}
\mathcal Z \left(z_{n+1}, {\bf z}_k;
{\bf z}'_l, \mu)
\right)
=
\sum_{r\geq 0} \epsilon^r
\mathcal Z \left(z_{n+1},
{\bf x}_k, \mu_1, \tau_1 \right) \;
\mathcal Z \left(
{\bf x}'_l, \mu_2,
\tau_2 \right),
\label{Znvleft1}
\end{equation}
where the sum as in \eqref{eq:Zdef}.
\end{definition}
First, one defines the functions $\mathcal Z_{n, a}$ for $a\in \{1,2\}$, via elliptic quasi-modular forms
\begin{eqnarray*}
\mathcal Z_{n, 1} \left( {\bf z}_{n+1}; \mu \right)
&=& \sum_{r\geq 0} \epsilon^r
\mathcal Z ({\bf z}_{n+1}, {\bf z}_{k} \mu_{0}, \tau) \;
\mathcal Z_{n-k}
\left(
{\bf x}_{k+1, n}, \mu', \tau_2 \right),
\end{eqnarray*}
\begin{eqnarray*}
\mathcal Z_{n, 2} \left({\bf z}_{n+1}; \mu
\right)
&=&
\sum_{r\geq0} \epsilon^r
\mathcal Z_{ k}
\left(
{\bf x}_k, \mu', \tau_1\right) \;
\mathcal Z \left( z_{n+1}), {\bf z}_{k+1, n} \right),
\nonumber \\
\mathcal Z_{n, 3} \left( {\bf z}_{n+1}; \mu \right) &=& \mathbb{X}_1^\Pi,
\end{eqnarray*}
of \eqref{eq:Xadef}.
We also define
\begin{definition}
\label{calFforms}
Let $f^{(2)}_{a}(p; z_{n+1})$, for $p \ge 1$, and $a=1$, $2$ be given by
\begin{equation}
f^{(2)}_{a}( p;z_{n+1} )=
1^{\delta_{ba}} + (-1)^{ p\delta_{b\overline{a}} } \epsilon^{1/2} \left(
\mathbb{Q}(p; z_{n+1}) \left( \widetilde{\Lambda}_{ \overline{a} } \right)^{ \delta_{ba} } \right) (1),
\label{eq:calFadef}
\end{equation}
for $z_{n+1}\in\widehat{\Sigma}^{(1)}_b$.
Let
$f^{(2)}_3 (p; z_{n+1})$, for $z_{n+1} \in {\Sigma}^{(1)}_a$ be an infinite row vector given by
\begin{equation}
f^{(2)}_3(p; z_{n+1})=
\left( \mathbb{R}(z_{n+1}) +
\mathbb{Q}(p;z_{n+1}) \left(\widetilde{\Lambda}_{\overline{a}}\Lambda_{a}
+ \Lambda_{\overline{a}} \Gamma \right) \right)\Pi.
\label{eq:calFPidef}
\end{equation}
\end{definition}
In \cite{GT} it is proven that
the genus two $n=k+l$-point function
inserted at
$x_{n-k}$,
${\bf y}_k$ on two torai has the following reduction formula
\begin{eqnarray}
\mathcal Z ({\bf x}_{n+1}, \mu )
&=& \sum\limits_{l=1}^3
f_{l}(p; z_{n+1}) \; \mathcal Z_{n, l} \left({\bf z}_{n+1}; \mu \right),
\nonumber \\
&=& \sum_{i=1}^n \sum_{j\geq 0}
\mathcal{P}_{j+1}(p; z_{n+1}, z_i) \;
\mathcal Z \left( {\bf z}_n; \mu_{i, j} \right),
\label{eq:npZhured}
\end{eqnarray}
where $p$ is some parameter.
with $\mathcal{P}_{j+1}(p;x,y)$ of \eqref{eq:PN21j}.
\subsection{Genus $g$ generalizations of elliptic functions}
\label{derivation}
For purposes of the formula \eqref{eq:ZhuGenusg}
we recall here certain definitions \cite{TW}.
Define a column vector
\[
X=(X_{a}(m)),
\]
indexed by $ m\ge 0$ and $ a\in\mathcal{I}$ with components
\begin{align}\label{XamDef}
X_{a}(m)=\rho_{a}^{-\frac{m}{2}}\sum_{ \bm{\mu_{a,m}} }\mathcal Z(\ldots; w_a, \mu_{a, m}; \ldots),
\end{align} and a row vector
\[
p(x)=(p_{a}(x,m)),
\]
for $m\ge 0, a\in\mathcal{I}$ with components
\begin{align}\label{eq:pdef} p_{a}(x,m)=\rho_{a}^{\frac{m}{2}}\partial^{(0,m)}\psi_{p}^{(0)}(x,w_{a}). \end{align}
Introduce the
column vector
\[
G=(G_{a}(m)),
\]
for $m\ge 0, a\in\mathcal{I}$, given by
\begin{align*}
G=\sum_{k=1}^{n}\sum_{j\ge 0}\partial_{k}^{(j)} \; q(y_{k})\;
\mathcal Z( {\bf z}_n, \mu_{k, j} ),
\end{align*}
where $q(y)=(q_{a}(y;m))$, for $m\ge 0$, $a\in\mathcal{I}$, is a column vector with components
\begin{align} \label{eq:qdef} q_{a}(y;m)=(-1)^{p}\rho_{a}^{\frac{m+1}{2}}\partial^{(m,0)}\psi_{p}^{(0)}(w_{-a},y), \end{align} and
\[
R=(R_{ab}(m,n)),
\]
for $m$, $n\ge 0$ and $a$, $b\in\mathcal{I}$ is a doubly indexed matrix with components \begin{align} R_{ab}(m,n)=\begin{cases}(-1)^{p}\rho_{a}^{\frac{m+1}{2}}\rho_{b}^{\frac{n}{2}}\partial^{(m,n)}
\psi_{p}^{(0)}(w_{-a},w_{b}),&a\neq-b,\\ (-1)^{p}\rho_{a}^{\frac{m+n+1}{2}}\mathcal{E}_{m}^{n}(w_{-a}),&a=-b, \end{cases} \label{eq:Rdef} \end{align}
where
\begin{align}
\label{eq:Ejt}
\mathcal{E}_{m}^{n}(y)=\sum_{\ell=0}^{2p-2}\partial^{(m)}f_{\ell}(y)\;\partial^{(n)}y^{\ell},
\end{align}
\begin{align}\label{PsiDef} \psi_{p}^{(0)}(x,y)=\frac{1}{x-y}+\sum_{\ell=0}^{2p-2}f_{\ell}(x)y^{\ell}, \end{align}
for {any} Laurent series $f_{\ell}(x)$ for $\ell=0,\ldots ,2p-2$.
Define the doubly indexed matrix $\Delta=(\Delta_{ab}(m,n))$ by \begin{align} \Delta_{ab}(m,n)=\delta_{m,n+2p-1}\delta_{ab}.
\label {eq:Deltadef} \end{align}
Denote by
\[
\widetilde{R}=R\Delta,
\]
and the formal inverse $(I-\widetilde{R})^{-1}$ is given by
\begin{align} \label{eq:ImRinverse} \left(I-\widetilde{R}\right)^{-1}=\sum_{k\ge 0}\widetilde{R}^{\,k}. \end{align}
Define
$\chi(x)=(\chi_{a}(x;\ell))$ and
\[
o(\bm{y}_k, \mu_0)=(o_{a}( \bm{y}_k; \mu_0, \ell)),
\]
are {finite} row and column vectors indexed by $a\in\mathcal{I}$, $0\le \ell\le 2p-2$ with
\begin{align}
\label{eq:chiadef}
\chi_{a}(x;\ell)&=\rho_{a}^{-\frac{\ell}{2}}(p(x)+\widetilde{p}(x)(I-\widetilde{R})^{-1}R)_{a}(\ell), \\ \label{LittleODef}
o_{a}(\ell)&=o_{a}( \bm{y}_k, \mu_0, \ell)=\rho_{a}^{\frac{\ell}{2}}X_{a}(\ell),
\end{align} and where
\[
\widetilde{p}(x)=p(x)\Delta.
\]
$\psi_{p}(x,y)$ is defined by
\begin{align} \label{eq:psilittleN} \psi_{p}(x,y)=\psi_{p}^{(0)}(x,y)+\widetilde{p}(x)(I-\widetilde{R})^{-1}q(y). \end{align}
For each $a \in\I_{+}$ we define a vector
\[
\theta_{a}(x)=(\theta_{a}(x;\ell) ),
\]
indexed by
$0\le \ell\le 2p-2$ with components \begin{align}\label{eq:thetadef}
\theta_{a}(x;\ell) = \chi_{a}(x;\ell)+(-1)^{p }\rho_{a}^{p-1-\ell}\chi_{-a}(x;2p-2-\ell). \end{align}
Now define the following vectors of formal differential forms
\begin{align}
\label{eq:ThetaPQdef}
P(x) =p(x) \; dx^{p},
\nonumber \\
Q(y)=q(y)\; dy^{1-p},
\end{align}
with
\[
\widetilde{P}(x)=P(x)\Delta.
\]
Then with
\begin{equation}
\label{psih}
\Psi_{p} (x,y) =\psi_{p}(x,y) \;dx^{p}\; dy^{1-p},
\end{equation}
we have
\begin{align}\label{GenusgPsiDef}
\Psi_{p}(x,y)=\Psi_{p}^{(0)}(x,y)+\widetilde{P}(x)(I-\widetilde{R})^{-1}Q(y).
\end{align}
Defining
\begin{equation}
\label{thetanew}
\Theta_{a}(x;\ell) =\theta_{a}(x;\ell)\; dx^{p},
\end{equation}
and
\begin{equation}
\label{oat}
O_{a}(\bm{y}_k, \mu_0, \ell) = o_{a}(\bm{y}_k, \mu_0, \ell) \; \bm{dy}_k^\beta,
\end{equation}
for some parameter $\beta$.
\subsection{Genus $g$ Schottky case}
\label{corfug}
In this subsection we recall \cite{TW, T2} the construction and reduction relations
for $n$-point modular functions
defined on a genus $g$ Riemann surface $M$ formed in the Schottky parameterization.
All expressions here are functions of formal variables $w_{\pm a}$, $\rho_{a} \in \mathbb{C}$.
Then we recall
the genus $g$ reduction formula with universal coefficients that have a geometrical meaning and
are meromorphic on $M$.
These coefficients are
generalizations of the
elliptic Weierstrass functions \cite{L}.
For a $2g$ local coordinates
\[
\bm{w}_{2g}= (w_{-1}, w_{1}; \ldots; w_{-g},w_{g}), \]
of $2g$ points $(p_{-1}, p_1; \ldots; p_{-g}, p_g)$ on the Riemann sphere,
consider the genus zero $2g$-point function
\begin{align*}
\mathcal Z(\bm{w}_{2g}, \mu)=&\mathcal Z(w_{-1},w_{1}; \ldots ; w_{-g}, w_{g}, \mu) \\ =&\prod_{a\in\I_{+}}\rho_{a}^{\beta_a}
\mathcal Z (w_{-1}, w_{1}; \ldots;w_{-g}, w_{g}, \mu),
\end{align*}
where $\I_{+}=\{1,2,\ldots,g\}$, and $\beta_a$ are certain parameters related to $\mu$.
Let us denote
\[
\bm{z}_{+}=(z_{1},\ldots,z_{g}),
\]
\[
\bm{z}_{-}=(z_{-1}, \ldots, z_{-g}).
\]
Let $w_{a}$ for $a\in\mathcal{I}$ be $2g$
formal variables. One identify them
with the canonical Schottky parameters (for details of the Schottky construction, see \cite{TW, T2}).
One can define the genus $g$ null-point modular function as
\begin{align}
\label{GenusgPartition}
\mathcal Z=(\bm{w}_{2g},\bm{\rho}_{2g}, \mu)
=\sum_{\bm{z}_{+}} \mathcal Z (\bm{z}_{2g},\bm{w}_{2g}, \mu),
\end{align}
for
\[
(\bm{w}_{2g},\bm{\rho}_{2g})=(w_{\pm 1}, \rho_{1}; \ldots; w_{\pm g},\rho_{g}).
\]
Now
we recall
the formal reduction formulas for all
genus $g$ Schottky $n$-point functions.
One defines the genus $g$ formal $n$-point modular function for
$\bm{y}_{n}$ by
\begin{align}\label{GenusgnPoint}
\mathcal Z(\bm{y}_n, \mu) =\mathcal Z (\bm{y}_n; \bm{w}_{2g},\bm{\rho}_{2g}, \mu) = \sum_{\bm{z}_{+}}\mathcal Z (\bm{y}_n; \bm{w}_{2g}, \mu),
\end{align}
where
\[
\mathcal Z(\bm{y}_n; \bm{w}_{2g}, \mu)=\mathcal Z (\bm{y}_{n}; \bm{w}_{-1, g}, \mu).
\]
\begin{equation}
\label{eq:Z_Walpha}
\mathcal Z (\bm{y}_n, \mu) = \sum_{ {\bf z}_+ \in \bm {\alpha}_g} \mathcal Z ( \bm{y}_n; \bm{w}_{2g}, \mu),
\end{equation}
where here the sum is over a basis
${\bm{\alpha}}$.
It follows that
\begin{align}
\label{eq:Z_WalphaSum}
\mathcal Z(\bm{y}_n, \mu)=\sum_{\bm{\alpha}_{g}\in\bm{A}} \mathcal Z_{ \bm{\alpha}_g }^{(g)}(\bm{y}_n, \mu),
\end{align}
where the sum ranges over $\bm{\alpha}=(\alpha_{1},\ldots ,\alpha_{g}) \in \bm{A}$, for $\bm{A}=A^{\otimes{g}}$.
Finally, one defines corresponding formal $n$-point correlation differential forms
\begin{align}
\label{tupoy}
Z(\bm{y}_n, \mu) &=\mathcal Z(\bm{y}_n, \mu) \; \bm{dy^{\beta}}_n,
\nonumber \\
Z_{ \bm{\alpha}_{g} }(\bm{y}_n, \mu) &=\mathcal Z_{ \bm{\alpha} }(\bm{y}_n, \mu)\; \bm{dy^{\beta}}_n,
\end{align}
where
\[
\bm{dy^{\beta}}_n=\prod_{k=1}^{n} dy_{k}^{\beta_k}.
\]
In \cite{TW} they prove that
the genus $g$ $(n+1)$-point formal modular differential
$ Z (x; \bm{y}, \mu)$, for $x_{n+1}$, and
point
$p_0$, with the coordinate $y_{n+1}$, and
${\bf p}_n$ with coordinates
${\bf y}_{n}$ satisfies the recursive identity for ${\bf z}_n= (\bm{y})$
\begin{eqnarray}
\label{eq:ZhuGenusg}
Z \left(x_{n+1},
{\bf z}_n, \mu
\right) &=&
\sum_{a=1}^{g} \Theta_{a}(y_{n+1})
\; O^{W_{\bm{\alpha}}}_{a} \left(z_{n+1}; {\bf z}_n\right)
\nonumber \\
&=&
\sum_{k=1}^{n}\sum_{j\ge 0}\partial^{(0,j)} \; \Psi_{p}(y_{n+1}, y_{k}) \;
\mathcal Z
\left( {\bf x_n}, \mu_{k, j} \right)\; dy_{k}^{j}\;
\nonumber
\end{eqnarray}
Here $\partial^{(0,j)}$ is given by
\[
\partial^{(i,j)}f(x,y)=\partial_{x}^{(i)}\partial_{y}^{(j)}f(x,y),
\]
for a function $f(x,y)$, and $\partial^{(0,j)}$ denotes partial derivatives with respect to $x$ and $y_j$.
The forms
$\Psi_{p}(y_{n+1}, y_{k} )\; dy_{k}^{j}$ given by
\eqref{psih},
$\Theta_{a}(x)$ is of \eqref{thetanew}, and
$O^{ {W}_{\bm{\alpha}}}_{a}(z_{n+1}, {\bf z}_n, \mu)$ of \eqref{oat}.
\end{document} | arXiv |
Tue, 07 Jan 2020 05:36:58 GMT
1.1: Use the Language of Algebra
[ "article:topic", "license:ccby", "showtoc:yes", "transcluded:yes", "authorname:openstaxmarecek", "source[1]-math-5117" ]
https://math.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fmath.libretexts.org%2FCourses%2FMonroe_Community_College%2FMTH_104_Intermediate_Algebra%2F1%253A_Foundations%2F1.2%253A_Use_the_Language_of_Algebra
MTH 104 Intermediate Algebra
1: Foundations
Lynn Marecek
Professor (Mathematics) at Santa Ana College
Publisher: OpenStax CNX
Find Factors, Prime Factorizations, and Least Common Multiples
Use Variables and Algebraic Symbols
Simplify Expressions Using the Order of Operations
Evaluate an Expression
Identify and Combine Like Terms
Translate an English Phrase to an Algebraic Expression
By the end of this section, you will be able to:
This chapter is intended to be a brief review of concepts that will be needed in an Intermediate Algebra course. A more thorough introduction to the topics covered in this chapter can be found in the Elementary Algebra chapter, Foundations.
The numbers 2, 4, 6, 8, 10, 12 are called multiples of 2. A multiple of 2 can be written as the product of a counting number and 2.
Similarly, a multiple of 3 would be the product of a counting number and 3.
We could find the multiples of any number by continuing this process.
Counting Number
Multiples of 2 2 4 6 8 10 12 14 16 18 20 22 24
Multiples of 3 3 6 9 12 15 18 21 24 27 30 33 36
Multiples of 4 4 8 12 16 20 24 28 32 36 40 44 48
Multiples of 5 5 10 15 20 25 30 35 40 45 50 55 60
Multiples of 9 9 18 27 36 45 54 63 72 81 90 99 108
MULTIPLE OF A NUMBER
A number is a multiple of \(n\) if it is the product of a counting number and \(n\).
Another way to say that 15 is a multiple of 3 is to say that 15 is divisible by 3. That means that when we divide 3 into 15, we get a counting number. In fact, \(\mathrm{15÷3}\) is 5, so 15 is \(\mathrm{5⋅3}\).\(\mathrm{15÷3}\) is 5, so 15 is \(\mathrm{5⋅3}\).
DIVISIBLE BY A NUMBER
If a number mm is a multiple of n, then m is divisible by n.
If we were to look for patterns in the multiples of the numbers 2 through 9, we would discover the following divisibility tests:
A number is divisible by:
2 if the last digit is 0, 2, 4, 6, or 8.
3 if the sum of the digits is divisible by 3.
5 if the last digit is 5 or 0.
6 if it is divisible by both 2 and 3.
10 if it ends with 0.
Is 5,625 divisible by ⓐ 2? ⓑ 3? ⓒ 5 or 10? ⓓ 6?
ⓐ
\(\text{Is 5,625 divisible by 2?}\)
\( \begin{array}{ll} \text{Does it end in 0, 2, 4, 6 or 8?} & {\text{No.} \\ \text{5,625 is not divisible by 2.}} \end{array}\) ⓑ
\(\text{5,625 divisible by 3?}\)
\(\begin{array}{ll} {\text{What is the sum of the digits?} \\ \text{Is the sum divisible by 3?}} & {5+6+2+5=18 \\ \text{Yes.} \\ \text{5,625 is divisible by 3.}}\end{array}\) ⓒ
\(\text{Is 5,625 divisible by 5 or 10?}\)
\(\begin{array}{ll} \text{What is the last digit? It is 5.} & \text{5,625 is divisible by 5 but not by 10.} \end{array}\) ⓓ
\(\begin{array}{ll}\text{Is it divisible by both 2 and 3?} & {\text{No, 5,625 is not divisible by 2, so 5,625 is} \\ \text{not divisible by 6.}} \end{array}\)
Is 4,962 divisible by ⓐ 2? ⓑ 3? ⓒ 5? ⓓ 6? ⓔ 10?
ⓐ yes ⓑ yes ⓒ no ⓓ yes ⓔ no
ⓐ no ⓑ yes ⓒ yes ⓓ no ⓔ no
In mathematics, there are often several ways to talk about the same ideas. So far, we've seen that if m is a multiple of n, we can say that m is divisible by n. For example, since 72 is a multiple of 8, we say 72 is divisible by 8. Since 72 is a multiple of 9, we say 72 is divisible by 9. We can express this still another way.
Since \(\mathrm{8·9=72}\), we say that 8 and 9 are factors of 72. When we write \(\mathrm{72=8·9}\), we say we have factored 72.mathrm{8·9=72}\), we say that 8 and 9 are factors of 72. When we write \(\mathrm{72=8·9}\), we say we have factored 72.
Other ways to factor 72 are \(\mathrm{1·72, \; 2·36, \; 3·24, \; 4·18,}\) and \(\mathrm{6⋅12}\). The number 72 has many factors: \(\mathrm{1,2,3,4,6,8,9,12,18,24,36,}\) and \(\mathrm{72}\).\(mathrm{1·72, \; 2·36, \; 3·24, \; 4·18,}\) and \(\mathrm{6⋅12}\). The number 72 has many factors: \(\mathrm{1,2,3,4,6,8,9,12,18,24,36,}\) and \(\mathrm{72}\).
If \(\mathrm{a·b=m}\), then mathrm{a·b=m}\), then a and b are factors of m.
Some numbers, such as 72, have many factors. Other numbers have only two factors. A prime number is a counting number greater than 1 whose only factors are 1 and itself.
PRIME NUMBER AND COMPOSITE NUMBER
A prime number is a counting number greater than 1 whose only factors are 1 and the number itself.
A composite number is a counting number that is not prime. A composite number has factors other than 1 and the number itself.
The counting numbers from 2 to 20 are listed in the table with their factors. Make sure to agree with the "prime" or "composite" label for each!
The prime numbers less than 20 are 2, 3, 5, 7, 11, 13, 17, and 19. Notice that the only even prime number is 2.
A composite number can be written as a unique product of primes. This is called the prime factorization of the number. Finding the prime factorization of a composite number will be useful in many topics in this course.
PRIME FACTORIZATION
The prime factorization of a number is the product of prime numbers that equals the number.
To find the prime factorization of a composite number, find any two factors of the number and use them to create two branches. If a factor is prime, that branch is complete. Circle that prime. Otherwise it is easy to lose track of the prime numbers.
If the factor is not prime, find two factors of the number and continue the process. Once all the branches have circled primes at the end, the factorization is complete. The composite number can now be written as a product of prime numbers.
example \(\PageIndex{4}\): How to Find the Prime Factorization of a Composite Number
Factor 48.
We say \(\mathrm{2⋅2⋅2⋅2⋅3}\) is the prime factorization of 48. We generally write the primes in ascending order. Be sure to multiply the factors to verify your answer.\(\mathrm{2⋅2⋅2⋅2⋅3}\) is the prime factorization of 48. We generally write the primes in ascending order. Be sure to multiply the factors to verify your answer.
If we first factored 48 in a different way, for example as \(\mathrm{6·8}\), the result would still be the same. Finish the prime factorization and verify this for yourself.
Find the prime factorization of \(\mathrm{80}\).
\(\mathrm{2⋅2⋅2⋅2⋅5}\)
\(\mathrm{2⋅2⋅3⋅5}\)
FIND THE PRIME FACTORIZATION OF A COMPOSITE NUMBER
Find two factors whose product is the given number, and use these numbers to create two branches.
If a factor is prime, that branch is complete. Circle the prime, like a leaf on the tree.
If a factor is not prime, write it as the product of two factors and continue the process.
Write the composite number as the product of all the circled primes.
One of the reasons we look at primes is to use these techniques to find the least common multiple of two numbers. This will be useful when we add and subtract fractions with different denominators.
LEAST COMMON MULTIPLE
The least common multiple (LCM) of two numbers is the smallest number that is a multiple of both numbers.
To find the least common multiple of two numbers we will use the Prime Factors Method. Let's find the LCM of 12 and 18 using their prime factors.
example \(\PageIndex{7}\): How to Find the Least Common Multiple Using the Prime Factors Method
Find the least common multiple (LCM) of 12 and 18 using the prime factors method.
Notice that the prime factors of 12 \(\mathrm{(2·2·3)}\) and the prime factors of 18 \(\mathrm{(2⋅3⋅3)}\) are included in the LCM \(\mathrm{(2·2·3·3)}\). So 36 is the least common multiple of 12 and 18.
By matching up the common primes, each common prime factor is used only once. This way you are sure that 36 is the least common multiple.
Find the LCM of 9 and 12 using the Prime Factors Method.
Find the LCM of 18 and 24 using the Prime Factors Method.
FIND THE LEAST COMMON MULTIPLE USING THE PRIME FACTORS METHOD
Write each number as a product of primes.
List the primes of each number. Match primes vertically when possible.
Bring down the columns.
Multiply the factors.
In algebra, we use a letter of the alphabet to represent a number whose value may change. We call this a variable and letters commonly used for variables are \(x,y,a,b,c.\)
A variable is a letter that represents a number whose value may change.
A number whose value always remains the same is called a constant.
A constant is a number whose value always stays the same.
To write algebraically, we need some operation symbols as well as numbers and variables. There are several types of symbols we will be using. There are four basic arithmetic operations: addition, subtraction, multiplication, and division. We'll list the symbols used to indicate these operations below.
OPERATION SYMBOLS
Say:
The result is…
Addition \(a+b\) \(a\) plus \(b\) the sum of \(a\) and \(b\)
Subtraction \(a−b\) \(a\) minus \(b\) the difference of \(a\) and \(b\)
Multiplication \(a⋅b,ab,(a)(b),(a)b,a(b)\) \(a\) times \(b\) the product of \(a\) and \(b\)
Division \(a÷b,\space a/b,\space\frac{a}{b},\space b \overline{\smash{)}a}\) \(a\) divided by \(b\) the quotient of \(a\) and \(b\);
\(a\) is called the dividend, and \(b\) is called the divisor
When two quantities have the same value, we say they are equal and connect them with an equal sign.
EQUALITY SYMBOL
\(a=b\) is read "a is equal to b."
The symbol "\(=\)" is called the equal sign.
On the number line, the numbers get larger as they go from left to right. The number line can be used to explain the symbols "\(<\)" and "\(>\)".
The expressions \(a<b\) or \(a>b\) can be read from left to right or right to left, though in English we usually read from left to right. In general,
\[a<b \text{ is equivalent to }b>a. \text{For example, } 7<11 \text{ is equivalent to }11>7.\]
\[a>b \text{ is equivalent to }b<a. \text{For example, } 17>4 \text{ is equivalent to }4<17.\]
INEQUALITY SYMBOLS
\(a\neq b\) a is not equal to b.
\(a<b\) a is less than b.
\(a\leq b\) a is less than or equal to b.
\(a>b\) a is greater than b.
\(a\geq b\) a is greater than or equal to b.
Grouping symbols in algebra are much like the commas, colons, and other punctuation marks in English. They help identify an expression, which can be made up of number, a variable, or a combination of numbers and variables using operation symbols. We will introduce three types of grouping symbols now.
GROUPING SYMBOLS
\[\begin{array}{lc} \text{Parentheses} & \mathrm{()} \\ \text{Brackets} & \mathrm{[]} \\ \text{Braces} & \mathrm{ \{ \} } \end{array}\]
Here are some examples of expressions that include grouping symbols. We will simplify expressions like these later in this section.
\[\mathrm{8(14−8) \; \; \; \; \; \; \; \; 21−3[2+4(9−8)] \; \; \; \; \; \; \; \; 24÷ \{13−2[1(6−5)+4]\}}\]
What is the difference in English between a phrase and a sentence? A phrase expresses a single thought that is incomplete by itself, but a sentence makes a complete statement. A sentence has a subject and a verb. In algebra, we have expressions and equations.
An expression is a number, a variable, or a combination of numbers and variables using operation symbols.
\[\begin{array}{lll} \textbf{Expression} & \textbf{Words} & \textbf{English Phrase} \\ \mathrm{3+5} & \text{3 plus 5} & \text{the sum of three and five} \\ \mathrm{n−1} & n\text{ minus one} & \text{the difference of } n \text{ and one} \\ \mathrm{6·7} & \text{6 times 7} & \text{the product of six and seven} \\ \frac{x}{y} & x \text{ divided by }y & \text{the quotient of }x \text{ and }y \end{array} \]
Notice that the English phrases do not form a complete sentence because the phrase does not have a verb.
An equation is two expressions linked by an equal sign. When you read the words the symbols represent in an equation, you have a complete sentence in English. The equal sign gives the verb.
An equation is two expressions connected by an equal sign.
\[\begin{array}{ll} \textbf{Equation} & \textbf{English Sentence} \\ 3+5=8 & \text{The sum of three and five is equal to eight.} \\ n−1=14 & n \text{ minus one equals fourteen.} \\ 6·7=42 & \text{The product of six and seven is equal to forty-two.} \\ x=53 & x \text{ is equal to fifty-three.} \\ y+9=2y−3 & y \text{ plus nine is equal to two } y \text{ minus three.} \end{array}\]
Suppose we need to multiply 2 nine times. We could write this as \(\mathrm{2·2·2·2·2·2·2·2·2}\). This is tedious and it can be hard to keep track of all those 2s, so we use exponents. We write \(\mathrm{2·2·2}\) as \(\mathrm{2^3}\) and \(\mathrm{2·2·2·2·2·2·2·2·2}\) as \(\mathrm{2^9}\). In expressions such as \(\mathrm{2^3}\), the 2 is called the base and the 3 is called the exponent. The exponent tells us how many times we need to multiply the base.
EXPONENTIAL NOTATION
We say \(\mathrm{2^3}\) is in exponential notation and \(\mathrm{2·2·2}\) is in expanded notation.
\(a^n\) means multiply a by itself, n times.
The expression \(a^n\) is read a to the \(n^{th}\) power.
While we read \(a^n\) as \("a\) to the \(n^{th}\) power", we usually read:
\[\begin{array}{cc} a^2 & "a \text{ squared}" \\ a^3 & "a \text{ cubed}" \end{array}\]
We'll see later why \(a^2\) and \(a^3\) have special names.
Table shows how we read some expressions with exponents.
In Words
72 7 to the second power or 7 squared
53 5 to the third power or 5 cubed
94 9 to the fourth power
125 12 to the fifth power
To simplify an expression means to do all the math possible. For example, to simplify \(\mathrm{4·2+1}\) we would first multiply \(\mathrm{4⋅2}\) to get 8 and then add the 1 to get 9. A good habit to develop is to work down the page, writing each step of the process below the previous step. The example just described would look like this:
\[ \mathrm{ 4⋅2+1} \\ \mathrm{8+1} \\ \mathrm{9}\]
By not using an equal sign when you simplify an expression, you may avoid confusing expressions with equations.
SIMPLIFY AN EXPRESSION
To simplify an expression, do all operations in the expression.
We've introduced most of the symbols and notation used in algebra, but now we need to clarify the order of operations. Otherwise, expressions may have different meanings, and they may result in different values.
For example, consider the expression \(\mathrm{4+3⋅7}\). Some students simplify this getting 49, by adding \(\mathrm{4+3}\) and then multiplying that result by 7. Others get 25, by multiplying \(\mathrm{3·7}\) first and then adding 4.
The same expression should give the same result. So mathematicians established some guidelines that are called the order of operations.
USE THE ORDER OF OPERATIONS.
Parentheses and Other Grouping Symbols
Simplify all expressions inside the parentheses or other grouping symbols, working on the innermost parentheses first.
Exponents
Simplify all expressions with exponents.
Perform all multiplication and division in order from left to right. These operations have equal priority.
Perform all addition and subtraction in order from left to right. These operations have equal priority.
Students often ask, "How will I remember the order?" Here is a way to help you remember: Take the first letter of each key word and substitute the silly phrase "Please Excuse My Dear Aunt Sally".
\[\begin{array}{ll} \text{Parentheses} & \text{Please} \\ \text{Exponents} & \text{Excuse} \\ \text{Multiplication Division} & \text{My Dear} \\ \text{Addition Subtraction} & \text{Aunt Sally} \end{array}\]
It's good that "My Dear" goes together, as this reminds us that multiplication and division have equal priority. We do not always do multiplication before division or always do division before multiplication. We do them in order from left to right.
Similarly, "Aunt Sally" goes together and so reminds us that addition and subtraction also have equal priority and we do them in order from left to right.
example \(\PageIndex{10}\)
Simplify: \(\mathrm{18÷6+4(5−2)}\).
Parentheses? Yes, subtract first.
Exponents? No.
Multiplication or division? Yes.
Divide first because we multiply and divide left to right.
Any other multiplication or division? Yes.
Multiply.
Any other multiplication of division? No.
Any addition or subtraction? Yes.
Add.
Simplify: \(\mathrm{30÷5+10(3−2).}\)
Simplify: \(\mathrm{70÷10+4(6−2).}\)
When there are multiple grouping symbols, we simplify the innermost parentheses first and work outward.
Simplify: \(\mathrm{5+23+3[6−3(4−2)].}\)
Are there any parentheses (or other grouping symbols)? Yes.
Focus on the parentheses that are inside the brackets. Subtract.
Continue inside the brackets and multiply.
Continue inside the brackets and subtract.
The expression inside the brackets requires no further simplification.
Are there any exponents? Yes. Simplify exponents.
Is there any multiplication or division? Yes.
Is there any addition of subtraction? Yes.
Simplify: \(\mathrm{9+53−[4(9+3)].}\)
Simplify: \(\mathrm{72−2[4(5+1)].}\)
In the last few examples, we simplified expressions using the order of operations. Now we'll evaluate some expressions—again following the order of operations. To evaluate an expression means to find the value of the expression when the variable is replaced by a given number.
To evaluate an expression means to find the value of the expression when the variable is replaced by a given number.
To evaluate an expression, substitute that number for the variable in the expression and then simplify the expression.
Evaluate when \(x=4\): ⓐ \(x^2\) ⓑ \(3^x\) ⓒ \(2x^2+3x+8\).
Use definition of exponent.
Simplify.
ⓒ
Follow the order of operations.
Evaluate when \(x=3\), ⓐ \(x^2\) ⓑ \(4^x\) ⓒ \(3x^2+4x+1\).
ⓐ 9 ⓑ 64 ⓒ40
Evaluate when \(x=6\), ⓐ \(x^3\) ⓑ \(2^x\) ⓒ \(6x^2−4x−7\).
ⓐ 216 ⓑ 64 ⓒ 185
Algebraic expressions are made up of terms. A term is a constant, or the product of a constant and one or more variables.
A term is a constant or the product of a constant and one or more variables.
Examples of terms are \(7,y,5x^2,9a,\) and \(b^5\).
The constant that multiplies the variable is called the coefficient.
The coefficient of a term is the constant that multiplies the variable in a term.
Think of the coefficient as the number in front of the variable. The coefficient of the term \(3x\) is 3. When we write \(x\), the coefficient is 1, since \(x=1⋅x\).
Some terms share common traits. When two terms are constants or have the same variable and exponent, we say they are like terms.
Look at the following 6 terms. Which ones seem to have traits in common?
\[5x \; \; \; 7 \; \; \; n^2 \; \; \; 4 \; \; \; 3x \; \; \; 9n^2\]
We say,
\(7\) and \(4\) are like terms.
\(5x\) and \(3x\) are like terms.
\(n^2\) and \(9n^2\) are like terms.
LIKE TERMS
Terms that are either constants or have the same variables raised to the same powers are called like terms.
If there are like terms in an expression, you can simplify the expression by combining the like terms. We add the coefficients and keep the same variable.
\[\begin{array}{lc} \text{Simplify.} & 4x+7x+x \\ \text{Add the coefficients.} & 12x \end{array}\]
ExAMPLE \(\PageIndex{19}\): How To Combine Like Terms
Simplify: \(2x^2+3x+7+x^2+4x+5\).
Simplify: \(3x^2+7x+9+7x^2+9x+8\).
\(10x^2+16x+17\)
Simplify: \(4y^2+5y+2+8y2+4y+5.\)
\(12y^2+9y+7\)
COMBINE LIKE TERMS.
Identify like terms.
Rearrange the expression so like terms are together.
Add or subtract the coefficients and keep the same variable for each group of like terms.
We listed many operation symbols that are used in algebra. Now, we will use them to translate English phrases into algebraic expressions. The symbols and variables we've talked about will help us do that. Table summarizes them.
Addition a plus b
the sum of a and b
a increased by b
b more than a
the total of a and b
b added to a
\(a+b\)
Subtraction a minus b
the difference of a and b
a decreased by b
b less than a
b subtracted from a
\(a−b\)
Multiplication a times b
the product of aa and bb
twice a
\(a·b,ab,a(b),(a)(b)\)
\(2a\)
Division a divided by b
the quotient of a and b
the ratio of a and b
b divided into a
\(a÷b,a/b,\frac{a}{b},b \overline{\smash{)}a}\)
Look closely at these phrases using the four operations:
Each phrase tells us to operate on two numbers. Look for the words of and and to find the numbers.
Translate each English phrase into an algebraic expression:
ⓐ the difference of \(14x\) and 9
ⓑ the quotient of \(8y^2\) and 3
ⓒ twelve more than \(y\)
ⓓ seven less than \(49x^2\)
ⓐ The key word is difference, which tells us the operation is subtraction. Look for the words of and and to find the numbers to subtract.
ⓑ The key word is quotient, which tells us the operation is division.
ⓒ The key words are more than. They tell us the operation is addition. More than means "added to."
\[\text{twelve more than }y \\ \text{twelve added to }y \\ y+12\]
ⓓ The key words are less than. They tell us to subtract. Less than means "subtracted from."
\[\text{seven less than }49x^2 \\ \text{seven subtracted from }49x^2 \\ 49x^2−7\]
Exercise \(\PageIndex{23}\)
Translate the English phrase into an algebraic expression:
ⓐ the difference of \(14x^2\) and 13
ⓑ the quotient of \(12x\) and 2
ⓒ 13 more than \(z\)
ⓓ 18 less than \(8x\)
ⓐ \(14x^2−13\) ⓑ \(12x÷2\)
ⓒ \(z+13\) ⓓ \(8x−18\)
ⓐ the sum of \(17y^2\) and 19
ⓑ the product of 7 and y
ⓒ Eleven more than \(x\)
ⓓ Fourteen less than 11a
ⓐ \(17y^2+19\) ⓑ \(7y\)
ⓒ \(x+11\) ⓓ \(11a−14\)
We look carefully at the words to help us distinguish between multiplying a sum and adding a product.
ⓐ eight times the sum of x and y
ⓑ the sum of eight times x and y
There are two operation words—times tells us to multiply and sum tells us to add.
ⓐ Because we are multiplying 8 times the sum, we need parentheses around the sum of x and y, \((x+y)\). This forces us to determine the sum first. (Remember the order of operations.)
\[\text{eight times the sum of }x \text{ and }y \\ 8(x+y)\]
ⓑ To take a sum, we look for the words of and and to see what is being added. Here we are taking the sum of eight times x and y.
ⓐ four times the sum of p and q
ⓑ the sum of four times p and q
ⓐ \(4(p+q)\) ⓑ \(4p+q\)
ⓐ the difference of two times x and 8
ⓑ two times the difference of x and 8
ⓐ \(2x−8\) ⓑ\(2(x−8)\)
Later in this course, we'll apply our skills in algebra to solving applications. The first step will be to translate an English phrase to an algebraic expression. We'll see how to do this in the next two examples.
The length of a rectangle is 14 less than the width. Let w represent the width of the rectangle. Write an expression for the length of the rectangle.
\[\begin{array}{lc} \text{Write a phrase about the length of the rectangle.} & \text{14 less than the width} \\ \text{Substitute }w \text{ for "the width."} & w \\ \text{Rewrite less than as subtracted from.} & \text{14 subtracted from } w \\ \text{Translate the phrase into algebra.} & w−14 \end{array}\]
The length of a rectangle is 7 less than the width. Let w represent the width of the rectangle. Write an expression for the length of the rectangle.
\(w−7\)
The width of a rectangle is 6 less than the length. Let l represent the length of the rectangle. Write an expression for the width of the rectangle.
\(l−6\)
The expressions in the next example will be used in the typical coin mixture problems we will see soon.
June has dimes and quarters in her purse. The number of dimes is seven less than four times the number of quarters. Let q represent the number of quarters. Write an expression for the number of dimes.
\[\begin{array}{lc} \text{Write a phrase about the number of dimes.} & \text{7 less than 4 times }q \\ \text{Translate 4 times }q. & \text{7 less than 4}q \\ \text{Translate the phrase into algebra.} & 4q−7 \end{array}\]
Geoffrey has dimes and quarters in his pocket. The number of dimes is eight less than four times the number of quarters. Let q represent the number of quarters. Write an expression for the number of dimes.
\(4q−8\)
Lauren has dimes and nickels in her purse. The number of dimes is three more than seven times the number of nickels. Let n represent the number of nickels. Write an expression for the number of dimes.
\(7n+3\)
How to find the prime factorization of a composite number.
If a factor is prime, that branch is complete. Circle the prime, like a bud on the tree.
How To Find the least common multiple using the prime factors method.
\(a=b\) is read "a is equal to b." The symbol "=" is called the equal sign.
\(a≠b\) a is not equal to b.
\(a≤b\) a is less than or equal to b.
\(a≥b\) a is greater than or equal to b.
Grouping Symbols \(\begin{array}{lc} \text{Parentheses} & \mathrm{()} \\ \text{Brackets} & \mathrm{[]} \\ \text{Braces} & \mathrm{ \{ \} } \end{array}\)
Exponential Notation \(a^n\) means multiply a by itself, n times. The expression an is read a to the \(n^{th}\) power.
How to use the order of operations.
How to combine like terms.
the sum of aa and b
b added to a \(a+b\)
b subtracted from a \(a−b\)
a divided by b
b divided into a \(a÷b,a/b,\frac{a}{b},b \overline{\smash{)}a}\)
Identify Multiples and Factors
In the following exercises, use the divisibility tests to determine whether each number is divisible by 2, by 3, by 5, by 6, and by 10.utoNum" template (preferably at the end) to the page.
[Show Solution]
Divisible by 2, 3, 6
Divisible by 2
Divisible by 3, 5
Find Prime Factorizations and Least Common Multiples
In the following exercises, find the prime factorization.
\(2⋅43\)
\(5⋅7⋅13\)
\(2⋅2⋅2⋅2⋅3⋅3⋅3\)
In the following exercises, find the least common multiple of each pair of numbers using the prime factors method.
In the following exercises, simplify each expression.
\(2^3−12÷(9−5)\)
\(3^2−18÷(11−5)\)
\(2+8(6+1)\)
\(20÷4+6(5−1)\)
\(3(1+9⋅6)−4^2\)
\(2[1+3(10−2)]\)
\(5[2+4(3−2)]\)
\(8+2[7−2(5−3)]−3^2\)
\(10+3[6−2(4−2)]−2^4\)
In the following exercises, evaluate the following expressions.
When \(x=2\),
ⓐ \(x^6\)
ⓑ \(4^x\)
ⓒ \(2x^2+3x−7\)
ⓐ 64 ⓑ 16 ⓒ 7
ⓑ \(5x\)
ⓒ \(3x^2−4x−8\)
When \(x=4,y=1\)
\(x^2+3xy−7y^2\)
\(6x^2+3xy−9y^2\)
When \(x=10,y=7\)
\((x−y)^2\)
When \(a=3,b=8\)
\(a^2+b^2\)
Simplify Expressions by Combining Like Terms
In the following exercises, simplify the following expressions by combining like terms.
\(7x+2+3x+4\)
\(10x+6\)
\(8y+5+2y−4\)
\(10a+7+5a−2+7a−4\)
\(22a+1\)
\(7c+4+6c−3+9c−1\)
\(3x^2+12x+11+14x^2+8x+5\)
\(17x^2+20x+16\)17x^2+20x+16\)
\(5b^2+9b+10+2b^2+3b−4\)
In the following exercises, translate the phrases into algebraic expressions.
ⓐ the difference of \(5x^2\) and \(6xy\)
ⓑ the quotient of \(6y^2\) and \(5x\)
ⓒ Twenty-one more than \(y^2\)
ⓓ \(6x\) less than \(81x^2\)
ⓐ \(5x^2−6xy\) ⓑ \(\frac{6y^2}{5x}\)
ⓒ \(y^2+21\) ⓓ \(81x^2−6x\)
ⓐ the difference of \(17x^2\) and \(17x^2\) and \(5xy\)
ⓒ Eighteen more than \(a^2\);
ⓓ\(11b\) less than \(100b^2\)
ⓐ the sum of \(4ab^2\) and \(3a^2b\)
ⓑ the product of \(4y^2\) and \(5x\)
ⓒ Fifteen more than \(m\)
ⓓ \(9x\) less than \(121x^2\)
ⓐ \(4ab^2+3a^2b\) ⓑ \(20xy^2\)
ⓒ \(m+15\) ⓓ \(121x^2−9x\) \(9x<121x^2\)
ⓐ the sum of \(3x^2y\) and \(7xy^2\)
ⓑ the product of \(6xy^2\) and \(4z\)
ⓒ Twelve more than \(3x^2\)
ⓓ \(7x^2\) less than \(63x^3\)
ⓐ eight times the difference of \(y\) and nine
ⓑ the difference of eight times \(y\) and 9
ⓐ \(8(y−9)\) ⓑ \(8y−9\)
ⓐ seven times the difference of \(y\) and one
ⓑ the difference of seven times \(y\) and 1
ⓐ five times the sum of \(3x\) and \(y\)
ⓑ the sum of five times \(3x\) and \(y\)
ⓐ \(5(3x+y)\) ⓑ \(15x+y\)
ⓐ eleven times the sum of \(4x2\) and \(5x\)
ⓑ the sum of eleven times \(4x^2\) and \(5x\)
Eric has rock and country songs on his playlist. The number of rock songs is 14 more than twice the number of country songs. Let c represent the number of country songs. Write an expression for the number of rock songs.
\(14>2c\)
The number of women in a Statistics class is 8 more than twice the number of men. Let \(m\) represent the number of men. Write an expression for the number of women.
Greg has nickels and pennies in his pocket. The number of pennies is seven less than three the number of nickels. Let n represent the number of nickels. Write an expression for the number of pennies.
\(3n-7\)
Jeannette has \($5\) and \($10\) bills in her wallet. The number of fives is three more than six times the number of tens. Let \(t\) represent the number of tens. Write an expression for the number of fives.
Explain in your own words how to find the prime factorization of a composite number.
Answers will vary.
Why is it important to use the order of operations to simplify an expression?
Explain how you identify the like terms in the expression \(8a^2+4a+9−a^2−1.\)
Explain the difference between the phrases "4 times the sum of x and y" and "the sum of 4 times x and y".
ⓐ Use this checklist to evaluate your mastery of the objectives of this section.
ⓑ If most of your checks were:
…confidently. Congratulations! You have achieved the objectives in this section. Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific.
…with some help. This must be addressed quickly because topics you do not master become potholes in your road to success. In math every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved?
…no - I don't get it! This is a warning sign and you must not ignore it. You should get help right away or you will quickly be overwhelmed. See your instructor as soon as you can to discuss your situation. Together you can come up with a plan to get you the help you need.
composite number
A composite number is a counting number that is not prime. It has factors other than 1 and the number itself.
If a number m is a multiple of n, then m is divisible by n.
To evaluate an expression means to find the value of the expression when the variables are replaced by a given number.
If \(a·b=m\), then a and b are factors of m.
A number is a multiple of n if it is the product of a counting number and n.
The order of operations are established guidelines for simplifying an expression.
To simplify an expression means to do all the math possible.
A term is a constant, or the product of a constant and one or more variables.
1.1E: Exercises
Lynn Marecek via OpenStax
source[1]-math-5117 | CommonCrawl |
Calculators Topics Go Premium About Snapxam
ENG • ESP
Processing image... Tap to take a pic of the problem
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial.
Solved Exercises
$\left(6x-5y\right)^2$ 1d ago
$\left(2x+3y\right)^2$ 1d ago
$\sqrt{y^2+1}dx=xydy$ 1d ago
$\left(2i+7j\right)^2$ 1d ago
$\left(6z^2-5w^3\right)^2$ 1d ago
$\left(2a+3b\right)^2$ 1d ago
$\left(m-n\right)^2$ 1d ago
Prove $\frac{1+\sin\left(x\right)}{\cos\left(x\right)}+\frac{\cos\left(x\right)}{1+\sin\left(x\right)}=2\sec\left(x\right)$ 1d ago
$\left(6x+y\right)^2$ 1d ago
$\int\left(x^5\left(3-x^2\right)^7\right)dx$ 1d ago
Struggling with math?
Access detailed step by step solutions to millions of problems, growing every day!
© 2018-2020 Snapxam, Inc. About Us Privacy Terms Contact
Calculators Topics Go Premium | CommonCrawl |
How can I prove that the XOR problem for dimension d is not lineary seperable? How to relate to an even d and an odd d?
These equations obligate wi > 0, for each i.
Now lets take the last equation.
This equation force wi<=0 for all i.
wi > 0, for each i.
Cannot be solved because. On the odd - d case, we'll have to consider the all of the equations with 1 zero (d equations), and it will get to the same contradiction.
But - i'm not sure it's the good-practice way.
Your answer for 2 dimensions can be true even for $d>2$.
You can't have $w1, w2$ s.t.
If $XOR_d(x_1,x_2,\ldots,x_d)$ is the (linear) XOR function on $d$ variables (defined as 1 iff an odd number of the inputs are 1), then define the two functions $$ f(x_1)=XOR_d(x_1,0,0,\ldots,0)=w_1x_1+a\\ g(x_1)=XOR_d(x_1,0,0,\ldots,0,1)=w_1x_1+b $$ But $f$ is (strictly) increasing and $g$ is (strictly) decreasing, which is impossible.
Not the answer you're looking for? Browse other questions tagged systems-of-equations neural-networks theorem-provers or ask your own question.
how to find the input for this optimization problem?
How to solve simultaneous logical equations?
Can neural networks figure out some unknown transform?
How to translate this to algebraic language? | CommonCrawl |
\begin{document}
\baselineskip=15pt
\author{Marco Antonio Armenta} \address{CIMAT A. C., Guanajuato, M\'exico.} \address{IMAG, Univ Montpellier, CNRS, Montpellier, France.} \email{[email protected]}
\keywords{Batalin-Vilkovisky algebras, Hochschild cohomology}
\title[Batalin-Vilkovisky structure]{Batalin-Vilkovisky structure on Hochschild cohomology with coefficients in the dual algebra} \date{} \maketitle
\begin{abstract} We prove that Hochschild cohomology with coefficients in $A^*=Hom_k(A,k)$ together with an $A$-structural map $\psi:A^* \otimes_A A^* \to A^*$ is a Batalin-Vilkovisky algebra. This applies to symmetric, Frobenius and monomial path algebras. \end{abstract}
\section{Introduction}
Let $A$ be an associative unital algebra projective over a commutative ring $k$. The Hochschild cohomology $k$-modules of $A$ with coefficients in an $A$-bimodule $M$, \[ H^\bullet(A,M)=\bigoplus_{n\geq0} H^n(A,M) \] have been introduced by Hochschild \cite{Hochschild} and extensively studied since then. Operations on cohomology have been defined, such as the cup product and the Gerstenhaber bracket, making it into a Gerstenhaber algebra \cite{Gerstenhaber}. Tradler showed \cite{Tradler} that for symmetric algebras this Gerstenhaber algebra structure on cohomology comes from a Batalin-Vilkovisky operator (BV-operator) and Menichi extended the result \cite{Menichi}. As Tradler mentions, it is important to determine other families of algebras where this property holds. Lambre-Zhou-Zimmermann proved that this is the case for Frobenius algebras with semisimple Nakayama automorphism \cite{Lambre}. Independently, Volkov proved with other methods that this holds for Frobenius algebras in which the Nakayama automorphism has finite order and the characteristic of the field $k$ does not divide it \cite{Volkov}. It has also been shown that Calabi-Yau algebras admit the existence of a BV-operator \cite{Ginzburg}, and that this BV-structure on its cohomology is isomorphic to the one of the cohomology of the Koszul dual, for a Koszul Calabi-Yau algebra \cite{Chen}. More generally, for algebras with duality, see \cite{Lambre}, a BV-structure is equivalent to a Tamarkin-Tsygan calculus or a differential calculus \cite{Lambre}. The proofs of \cite{Ginzburg}, \cite{Lambre} and \cite{Tradler} have in common the use of Connes' differential \cite{Connes} on homology to define the BV-operator on cohomology.
We start by giving an interpretation of Connes' differential in Hochschild cohomology with coefficients in the $A$-bimodule $A^*=Hom_k(A,k)$. The use of $A^*$ as bimodule of coefficients replaces the inner product which is in force for Frobenius algebras \cite{Lambre}, \cite{Tradler} as it is shown in Lemma 2.1 and Corollary 4.1. For symmetric algebras this induced BV-structure is isomorphic to the one given by Tradler in \cite{Tradler}. In the case of monomial path algebras we give a description of the $A$-bimodule structure of $A^*$ that allows us to construct an $A$-structural map on $A^*$.
To the knowledge of the author there is no other $BV$-operator entirely independent of Connes' differential.
\section{Connes' differential}
\textit{Connes' differential} is the map $B:HH_n(A) \to HH_{n+1}(A)$ that makes the Hochschild theory of an algebra into a differential calculus \cite{Tamarkin}. It is given by \[
B([a_0 \otimes \cdots \otimes a_n]) = \left[ \sum_{i=0}^n (-1)^{ni} 1 \otimes a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} \right]. \] For an $A$-bimodule $M$ the \textit{dual} $A$-bimodule is denoted $M^*=Hom_k(M,k)$. We consider the canonical $A$-bimodule structure on $M^*$, that is $(afb)(x)=f(bxa)$ for all $a,b\in A$, all $f\in M^*$ and all $x\in M$. Let \[ \bar{B}: H^{n+1}(A,A^*) \to H^{n}(A,A^*) \] given by \[
\bar{B}([f])([a_1 \otimes \cdots \otimes a_n])(a_0) := \sum_{i=0}^{n} (-1)^{ni} f(a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1}) (1). \] It is straightforward to verify that it is well-defined. Let \[ \mathfrak{C}: H^n(A,M^*) \to H_n(A,M)^* \] be the morphism \[
\mathfrak{C}([f])([x \otimes a_1 \otimes \cdots \otimes a_n])=f(a_1 \otimes \cdots \otimes a_n)(x), \] for all $a_i \in A$, for $i=1,\cdots,n$, all $x\in M$ and all $[f]\in H^{n+1}(A,M^*)$, see \cite{Cartan}. The evaluation map $ev:H_n(A,M) \to H_n(A,M)^{**}$ can be composed with the $k$-dual of $\mathfrak{C}$ to get a morphism \[ \varphi: H_n(A,M) \to H^n(A,M^*)^* \] which is given by \[
\varphi([x \otimes a_1 \otimes \cdots \otimes a_n])([f]) = f(a_1 \otimes \cdots \otimes a_n)(x). \] For $M=A$ we obtain a morphism $\varphi: HH_n(A) \to H^n(A,A^*)^*$. The proof of the following lemma is straightforward.
\begin{lemm}
Let $k$ be a commutative ring and let $A$ be an associative and unital $k$-algebra. The following diagram is commutative
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2em,column sep=2em]
{
HH_n(A) & HH_{n+1}(A) \\
H^n(A,A^*)^* & H^{n+1}(A,A^*)^*.\\
};
\path[-stealth]
(m-1-1) edge node [above] {$B$} (m-1-2)
(m-2-1) edge node [above] {$\bar{B}^*$} (m-2-2)
(m-1-1) edge node [right] {$\varphi$} (m-2-1)
(m-1-2) edge node [right] {$\varphi$} (m-2-2);
\end{tikzpicture}
\]
If $k$ is a field then $\varphi$ is a monomorphism. If $k$ is a field and $HH_n(A)$ is finite dimensional then $\varphi: HH_n(A) \to H^n(A,A^*)^*$ is an isomorphism. \end{lemm} \iffalse \begin{proof} Let $[a_0 \otimes \cdots \otimes a_n] \in HH_n(A)$. For $[f] \in H^{n+1}(A,A^*)$ we have \[
\begin{array}{l}
\varphi(b([a_0 \otimes \cdots a_n])) ([f]) \\
= \varphi \left( \left[ \displaystyle\sum_{i=0}^n (-1)^{in} 1 \otimes a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} \right] \right) ([f]) \\
= \displaystyle\sum_{i=0}^n (-1)^{in} \varphi ( 1 \otimes a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} ) ([f]) \\
= \displaystyle\sum_{i=0}^n (-1)^{in} f(a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} )(1) \\
= \bar{b}([f])(a_1 \otimes \cdots \otimes a_n)(a_0) \\
= \varphi([a_0 \otimes \cdots \otimes a_n])(\bar{b}([f]))\\
= \bar{b}^* \big( \varphi([a_0 \otimes \cdots a_n]) \big)([f]). \\
\end{array} \] If $k$ is a field, then the evaluation map is a monomorphism and $\mathfrak{C}$ is an isomorphism \cite{Cartan}, hence $\varphi$ is a monomorphism. If in addition $HH_n(A)$ is finite dimensional over the field $k$, the evaluation map on $HH_n(A)$ is an isomorphism and then so does $\varphi$. \end{proof} \fi \section{Batalin-Vilkovisky structure}
A \textit{Gerstenhaber} algebra is a triple $\left( \mathcal{H}^\bullet, \cup, [\ ,\ ] \right)$ such that $\mathcal{H}^\bullet$ is a graded $k$-module, $\cup:\mathcal{H}^n \otimes \mathcal{H}^m \to \mathcal{H}^{n+m}$ is a graded commutative associative product and $[\ ,\ ]:\mathcal{H}^n \otimes \mathcal{H}^m \to \mathcal{H}^{n+m-1}$ is a graded Lie bracket such that it is anti-symmetric $[f,g] = (-1)^{(|f|-1)(|g|-1)} [g,f]$, it satisfies the Jacobi identity
\[
[f,[g,h]] = [[f,g],h] + (-1)^{(|f|-1)(|g|-1)} [g,[f,h]]
\] as well as the Poisson identity
\[
[f,g \cup h] = [f,g] \cup h + (-1)^{(|f|-1)|g|} g \cup [f,h],
\]
for all homogeneus elements $f,g,h$ of $\mathcal{H}^\bullet$. We denote by $|f|$ the degree of an homogeneous element $f\in\mathcal{H}^\bullet$. A \textit{Batalin-Vilkovisky} algebra (\textit{BV-algebra}) is a Gerstenhaber algebra $(\mathcal{H}^*,\cup,[\ ,\ ])$ together with a morphism \[ \Delta:\mathcal{H}^{n+1} \to \mathcal{H}^n \] such that $\Delta^2 = 0$ and \[
[f,g] = (-1)^{|f|+1} \big( \Delta(f \cup g) - \Delta(f) \cup g - (-1)^{|f|} f \cup \Delta(g) \big). \]
Recall that $H^0(A,M)=M^A=\{m \in M | ma=am \ for \ all \ a\in A \}$ for an $A$-bimodule $M$. \begin{Definition} Let $M$ be an $A$-bimodule. A morphism $\psi: M \otimes_A M \to M$ of $A$-bimodules is called an $A$-\textit{structural map} if it is \textit{associative}, that is \[ \psi(m_1 \otimes \psi(m_2 \otimes m_3)) = \psi(\psi(m_1 \otimes m_2) \otimes m_3) \] for all $m_1,m_2,m_3 \in M$, and $\psi$ is unital in the sense that there is $1_M\in H^0(A,M)$ such that $\psi(1_M\otimes m)=\psi(m\otimes 1_M)=m$ for all $m\in M$. \end{Definition}
\begin{Remark} Let $\psi: M \otimes_A M \to M$ be an $A$-structural map. Then the $\cup$-product \[
\cup : H^n(A,M) \otimes H^m(A,M) \to H^{n+m}(A,M\otimes_A M) \] can be composed with $\psi$ to obtain \[
\cup_\psi: H^n(A,M) \otimes H^m(A,M) \to H^{n+m}(A,M), \] that is \[
(f \cup_\psi g) (a_1 \otimes \cdots \otimes a_{n+m}) := \psi\big( f(a_1 \otimes \cdots \otimes a_n) \otimes g(a_{n+1} \otimes \cdots \otimes a_{n+m}) \big). \] Our assumptions on $\psi$ imply that $H^\bullet(A,M)$ is an associative and unital $k$-algebra. \end{Remark} We will denote $H^\bullet_\psi(A,M)$ the $k$-algebra $H^\bullet(A,M)$ endowed with the $\cup_\psi$-product. In case $M=A^*$, we have the following. \begin{lemm}
Let $A$ be an associative unital $k$-algebra and let $\psi:A^* \otimes_A A^* \to A^*$ be an $A$-structural map. Then $H^\bullet_\psi(A,A^*)$ is a Gerstenhaber algebra. \end{lemm} \begin{proof}
Let $d^*$ be the differential on the complex that calculates $H^\bullet(A,A^*)$ and let $f,g\in H^\bullet(A,A^*)$ be homogeneous elements. The following relation is well known, see \cite{Gerstenhaber},
\[
f\cup g - (-1)^{|f||g|}g\cup f = d^*(g) \bar{\circ} f + (-1)^{|f|} d^*(g\bar{\circ} f) + (-1)^{|f|-1} g \bar{\circ} d^*(f),
\]
where $g \bar{\circ} f (a_1 \otimes \cdots \otimes a_{|f|+|g|-1})$ is by definition
\[
\sum_{i=0}^{|g|}(-1)^{j}g(a_1 \otimes \cdots a_{i-1} \otimes f(a_i \otimes \cdots \otimes a_{i+|f|-1}) \otimes a_{i+|f|} \otimes \cdots \otimes a_{|f|+|g|-1} ),
\]
for $j=(i-1)(|f|-1)$. If $f$ and $g$ are cocycles, we get that the cup product is graded commutative and since $\psi$ is $k$-linear we get that the $\cup_\psi$-product is graded commutative.
Define the bracket in terms of $\bar{B}$ and the $\cup_\psi$-product as
\[
[f,g]_\psi:=(-1)^{(|f|-1)|g|} \big( \bar{B}(f \cup_\psi g) - \bar{B}(f) \cup_\psi g - (-1)^{|f|} f \cup_\psi \bar{B}(g) \big).
\]
\iffalse
and as a consequence we get the graded Jacobi identity
\[
[f,[g,h]_\psi]_\psi = [[f,g]_\psi,h]_\psi + (-1)^{(|f|-1)(|g|-1)} [g,[f,h]_\psi]_\psi
\]
and the graded Poisson identity
\[
[f,g \cup_\psi h]_\psi = [f,g]_\psi \cup_\psi h + (-1)^{(|f|-1)|g|} g \cup_\psi [f,h]_\psi.
\]
\fi
Hence the graded $k$-module $H^\bullet_\psi(A,A^*)$ with the $\cup_\psi$-product and the bracket $[\ ,\ ]_\psi$ is a Gerstenhaber algebra. \end{proof} \begin{Theorem}
Let $A$ be an associative unital $k$-algebra and let $\psi:A^* \otimes_A A^* \to A^*$ be an $A$-structural map. Then the data $\left(H^\bullet_\psi(A,A^*),\cup_\psi, [\ ,\ ]_\psi,\bar{B}\right)$ is a BV-algebra. \end{Theorem}
\begin{proof}
Since the following diagram is commutative
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2em,column sep=2em]
{
HH_n(A) & HH_{n+1}(A) & HH_{n+2}(A)\\
H^n(A,A^*)^* & H^{n+1}(A,A^*)^* & H^{n+2}(A,A^*)^*.\\
};
\path[-stealth]
(m-1-1) edge node [above] {$B$} (m-1-2)
(m-1-2) edge node [above] {$B$} (m-1-3)
(m-2-1) edge node [above] {$\bar{B}^*$} (m-2-2)
(m-2-2) edge node [above] {$\bar{B}^*$} (m-2-3)
(m-1-1) edge node [left] {$\varphi$} (m-2-1)
(m-1-2) edge node [left] {$\varphi$} (m-2-2)
(m-1-3) edge node [left] {$\varphi$} (m-2-3);
\end{tikzpicture}
\]
we have that $\bar{B}^2=0$. Then $H^\bullet_\psi(A,A^*)$ is a BV-algebra with the bracket defined as in the last lemma. \end{proof}
\section{Frobenius and Symmetric algebras} Assume that $A$ is a symmetric algebra, i.e. a finite dimensional algebra with a symmetric, associative and non-degenerate bilinear form $<,>: A\otimes A \to k$, where associative means \[ <ab,c>=<a,bc> \] for all $a,b,c\in A$. The bilinear form defines an isomorphism of $A$-bimodules $Z:A \to A^*$ given by $Z(a)=<a,->$. It is shown in \cite{Tradler} that this defines a $BV$-operator on Hochschild cohomology, where $\Delta f$ is defined such that for $f \in HH^n(A)$ we have
\[
<\Delta f (a_1 \otimes \cdots \otimes a_{n-1}),a_n> = \sum_{i=1}^n (-1)^{i(n-1)}<f(a_i \otimes \cdots a_n \otimes a_1 \cdots \otimes a_{i-1}),1>.
\] \begin{Corollary}
If $A$ is a symmetric algebra, then there is an $A$-structural map $\psi: A^* \otimes_A A^* \to A^*$ such that the BV-algebras $HH^\bullet(A)$ and $H^\bullet_\psi(A,A^*)$ are isomorphic. \end{Corollary}
\begin{proof}
Let $Z:A \to A^*$ be the isomorphism of $A$-bimodules given by the bilinear form of $A$. We will denote $Z_*:HH^\bullet(A) \to H^\bullet_\psi(A,A^*)$ the isomorphism induced by composition with $Z$. Then the following diagram is commutative
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2em,column sep=2em]
{
HH^n(A) & HH^{n-1}(A)\\
H^n(A,A^*) & H^{n-1}(A,A^*).\\
};
\path[-stealth]
(m-1-1) edge node [above] {$\Delta$} (m-1-2)
(m-2-1) edge node [above] {$\bar{B}$} (m-2-2)
(m-1-1) edge node [left] {$Z_*$} (m-2-1)
(m-1-2) edge node [right] {$Z_*$} (m-2-2);
\end{tikzpicture}
\]
Indeed,
\[
\begin{array}{l}
(\bar{B} \circ Z_*)([f]) (a_1 \otimes \cdots \otimes a_{n-1})(a_0) \\
= \bar{B}(Z \circ f)(a_1 \otimes \cdots \otimes a_{n-1})(a_0) \\
= \sum_{i=0}^{n-1} (-1)^{(n-1)i} (Z\circ f)(a_i \otimes \cdots \otimes a_{n-1} \otimes a_0 \otimes \cdots \otimes a_{i-1}) (1) \\
= Z \circ \left( \sum_{i=0}^{n-1} (-1)^{(n-1)i} f(a_i \otimes \cdots \otimes a_{n-1} \otimes a_0 \otimes \cdots \otimes a_{i-1}) (1) \right) \\
= (Z_* \circ \Delta)([f]) (a_1 \otimes \cdots \otimes a_{n-1})(a_0).
\end{array}
\]
Using the isomorphism given by the product $A \otimes_A A \cong A$ the transport of the algebra structure of $A$ to $A^*$ via $Z$ gives the $A$-structural map
\[
\psi = Z \circ (Z \otimes Z)^{-1}: A^* \otimes_A A^* \to A^*.
\]
This isomorphism satisfies the associativity and unity conditions of remark 3.1, since the product of $A$ is associative and has a unit. Even more, there are commutative diagrams where the vertical maps are isomorphisms
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2em,column sep=2em]
{
HH^n(A) \otimes HH^m(A) & HH^{n+m}(A)\\
H^n(A,A^*) \otimes H^m(A,A^*) & H^{n+m}(A,A^*),\\
};
\path[-stealth]
(m-1-1) edge node [above] {$\cup$} (m-1-2)
(m-2-1) edge node [above] {$\cup_\psi$} (m-2-2)
(m-1-1) edge node [left] {$Z_* \otimes Z_*$} (m-2-1)
(m-1-2) edge node [right] {$Z_*$} (m-2-2);
\end{tikzpicture}
\]
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2em,column sep=2em]
{
HH^n(A) \otimes HH^m(A) & HH^{n+m-1}(A)\\
H^n(A,A^*) \otimes H^m(A,A^*) & H^{n+m-1}(A,A^*).\\
};
\path[-stealth]
(m-1-1) edge node [above] {$[\ ,\ ]$} (m-1-2)
(m-2-1) edge node [above] {$[\ ,\ ]_\psi$} (m-2-2)
(m-1-1) edge node [left] {$Z_* \otimes Z_*$} (m-2-1)
(m-1-2) edge node [right] {$Z_*$} (m-2-2);
\end{tikzpicture}
\]
Indeed,
\[
\begin{array}{lcl}
Z_*(f) \cup_\psi Z_*(g) & = & \psi \circ (Z \otimes Z) (f\cup g) \\
& = & Z \circ (Z \otimes Z)^{-1} \circ (Z \otimes Z) (f\cup g) \\
& = & Z \circ (f \cup g) \\
& =& Z_* (f \cup g), \\
\end{array}
\]
and
\[
\begin{array}{l}
[Z_*f,Z_*g]_\psi \\
= (-1)^{(|f|-1)|g|} \Big( \bar{b}\big( Z_*f \cup_\psi Z_*g \big) - \bar{b}(Z_*f)\cup_\psi Z_*g - (-1)^{|f|}Z_*f \cup_\psi \bar{b}(Z_*g) \Big) \\
= (-1)^{(|f|-1)|g|} \Big( \bar{b}\big( Z_* (f \cup_\psi g) \big) - Z_*(\Delta f) \cup_\psi Z_*g - (-1)^{|f|} Z_*f \cup_\psi Z_* (\Delta) g \Big) \\
= (-1)^{(|f|-1)|g|} \Big( Z_* \Delta(f \cup g) - Z_*(\Delta f \cup g) - (-1)^{|f|} Z_*(f \cup \Delta g) \Big) \\
= (-1)^{(|f|-1)|g|} Z_* \Big( \Delta(f \cup g) - (\Delta f \cup g) - (-1)^{|f|} (f \cup \Delta g) \Big) \\
= Z_* [f,g]. \\
\end{array}
\]
Commutativity of these diagrams implies that the $BV$-algebras $HH^\bullet(A)$ and $H^\bullet_\psi(A,A^*)$ are isomorphic. \end{proof}
\begin{Remark} Observe that choosing $\Delta:=(Z_*)^{-1} \bar{b} Z_* $ gives $HH^\bullet(A)$ the structure of a BV-algebra. \end{Remark}
Assume now that $A$ is a Frobenius algebra, i.e. a finite dimensional algebra with a non-degenerate associative bilinear form $<-,->:A \times A \to k$. For every $a\in A$ there exist a unique $\mathfrak{N}(a) \in A$ such that $<a,->=<-, \mathfrak{N}(a)>$. The map $\mathfrak{N}:A \to A$ turns out to be an algebra isomorphism and is called the \textit{Nakayama} automorphism of the Frobenius algebra $A$. Following \cite{Lambre} we consider the $A$-bimodule $A_\mathfrak{N}$ whose underlying $k$-module is $A$ and the corresponding actions are \[
a x b = a x \mathfrak{N} (b). \] Hence the morphism $Z:A_\mathfrak{N} \to A^*$ given by $Z(a)=<a,->$ is an isomorphism of $A$-bimodules \cite{Lambre}. The morphism \[
\mu : A_\mathfrak{N} \otimes_A A_\mathfrak{N} \to A_\mathfrak{N} \] given by $\mu(a \otimes b)= a \mathfrak{N}(b)$ is a morphism of $A$-bimodules since \[
\mu(ab \otimes_A cd) = ab \mathfrak{N}(cd) = ab \mathfrak{N}(c)\mathfrak{N}(d) = ab \mathfrak{N}(c)d = a\mu(b \otimes_A c)d \] and it is well-defined since \[ \mu(a c \otimes b) = \mu(a \mathfrak{N}(c) \otimes b) = a \mathfrak{N}(c) \mathfrak{N}(b) = a \mathfrak{N}(cb) = \mu(a \otimes cb) \] for all $a,b,c,d\in A_\mathfrak{N}$. It is also unital and associative since $\mathfrak{N}(1)=1$, and \[
\begin{array}{lcl}
\mu \big( \mu(a \otimes b) \otimes c \big) & = & \mu\big(a \mathfrak{N}(b) \otimes c \big) \\
& = & \mu\big(a \otimes b c \big) \\
& = & \mu\big(a \otimes b \mathfrak{N}(c) \big) \\
& = & \mu\big(a \otimes \mu (b \otimes c) \big). \\
\end{array} \] Then $\psi=Z \circ \mu \circ (Z \otimes_A Z)^{-1} : A^* \otimes_A A^* \to A^*$ is an $A$-structural map. \begin{Corollary}
Let $A$ be a Frobenius algebra with diagonalizable Nakayama automorphism, then the BV-algebras $HH^\bullet_\psi(A,A^*)$ and $HH^\bullet(A,A_\mathfrak{N})$ are isomorphic. \end{Corollary} \begin{proof}
Hochschild cohomology of $A$ with coefficients in $A_\mathfrak{N}$ is isomorphic, see \cite{Lambre}, to Hochschild cohomology of $A$ with coefficients in $A_\mathfrak{N}$ corresponding to the eigenvalue $1\in k$ of the linear transformation $\mathfrak{N}$,
\[
HH^\bullet(A,A_\mathfrak{N}) \cong HH_1^\bullet(A,A_\mathfrak{N}).
\]
The BV-operator of $HH^\bullet(A,A_\mathfrak{N})$ is the transpose of Connes' differential
\[
B_\mathfrak{N}([a_0 \otimes \cdots \otimes a_n]) = \left[ \sum_{i=0}^n (-1)^{in} a_i \otimes \dots \otimes a_n \otimes \mathfrak{N}(a_0) \otimes \cdots \otimes \mathfrak{N}(a_{i-1}) \right] ,
\]
with respect to the duality given in \cite{Lambre}. By finite dimensionality arguments, this morphism turns out to be the $k$-dual of $\varphi$, namely
\[
\partial:HH_\bullet(A,A^*)^* \to HH^\bullet(A).
\]
The compatibility conditions for the $\cup$-product and the Gerstenhaber bracket are proved similarly. \end{proof}
\iffalse The BV-operator given in \cite{Volkov} is defined in terms of the bilinear form of the Frobenius algebra $A$. Let $\Delta : C^n(A) \to C^{n-1}(A)$ be given by \[
<\Delta f (a_1 \otimes \cdots \otimes a_{n-1}),a_n> = < \sum_{i=1}^n (-1)^{i(n-1)} \Delta_i f(a_1 \otimes \cdots \otimes a_{n-1}),1> \] where \[
\begin{array}{l}
<\Delta_if(a_1 \otimes \cdots \otimes a_{n-1}),a_n> \\
= \ <f(a_i \otimes \cdots \otimes a_{n-1} \otimes a_n \otimes \mathfrak{N}(a_0) \otimes \cdots \otimes \mathfrak{N}(a_{i-1})),1>.
\end{array} \] Y. Volkov proves that $\Delta$ defines a $BV$-operator on the elements of $HH^\bullet(A)$ for which \[
\mathfrak{N}^{-1} \left( f(\mathfrak{N}(a_1 \otimes \cdots \otimes a_n)) \right) = f(a_1 \otimes \cdots \otimes a_n). \] \fi
\section{Monomial path algebras} Let $Q$ be a finite quiver with $n$ vertices and consider a monomial path algebra $A=kQ/\left<T\right>$, that is, $T$ is a subset of paths in $Q$ of length greater or equal than 2. We do not require the algebra $A$ to be finite dimensional. We write $s(\omega)$ and $t(\omega)$ for the source and the target of $\omega$. A basis $P$ of $A$ is given the set of paths of $Q$ which do not contain paths of $T$. Let $P^\vee$ be the dual basis of $P$, and for $\omega \in P$ we denote $\omega^\vee$ its dual. Let $\alpha \in P$ and define $\omega_{/ \alpha}$ as the subpath of $\omega$ that starts in $s(\omega)$ and ends in $s(\alpha)$ if $\alpha$ is a subpath of $\omega$ such that $t(\alpha)=t(\omega)$, and zero otherwise. Let $\beta \in P$ and define $_{\beta \texttt{\symbol{92}} }\omega$ as the subpath of $\omega$ that starts at $t(\beta)$ and ends in $t(\omega)$ if $\beta$ is a subpath of $\omega$ such that $s(\beta)=s(\omega)$, and zero otherwise. The canonical $A$-bimodule structure of $A^*$ is isomorphic to the one given by linearly extending the following action \[
\alpha.\omega^\vee.\beta = (_{\beta \texttt{\symbol{92}} }\omega_{/ \alpha})^\vee. \] Now we construct an $A$-structural map for $A^*$. For $\omega, \gamma \in P$ we define \[
\omega^\vee \cdot \gamma^\vee = \left[ \begin{array}{ll}
(\gamma \omega)^\vee & if \ t(\omega)=s(\gamma) \\
0 & otherwise \\
\end{array} \right] \] and extend by linearity. Observe that $\gamma \ _{\beta \texttt{\symbol{92}} }\omega = \gamma_{/ \beta} \ \omega$, then \[
(\omega^\vee.\beta) \cdot \gamma^\vee = (_{\beta \texttt{\symbol{92}} }\omega)^\vee \cdot \gamma^\vee = (\gamma \ _{\beta \texttt{\symbol{92}} }\omega)^\vee = (\gamma_{/ \beta} \ \omega)^\vee = \omega^\vee \cdot (\gamma_{/ \beta})^\vee = \omega^\vee \cdot (\beta.\gamma^\vee). \] Therefore, by linearly extending $\psi(\omega^\vee \otimes \gamma^\vee) = \omega^\vee \cdot \gamma^\vee$ we get a morphism of $k$-modules \[
\psi: A^* \otimes_A A^* \to A^*. \] It is a morphism of $A$-bimodules since \[
\alpha . (\omega^\vee \cdot \gamma^\vee) . \beta = \alpha.(\gamma \omega)^\vee.\beta = (_{\beta \texttt{\symbol{92}} }\gamma \omega_{/ \alpha})^\vee = (\omega_{/ \alpha})^\vee \cdot (_{\beta \texttt{\symbol{92}} }\gamma)^\vee = (\alpha.\omega^\vee)\cdot (\gamma^\vee.\beta). \] The morphism $\psi$ is associative since the product of $A$ is associative. Let $e_1,...,e_n$ be the idempotents of $A$ given by the vertices of $Q$. Define $1^*=e_1^\vee+\cdots+e_n^\vee$ and observe that if $\alpha$ is a basic element of $A$ of length greater or equal than one then \[
1^*.\alpha = e_1^\vee.\alpha + \cdots + e_n^\vee.\alpha = 0 = \alpha.e_1^\vee + \cdots + \alpha.e_n^\vee = \alpha.1^* \] for every $i=1,...,n$. Moreover, \[
1^*.e_i = e_1^\vee.e_i + \cdots + e_n^\vee.e_i = e_i^\vee = e_i.e_1^\vee + \cdots + e_i.e_n^\vee = e_i.1^* \] so we get that $1^*\in H^0(A,A^*)$. Finally, \[
1^*\cdot \omega^\vee = e_1^\vee \cdot \omega^\vee + \cdots + e_n^\vee\cdot \omega^\vee = e_{t(\omega)}^\vee \cdot \omega^\vee = \omega^\vee \] and analogously $\omega^\vee \cdot 1^* = \omega^\vee$. Therefore $\psi$ is an $A$-structural map.
\begin{Corollary}
Let $A$ be a monomial path algebra. Then $H_\psi^\bullet(A,A^*)$ is a BV-algebra. \end{Corollary}
\end{document} | arXiv |
Virasoro conjecture
In algebraic geometry, the Virasoro conjecture states that a certain generating function encoding Gromov–Witten invariants of a smooth projective variety is fixed by an action of half of the Virasoro algebra. The Virasoro conjecture is named after theoretical physicist Miguel Ángel Virasoro. Tohru Eguchi, Kentaro Hori, and Chuan-Sheng Xiong (1997) proposed the Virasoro conjecture as a generalization of Witten's conjecture. Ezra Getzler (1999) gave a survey of the Virasoro conjecture.
References
• Getzler, Ezra (1999), "The Virasoro conjecture for Gromov-Witten invariants", in Wiśniewski, Jarosław; Szurek, Michał; Pragacz, Piotr (eds.), Algebraic geometry: Hirzebruch 70 (Warsaw, 1998), Contemporary Mathematics, vol. 241, Providence, R.I.: American Mathematical Society, pp. 147–176, arXiv:math/9812026, Bibcode:1998math.....12026G, doi:10.1090/conm/241/03634, ISBN 978-0-8218-1149-8, MR 1718143
• Eguchi, Tohru; Hori, Kentaro; Xiong, Chuan-Sheng (1997), "Quantum cohomology and Virasoro algebra", Physics Letters B, 402 (1): 71–80, arXiv:hep-th/9703086, Bibcode:1997PhLB..402...71E, doi:10.1016/S0370-2693(97)00401-2, ISSN 0370-2693, MR 1454328
| Wikipedia |
\begin{document}
\title{Closures of Certain Matrix Varieties and Applications}
\author{William Chang} \address{Department of Mathematics, UCLA, 520 Portola Plaza, Los Angeles, CA 90095, USA} \email{[email protected]}
\author{Robert M. Guralnick} \address{R.M. Guralnick, Department of Mathematics, University of Southern California, Los Angeles, CA 90089-2532, USA} \email{[email protected]}
\dedicatory{Dedicated to the memory of Irina Suprunenko}
\begin{abstract} We prove some results about closures of certain matrix varieties consisting of elements with the same centralizer dimension. This generalizes a result of Dixmier and has applications to topological generation of simple algebraic groups.
\end{abstract}
\keywords{Matrix varieties, Grassmanians, generation of algebraic groups, Dixmier}
\subjclass[2020]{Primary: 14L35, 20G15; secondary 15A04}
\date{\today}
\maketitle
\section{Introduction}\label{s:intro}
Let $k$ be an algebraically closed field of characteristic $p \ge 0$ and let $M_n(k)$ denote the algebra of $n \times n$ matrices over $k$.
If $A \in M_n(k)$, we let $D(A)$ denote the subset of $M_n(k)$ consisting of all elements with the same Jordan canonical form as $A$ up to changing the eigenvalues (but with the same number of distinct eigenvalues).
Note that $D(A)$ is closed under conjugation by $\operatorname{GL}_n(k)$ and any two elements in $D(A)$ have conjugate centralizers. More generally, the fixed space of any two elements in $D(A)$ on the $d$th Grassmanian, $\mathcal{G}_d$ are also conjugate and in particular the dimension of the fixed space on $\mathcal{G}_d$ is constant on $D(A)$.
We generalize a result of Dixmier \cite{D} (for the case of type A). Let $G$ be a simple algebraic group over an algebraically closed field. A unipotent element $u \in G$ is called parabolic if there exists a parabolic subgroup $P$ such that $C_P(u)$ is an open dense subset of the unipotent radical of $P$. One similarly defines parabolic nilpotent elements in the Lie algebra of $G$. Diximier proved (in characteristic $0$) that if $A$ is parabolic, then $A$ is the limit of semisimple elements such that the dimensions of the centralizers of each of the semisimple elements is the same as the dimension of the centralizer of $A$. This was used by Richardson \cite{Ri} to prove that the commuting variety of a reductive Lie algebra $\mathfrak{g}$ in characteristic $0$ is an irreducible variety of dimension equal to $\dim \mathfrak{g} + \mathrm{rank}(\mathfrak{g})$ (this was proved for $\mathfrak{gl}_n$ by Motzkin and Taussky \cite{MT} in all characteristics). Levy \cite{Le} observed that Dixmier's result goes through in good characteristic as well and so Richardson's proof for the irreducibility of the variety of commuting pairs goes through in good characteristic as well.
We generalize Dixmier's result for $\mathfrak{gl}_n$ by obtaining the same conclusion in arbitrary characteristic (using the Zariski topology) and for arbitrary elements.
We also prove a sandwich result by showing that any such element is trapped between semisimple and equipotent elements (i.e. having a single eigenvalue) all with the same centralizer dimension. We also need to verify that our semisimple and nilpotent elements have some extra properties (required for applications). We combine this with a recent result of Guralnick and Lawther \cite[Prop. 3.2.1]{GL} to conclude that all the fixed point spaces on all Grassmanians are the same for elements in these subvarieties.
This will be used in \cite{GG1} to deduce some results related to topological generation of simple algebraic groups and to extend the results of \cite{BGG} from semisimple and unipotent classes to all classes.
Our main result is the following:
\begin{thm} \label{t:dixmier} Let $A \in M_n(K)$. Then there exist a semsimple element $S \in M_n(k)$ and an equipotent element $N \in M_n(k)$ such that the following hold: \begin{enumerate} \item The Zariski closure of $D(S)$ contains $D(A)$; \item The Zariski closure of $D(A)$ contains $D(N)$; \item $A,S$ and $N$ all have centralizers of the same dimension; and \item If $1 \le d < n$, then $A, S$ and $N$ all have fixed point spaces of the same dimension on $\mathcal{G}_d$. \end{enumerate} \end{thm}
Note that in our proof controlling the determinant or trace of the elements is not an issue and so the same results hold for $\mathfrak{sl}_n(k)$, $\operatorname{GL}_n(k)$ and $\operatorname{SL}_n(k)$. We make some remarks in the last section regarding the symplectic and orthogonal cases.
We also give some consequences regarding generation of simple algebraic groups that will be required in the sequel \cite{GG1}. (3) was observed in \cite{GG} for the special case when $A$ is semisimple (i.e. the existence of a nilpotent class with the same centralizer dimension and with the largest eigenspace of the same dimension) and was used to prove results about generic stabilizers. We give the proof in the next section and some applications in the following section.
The first author thanks USC for support by a Provost's Undergraduate Research Grant. The second author was partially support by the NSF grant DMS-1901595 and a Simons Foundation Fellowship 609771. We thank the referee for their useful comments and careful reading of the manuscript.
\section{Proofs}
If $A \in M_n(k)$ has $m$ distinct eigenvalues, we set $\Delta(A)$ to be the set of $m$ partitions $\Delta_i$ where $\Delta_i$ is the partition associated to the Jordan blocks of each generalized eigenspace of $A$.
Let $X(\Delta) =\{ A \in M_n(k) | \Delta(A)=\Delta\}$ for $\Delta$ a set of $m$ partitions whose sizes add up to $n$. Note that if $A,B \in X(\Delta)$, then the centralizers of $A$ and $B$ are conjugate and in particular have the same dimension.
Let $\Sigma\Delta = \sum \Delta_i$ where by the sum of partitons we mean the usual addition (just view a partition as row vector with nonincreasing entries -- adding $0$'s to make the vectors have the same length). Given a partition $\Gamma$, let $\Gamma'$ be the transpose partition.
If $\Gamma$ is a partition of $n$, let $U(\Gamma)$ be set of all matrices with a single eigenvalue and the sizes of the Jordan blocks be given by $\Gamma$ and let $S(\Gamma)$ be the set of semisimple matrices with the dimensions of the eigenspaces given by $\Gamma$.
We first note the following elementary result.
\begin{lem} \label{l:basic} Let $A$ be an upper triangular matrix with diagonal entries contained in the set $\{a_1, \ldots a_s\}$ with the multiplicity of $a_i$ equal to $d_i$. Assume that the entries in the $i, i+1$ positions are all nonzero. Then the Jordan canonical form for $A$ consists of one Jordan block for each $a_i$ and it has size $d_i$. \end{lem}
\begin{proof} Note that $A$ is regular (i.e. its centralizer has dimension $n$). This follows by noting $A$ is cyclic (i.e. the column vector $e_n = (0,0, \ldots, 1)$ generate the module of column vectors for the algebra $k[A]$). Thus, the characteristic and minimal polynomials of $A$ are both $\Pi_{i=1}^s (x - a_i)^{d_i}$ and the result follows. \end{proof}
Let $A(a_1, \ldots, a_s; e_1, \ldots, e_s)$ denote the matrix above with the entries $i, i+1$ all equal to $1$ and all other entries (besides the diagonal) $0$. Consider the $s$-dimensional affine space of all such matrices (as the $a_i$ range over all possibilities). Note that the
generic points in the variety of all such matrices (i.e. with $a_i \ne a_j$ for $i \ne j$) are all regular. If all the $a_i=a$, then
the matrix is a regular equipotent matrix.
If $A \in M_n(k)$, let $\mathcal{G}_d^A$ be the fixed space of $A$ on $\mathcal{G}_d$ (i.e. the set of $d$-dimensional subspaces $W$ of $k^n$ with $AW \subseteq W$).
Our first result is the following:
\begin{thm} \label{t:unipotent} Let $\Delta$ be as above. Let $\Gamma = \Sigma\Delta$. \begin{enumerate} \item The closure of $S(\Gamma')$ contains $X(\Delta)$. \item The closure of $X(\Delta)$ contains $U(\Gamma)$. \item If $A \in S(\Gamma') \cup X(\Delta) \cup U(\Gamma)$, then $\dim C(A) = \sum d_i^2$ where $\Gamma'$ is the partition $d_1 \ge d_2 \ge \ldots$. \item If $A \in S(\Gamma') \cup X(\Delta) \cup U(\Gamma)$, then $\dim \mathcal{G}_d^A$ is constant. \end{enumerate} \end{thm}
\begin{proof} We prove (1). First suppose that $m=1$. Then $X(\Delta)$ consists of elements with a single eigenvalue. Then $\Gamma=\Delta$.
Let $m_1 \ge m_2 \ge \ldots \ge m_s$ be the pieces of the partition of $\Gamma$. An element of $S(\Gamma')$ has has $m_1$ distinct eigenvalues and more generally has $m_j$ distinct eigenvalues having multiplicity at least $j$. Set $d=m_1$.
For any $a_1, \ldots, a_d \in k$, let
$B$ be the matrix with diagonal blocks of size $m_i$ with the $i$th diagonal block being $A(a_1, \ldots, a_{m_i}, 1, \ldots, 1)$.
Note that if the $a_i$ are distinct, then $B$ is semisimple and is in $S(\Gamma')$ and this is a generic point in the affine
space of dimension $m$ obtained by allowing all possibilities for $a_i$. In the closure are the elements when all $a_i$ are equal
and so $X(\Gamma)$ is in the closure of $S(\Gamma')$ as claimed.
In the general case, we just choose a $B_j$ as above corresponding to the Jordan blocks corresponding to the $j$th eigenvalue
of an element in $X(\Delta)$ giving another copy of affine space. The result follows by considering each block separately.
We prove (2) similarly. Let $A \in X(\Delta)$. First consider the case that each partition in $\Delta$ has just one part. By taking generic elements and taking elements in the closure with a single eigenvalue, we see we can obtain a regular element with a single Jordan block (and any eigenvalue). In general, we decompose $A$ into pieces corresponding to $\Sigma\Delta=\Gamma$ with the $j$th piece having a single Jordan block corresponding to each eigenvalue of size the $j$th part of $\Gamma$ and the closure contains elements with a single eigenvalue with the partition of Jordan blocks corresponding to $\Gamma$.
The formulas for the dimension of centralizers and semisimple elements and nilpotent elements give that the centralizers have the dimension for elements in $S(\Gamma')$ and $U(\Gamma)$. This observation together with (1) and (2) proves (3).
The equality of the dimension of the fixed spaces on Grassmanian follows for the unipotent and semisimple elements by \cite[Prop. 3.2.1]{GL}. Then by (1) and (2), (4) follows. \end{proof}
Clearly, the result (with essentially the same proof) holds with $M_n(k)$ replaced by either $\operatorname{GL}_n(k)$ or $\operatorname{SL}_n(k)$.
\section{An Application}
We now apply Theorem \ref{t:unipotent} to the action on Grassmanians. Recall that $A \in M_n(k)$ fixes a subspace $W$ means that $AW \subseteq W$.
Let $d(A)$ be the dimension of the largest eigenspace of $A$. Observe that if $\Gamma=\Sigma\Delta$ with $\Delta=\Delta(A)$, then $d(A) = d(B) = d(C)$ by Theorem \ref{t:unipotent} applied to $\mathcal{G}_1$ (or by observation).
We now generalize the result \cite[Lemma 3.35]{BGG} which was stated for unipotent or semisimple elements.
\begin{thm} \label{t:sum} Let $A_1, \ldots, A_s \in M_n(k)$. Assume that $\sum d(A_i) \le (s-1)n$. Then one of the following holds: \begin{enumerate} \item Each $A_i$ has a quadratic minimal polynomial and $s=2$; or \item $\sum_i \dim \mathcal{G}_e^{A_i} < (s-1) \dim \mathcal{G}_e = (s-1)e(n-e)$ for $1 \le e \le n/2$. \end{enumerate} \end{thm}
\begin{proof} There is no harm is adding a scalar to each $A_i$ and so we may assume that the $A_i$ are all invertible. This is proved in \cite[Lemma 3.35]{BGG} in the case each $A_i$ is either semisimple or unipotent.
The previous result shows that this implies the result for arbitrary $A_i$. \end{proof}
\begin{cor} \label{c:2-space} Let $H$ be a closed irreducible subgroup of $\operatorname{GL}_n(k)$ that has a dense orbit $\mathcal{O}$ on $\mathcal{G}_2$. Let $x_1, \ldots, x_s \in H$ with $\sum d(x_i) \le (s-1)n$. Assume moreover that either $s > 2$ or $s=2$ and $x_1$ does not have a quadratic minimal polynomial. Then for generic $h_i \in H$, $\langle x_1^{h_1}, \ldots, x_s^{h_s} \rangle$ do not fix a point of $\mathcal{O}$. \end{cor}
\begin{proof} It follows by the previous result that Theorem \ref{t:sum}(2) holds. Let $C_i$ be the $H$-conjugacy class of $x_i$. By considering $H$ acting on $\mathcal{O}$, it follows by \cite[Lemma 3.14]{BGG} that the subset of $C_1 \times C_2 \times \ldots \times C_s$ that have a fixed point on $\mathcal{O}$ is contained in a proper closed subvariety of $C_1 \times C_2 \times \ldots \times C_s$. \end{proof}
Note that this result holds for $H$ a symplectic or special orthogonal group acting on nondegenerate $2$-spaces.
There should be an analogous but more complicated result both for higher dimensional Grassmanians and also for actions on the variety of totally singular spaces of a given dimension.
We note that one can generalize Dixmier's result for arbitrary elements in the symplectic and orthogonal Lie algebras in characteristic not $2$. Indeed, any element that does not have $0$ as an eigenvalue is in the Zariski closure of the set of semisimple elements with the same centralizer dimension. There is a similar statement for the groups. In good characteristic, Diximier's results still holds for parabolic nilpotent and unipotent elements.
We do not require this for our application (even for groups of this type) since these groups have dense open orbits on the Grassmanians (indeed, they have only finitely many orbits on each Grassmanian) and so the result for $\mathfrak{gl}$ is sufficient.
\section{Declarations}
The authors declare no competing interests. Both authors contributed equally.
\end{document} | arXiv |
\begin{definition}[Definition:Degenerate Connected Set]
Let $T = \struct {S, \tau}$ be a topological space.
Let $H \subseteq S$ be a subset of $T$.
$H$ is a '''degenerate connected set''' of $T$ {{iff}} it is a connected set of $T$ containing exactly one element.
\end{definition} | ProofWiki |
Convergence of the series using power series
Does the following series converge or diverge?
$$\sum_{n=1}^{\infty}\frac{{(-1)}^{n}}{n^{\frac2n}}$$
$$\sum_{n=1}^{\infty}\frac{{(-1)}^{n} (\ln n)^2}{n^{\frac12}}$$
I am trying to use the power series to do a direct comparison test and solve both of these questions. However the negative values in the numerator is throwing me off, i can't seem to find the right value to compare. Anyone could point me in the right direction?
sequences-and-series convergence
PhantomPhantom
$\begingroup$ "I have two questions" What are they? $\endgroup$ – Did Mar 27 '14 at 9:18
$\begingroup$ @Did The two questions which i have numbered as Qn1 and Qn2. "I have two questions" = "I am given two questions to solve". $\endgroup$ – Phantom Mar 27 '14 at 10:15
$\begingroup$ The things labelled Qn1 and Qn2 are not questions, but formulas of quantities depending on $n$. $\endgroup$ – Did Mar 27 '14 at 12:03
$\begingroup$ @Did Ah..okay, i get what you mean now. My mistake :) $\endgroup$ – Phantom Mar 27 '14 at 12:05
$\begingroup$ Still no question in Qn1 and Qn2. $\endgroup$ – Did Mar 27 '14 at 12:07
Notice that $$\lim_{n\to\infty}\frac{1}{n^{2/n}}=1\ne0$$ so the first series is divergent.
For the second series let $$g(x)=\frac{(\ln x)^2}{\sqrt x}$$ then using the derivative prove that this function is increasing (to $0$) for a sufficient large $x$ and conclude the convergence of this series using the alternating series test.
$\begingroup$ The first series does not converge. $\endgroup$ – 5xum Mar 27 '14 at 8:37
$\begingroup$ Nice work, Sami! $\endgroup$ – Namaste Mar 28 '14 at 11:44
Both series are so called alternating series. For them a very nice test shows if they converge. Here is the link to it.
For the first series, you may encounter some problems. Take a look at what the limit $$\lim_{n\to\infty}\frac{1}{n^{\frac2n}}$$ is.
5xum5xum
$\begingroup$ The first one is not alternating. Recall that the general term of an alternating series (1) goes to zero and (2) is decreasing in absolute value. $\endgroup$ – Did Mar 27 '14 at 12:04
$\begingroup$ @Did It is alternating. By definition (see en.wikipedia.org/wiki/Alternating_series), any series which has elements which switch their sign is alternating. What you described is the conditions that must be filled for an alternating series to be convergent. $\endgroup$ – 5xum Mar 27 '14 at 13:16
$$\lim_{n\to\infty}{n^{\frac 2n}}=1,$$ so $$\frac{{(-1)}^{n}}{n^{\frac2n}}\not\to 0$$ and the first series does not converge.
Martín-Blas Pérez PinillaMartín-Blas Pérez Pinilla
Not the answer you're looking for? Browse other questions tagged sequences-and-series convergence or ask your own question.
The Series( $\sum_{1}^{+ \infty}\frac{1}{n! + n}$ convergence or divergence?
does the series converge? $\sum_{n=1}^\infty\left(\frac {3}{5^n}+\frac 2n\right) $
convergence of a series..
Convergence of the power series
Series-Convergence and divergence problem
Series Convergence, Comparison theorem
On the convergence of $\sum_{n=1}^{+\infty}\frac{1}{n}\,\cos\left(\frac{\pi n}{2}\right)$
About the convergence of a real series
Determine the convergence/divergence of $\sum_{n=1}^{\infty}\frac{\ln{n!}}{n^3}$
Does the series $\sum_{n=1}^\infty 1/(n+n \cos(n))$ converge or diverge? | CommonCrawl |
Eisenstein integer
In mathematics, the Eisenstein integers (named after Gotthold Eisenstein), occasionally also known[1] as Eulerian integers (after Leonhard Euler), are the complex numbers of the form
$z=a+b\omega ,$
"Eulerian integer" and "Euler integer" redirect here. For other uses, see List of topics named after Leonhard Euler § Euler's numbers.
where a and b are integers and
$\omega ={\frac {-1+i{\sqrt {3}}}{2}}=e^{i2\pi /3}$
is a primitive (hence non-real) cube root of unity. The Eisenstein integers form a triangular lattice in the complex plane, in contrast with the Gaussian integers, which form a square lattice in the complex plane. The Eisenstein integers are a countably infinite set.
Properties
The Eisenstein integers form a commutative ring of algebraic integers in the algebraic number field $\mathbb {Q} (\omega )$ — the third cyclotomic field. To see that the Eisenstein integers are algebraic integers note that each z = a + bω is a root of the monic polynomial
$z^{2}-(2a-b)\;\!z+\left(a^{2}-ab+b^{2}\right)~.$
In particular, ω satisfies the equation
$\omega ^{2}+\omega +1=0~.$
The product of two Eisenstein integers a + bω and c + dω is given explicitly by
$(a+b\;\!\omega )\;\!(c+d\;\!\omega )=(ac-bd)+(bc+ad-bd)\;\!\omega ~.$
The 2-norm of an Eisenstein integer is just its squared modulus, and is given by
${\left|a+b\;\!\omega \right|}^{2}\,=\,{(a-{\tfrac {1}{2}}b)}^{2}+{\tfrac {3}{4}}b^{2}\,=\,a^{2}-ab+b^{2}~,$
which is clearly a positive ordinary (rational) integer.
Also, the complex conjugate of ω satisfies
${\bar {\omega }}=\omega ^{2}~.$
The group of units in this ring is the cyclic group formed by the sixth roots of unity in the complex plane: $\left\{\pm 1,\pm \omega ,\pm \omega ^{2}\right\}~,$ the Eisenstein integers of norm 1.
Euclidean domain
The ring of Eisenstein integers forms a Euclidean domain whose norm N is given by the square modulus, as above:
$N(a+b\,\omega )=a^{2}-ab+b^{2}.$
A division algorithm, applied to any dividend $\alpha $ and divisor $\beta \neq 0$, gives a quotient $\kappa $ and a remainder $\rho $ smaller than the divisor, satisfying:
$\alpha =\kappa \beta +\rho \ \ {\text{ with }}\ \ N(\rho )<N(\beta ).$
Here $\alpha ,\beta ,\kappa ,\rho $ are all Eisenstein integers. This algorithm implies the Euclidean algorithm, which proves Euclid's lemma and the unique factorization of Eisenstein integers into Eisenstein primes.
One division algorithm is as follows. First perform the division in the field of complex numbers, and write the quotient in terms of ω:
${\frac {\alpha }{\beta }}\ =\ {\tfrac {1}{\ |\beta |^{2}}}\alpha {\overline {\beta }}\ =\ a+bi\ =\ a+{\tfrac {1}{\sqrt {3}}}b+{\tfrac {2}{\sqrt {3}}}b\omega ,$
for rational $a,b\in \mathbb {Q} $. Then obtain the Eisenstein integer quotient by rounding the rational coefficients to the nearest integer:
$\kappa =\left\lfloor a+{\tfrac {1}{\sqrt {3}}}b\right\rceil +\left\lfloor {\tfrac {2}{\sqrt {3}}}b\right\rceil \omega \ \ {\text{ and }}\ \ \rho ={\alpha }-\kappa \beta .$
Here $\lfloor x\rceil $ may denote any of the standard rounding-to-integer functions.
The reason this satisfies $N(\rho )<N(\beta )$, while the analogous procedure fails for most other quadratic integer rings, is as follows. A fundamental domain for the ideal $\mathbb {Z} [\omega ]\beta =\mathbb {Z} \beta +\mathbb {Z} \omega \beta $, acting by translations on the complex plane, is the 60°–120° rhombus with vertices $0,\beta ,\omega \beta ,\beta +\omega \beta $. Any Eisenstein integer α lies inside one of the translates of this parallelogram, and the quotient $\kappa $ is one of its vertices. The remainder is the square distance from α to this vertex, but the maximum possible distance in our algorithm is only ${\tfrac {\sqrt {3}}{2}}|\beta |$, so $|\rho |\leq {\tfrac {\sqrt {3}}{2}}|\beta |<|\beta |$. (The size of ρ could be slightly decreased by taking $\kappa $ to be the closest corner.)
Eisenstein primes
For the unrelated concept of an Eisenstein prime of a modular curve, see Eisenstein ideal.
If x and y are Eisenstein integers, we say that x divides y if there is some Eisenstein integer z such that y = zx. A non-unit Eisenstein integer x is said to be an Eisenstein prime if its only non-unit divisors are of the form ux, where u is any of the six units. They are the corresponding concept to the Gaussian primes in the Gaussian integers.
There are two types of Eisenstein prime. First, an ordinary prime number (or rational prime) which is congruent to 2 mod 3 is also an Eisenstein prime. Second, 3 and each rational prime congruent to 1 mod 3 are equal to the norm x2 − xy + y2 of an Eisentein integer x + ωy. Thus, such a prime may be factored as (x + ωy)(x + ω2y), and these factors are Eisenstein primes: they are precisely the Eisenstein integers whose norm is a rational prime.
The first few Eisenstein primes of the form 3n − 1 are:
2, 5, 11, 17, 23, 29, 41, 47, 53, 59, 71, 83, 89, 101, ... (sequence A003627 in the OEIS).
Natural primes that are congruent to 0 or 1 modulo 3 are not Eisenstein primes: they admit nontrivial factorizations in Z[ω]. For example:
3 = −(1 + 2ω)2
7 = (3 + ω)(2 − ω).
In general, if a natural prime p is 1 modulo 3 and can therefore be written as p = a2 − ab + b2, then it factorizes over Z[ω] as
p = (a + bω)((a − b) − bω).
Some non-real Eisenstein primes are
2 + ω, 3 + ω, 4 + ω, 5 + 2ω, 6 + ω, 7 + ω, 7 + 3ω.
Up to conjugacy and unit multiples, the primes listed above, together with 2 and 5, are all the Eisenstein primes of absolute value not exceeding 7.
As of February 2023, the largest known real Eisenstein prime is the ninth largest known prime 10223 × 231172165 + 1, discovered by Péter Szabolcs and PrimeGrid.[2] All larger known primes are Mersenne primes, discovered by GIMPS. Real Eisenstein primes are congruent to 2 mod 3, and all Mersenne primes greater than 3 are congruent to 1 mod 3; thus no Mersenne prime is an Eisenstein prime.
Eisenstein series
The sum of the reciprocals of all Eisenstein integers except 0 raised to the sixth power can be expressed in terms of the gamma function:
$\sum _{z\in \mathbf {E} \setminus \{0\}}{\frac {1}{z^{6}}}=G_{6}\left(e^{\frac {2\pi i}{3}}\right)={\frac {\Gamma (1/3)^{18}}{8960\pi ^{6}}}$
where $\mathbf {E} $ are the Eisenstein integers and $G_{6}$ is the Eisenstein series of weight 6.[3]
Quotient of C by the Eisenstein integers
The quotient of the complex plane C by the lattice containing all Eisenstein integers is a complex torus of real dimension 2. This is one of two tori with maximal symmetry among all such complex tori. This torus can be obtained by identifying each of the three pairs of opposite edges of a regular hexagon. (The other maximally symmetric torus is the quotient of the complex plane by the additive lattice of Gaussian integers, and can be obtained by identifying each of the two pairs of opposite sides of a square fundamental domain, such as [0,1] × [0,1].)
See also
• Gaussian integer
• Cyclotomic field
• Systolic geometry
• Hermite constant
• Cubic reciprocity
• Loewner's torus inequality
• Hurwitz quaternion
• Quadratic integer
• Dixon elliptic functions
Notes
1. Both Surányi, László (1997). Algebra. TYPOTEX. p. 73. and Szalay, Mihály (1991). Számelmélet. Tankönyvkiadó. p. 75. call these numbers "Euler-egészek", that is, Eulerian integers. The latter claims Euler worked with them in a proof.
2. "Largest Known Primes". The Prime Pages. Retrieved 2023-02-27.
3. "Entry 0fda1b - Fungrim: The Mathematical Functions Grimoire". fungrim.org. Retrieved 2023-06-22.
External links
• Eisenstein Integer--from MathWorld
Algebraic numbers
• Algebraic integer
• Chebyshev nodes
• Constructible number
• Conway's constant
• Cyclotomic field
• Eisenstein integer
• Gaussian integer
• Golden ratio (φ)
• Perron number
• Pisot–Vijayaraghavan number
• Quadratic irrational number
• Rational number
• Root of unity
• Salem number
• Silver ratio (δS)
• Square root of 2
• Square root of 3
• Square root of 5
• Square root of 6
• Square root of 7
• Doubling the cube
• Twelfth root of two
Mathematics portal
Systolic geometry
1-systoles of surfaces
• Loewner's torus inequality
• Pu's inequality
• Filling area conjecture
• Bolza surface
• Systoles of surfaces
• Eisenstein integers
1-systoles of manifolds
• Gromov's systolic inequality for essential manifolds
• Essential manifold
• Filling radius
• Hermite constant
Higher systoles
• Gromov's inequality for complex projective space
• Systolic freedom
• Systolic category
| Wikipedia |
Rapid computation and visualization of data from Kano surveys in R
Reynir S. Atlason1,2 &
Davide Giacalone ORCID: orcid.org/0000-0003-2498-06323
The Kano model for user satisfaction is a popular survey-based method used by product designers to prioritize the inclusion and implementation of product features based on users' requirements. Despite its overall simplicity, a current drawback of the Kano approach is that the data analysis and processing of users' responses is laborsome and rather prone to human error. To address this drawback, this paper provides and presents a complete code to conduct a rapid yet comprehensive computation and visualization of Kano data in R.
A detailed walkthrough of the code is provided, together with a sample dataset to demonstrate its functionality. The code is encapsulated on a simple function that can substantially decrease the time for evaluating Kano results, speeding up its application in the context of product development.
A Kano survey is a popular tool in product design to inform decisions on whether to implement a particular feature in a product, and to which degree, based on perceived user' needs, sometimes referred to as "functional requirements" (FRs) [1]. The basic tenet of the Kano approach is that product features affect user satisfaction differently, with some of these relationships being linear while others non-linear (Fig. 1). Depending on the user' responses, the Kano model classifies FRs in three main classes [2]. The first class is called "Must-be requirements": features which, when not fulfilled, cause dissatisfaction in the customers. The presence of such requirements is generally taken for granted, but their increasing implementation will, in and of itself, not increase the user satisfaction. For example, mobile phones customers would take for granted the possibility of connecting to the internet, but increasing the speed of the connection beyond a certain point would not impact their satisfaction significantly. The second class is known as "One dimensional requirements". Such features exhibits a linear relationship with customers' satisfaction: the more the feature is implemented in the product, the more satisfied the users become. Keeping with the mobile phone example, battery life could be an example of such class. The last class is referred to as "Attractive requirements", and is often the most sought after by product developers [2], as it includes features not expected by the users (and thus their exclusion from the product would not result in decreased satisfaction), but whose presence may increase the user satisfaction greatly. For example, the possibility of using a mobile phone as a virtual or augmented reality device might fall within that category. Additional FRs classes of the Kano model include "Indifferent" features, whose absence or presence does not affect user satisfaction, "Reverse" FRs, i.e. features whose absence increases user satisfaction, and "Questionable", when a user indicates that they like both the presence and the absence of a FR. It should be noted that the classification of FR in the Kano framework is subject to change over time (e.g., virtual reality on a phone may be an attractive FR requirement at the time of writing, but in time might become a one-dimensional and eventually a must-be as the technology reaches maturity), and it is also dependent on the context of usage of the products (e.g., seat-back screens on airplanes may be exciting on domestic flights but expected on long distance one), the characteristics of the users, etc.
Relationship between implementation of product features and user satisfaction
In product development, the Kano model may help designers solve potential trade-offs by showing which features maximize user satisfaction (see e.g., [3,4,5], and [6] for application examples). Software solutions for the analysis of these data are, however, very limited. As a result, processing and analyzing this type of data is currently very laborsome and prone to human error. Moreover, the lack of dedicated software solutions may significantly limit its applications in industrial product development. To address this gap, we present a complete R code for the rapid computation and visualization of Kano data, based on the modeling approach proposed by [7].
Quantitative Kano modelling
Though originally a qualitative method [1], quantitative extensions of the Kano model have been proposed in recent years to increase its actionability (e.g. [7] and [8]). A quantitative Kano survey contains questions about the FRs for a target product or service. For each feature, two questions are asked: one functional and one dysfunctional. For example, if being asked about the weight of a mobile phone, the user might be asked "If the phone is as light as a matchbox, how do you feel?", and then subsequently "If the phone is heavier than a matchbox, how do you feel?". Each question has five possible outcomes:
I like it that way,
It must be that way,
I am neutral,
I can live with it that way,
I dislike it that way.
User responses are then collected into a classification table, used to evaluate whether each FR is attractive, one-dimensional, must-be, indifferent, reverse or questionable (Table 1).
Table 1 Evaluation matrix for classification
After classifying the features, we calculate two values for each of them: user satisfaction (CS) and user dissatisfaction (DS). Those values represent, respectively, the user satisfaction when a FR is fully implemented (CS), and dissatisfaction when a FR is completely excluded (DS). The CS value can be expressed as follows [7]:
$$\begin{aligned} CS_i=\frac{f_A+f_O}{f_A+f_O+f_M+f_i} \end{aligned}$$
where \(f_A\) denote the number of attractive, \(f_O\) the number of one-dimensional, \(f_M\) the number of must-be and \(f_I\) indifferent responses. Similarly, the following equation can be used to calculate the DS value:
$$\begin{aligned} DS_i=\frac{f_O+f_M}{f_A+f_O+f_M+f_i} \end{aligned}$$
Subsequently, two points are located for each FR, which can be plotted as (1, \(CS_i\)) and (0, \(-DS_i\)) [7]. Again, these points define, respectively, the user satisfaction when the feature is fully implemented or fully excluded from the product. To find the relationship functions with user satisfaction, one must first identify if the FR is a must-be, one-dimensional or attractive. This is done straightforwardly by considering the mode of the users' answers for that particular FR. The relationship function can be written as \(S=f(x,a,b)\), where S is the user satisfaction, x the level of fulfilment, a and b are the adjustment parameters for the Kano categories of user requirements.
For one-dimensional FRs the function is \(S=a_{1}x+b_1\) where \(a_1\) denotes the slope and \(b_1\) is the intercept, denoting the DS value when \(x=0\). Entering CS and DS points, as previously calculated, into the equation we get \(a_1=CS_i+DS_i\) and \(b_1=DS_i\). Therefore, the function for one-dimensional product features can be written as follows [7]:
$$\begin{aligned} S_i=(CS_i-DS_i)x_i+DS_i \end{aligned}$$
If the feature is an attractive one, the function is instead considered exponential (Fig. 1), and expressed as \(S=a_2e^x+b_2\). We now get \(a_2=\frac{CS_i-DS_i}{e-1}\) and \(b_2=\frac{-CS_i-DS_i}{e-1}\). We can therefore see that the function for such FRs is [7]:
$$\begin{aligned} S_i=\frac{CS_i-DS_i}{e-1}e^{x_1}-\frac{CS_i-eDS_i}{e-1} \end{aligned}$$
For must-be FRs, the function can also be estimated using an exponential function, which in this is \(S=a_3(e^x+b_3\)). We then acquire \(a_3\) and \(b_3\) by using \(a_3=\frac{e(CS_i-DS_i)}{e-1}\) and \(b_3=\frac{eCS_i-DS_i}{e-1}\). The functions for must-be FRs can therefore be plotted as follows [7]:
$$\begin{aligned} S_i=-\frac{e(CS_i-DS_i)}{e-1}e^{-x}+\frac{eCS_i-DS_i}{e-1} \end{aligned}$$
Computation in R
In this section, a R function (called kano) for conducting the analysis explained above is proposed. The kano function does three main things: (1) classification of product features into Kano classes (Table 1), (2) calculation of CS and DS values, and (3) plotting of relationships functions between individual product features and customers' satisfaction. To provide a reproducible example, we consider a dataset containing Kano data for six features of a hypothetical product. The present section provides a step-by-step walkthrough of its analysis in R. Both the dataset and the code used for the analysis are provided as Additional files 1 and 2.
Data should be imported as a n*2 dataset. The columns in Additional file 1 dataset consist of the functional and dysfunctional answers, sequentially listed for all FRs and respondents. The answers stored using numerical values from 1 to 5 matching those given in Table 1 (1 = Like, 2 = Must-be, 3 = Neutral, 4 = Live with, 5 = Dislike).
The kano function is of the form function(dataset,FR), meaning that the only input required by the user is to specify the name of the dataset and the number of product features under study. In our example, after importing the data the user can simply run the following code kano(dataset=data,FR=6) (or even simply kano(data,6)) to conduct the analysis. The function prints the output in the console, exports the numerical results to three .csv files, and graphs the functions in the R viewer. Note that the function cannot handle missing data and will return an error if it finds any.
Detailed walkthrough
After loading some packages needed using library, the first portion of the function runs some diagnostic checks on the dataset. Namely, it checks that the data is of class data.frame, that it does not contain missing values, and that the number of FRs given by the user is correct. If any of these conditions is violated, the function will stop and return a error message explaining the problem to the user.
If there are no issues with the data, the function starts by creating two sequences of numbers (x and y) that will be used for calculating the functions (Eqs. 3–5).
It then creates a classification table equivalent to Table 1:
The next portion of the code uses the evaluation table to classify each user's combination of functional and dysfunctional answers, and converts the evaluated answers into a dataframe:
Then, the classified data is merged with the original dataset (which now includes the classification of the FRs). We now count the entries for each class and isolate the mode (i.e. the answer with the highest frequency of occurrence) for each FR, and we use this as criterion for classifying each FR into a Kano class:
From the classified data, CS and DS values associated with each FR are calculated using the following two for loops:
We have now finished the classification, and store the results in the splitted data frame. We then move on to calculate the Must-be function (Eq. 5). The will return a function if and only if there actually are any FRs for which the mode is "M":
We then calculate the function for One-dimensional FRs (Eq. 3). Again, this will happen only if there actually are any FRs for which the mode is "O".
Finally, we calculate the function for Attractive FRs (Eq. 4). Again, the code returns only if it finds FRs whose mode is "A".
Using the following two for loops, we look for Indifferent and Reverse FRs (since these FRs are not typically of interest to product developers, these are only located but not plotted).
As we have located and collected all data points for the One-dimensional, Must-be and Attractive FRs, we can plot them:
The results are written in three different .csv files (one containing A, M and O results, the others containing Indifferent and Reverse FRs), and printed in the R console. R provides first plottable results (for Attractive, Must-be and One-dimensional requirements), then lists Indifferent and Reverse FRs. The results are printed in a data frame where the left column states the number of FRs, the middle column states the classification of the FR, while the two rightern most columns show the values of the functions when \(x=1\) and \(x=0\) (meaning that the FR is fully implemented or fully excluded, respectively).
In this sample dataset, one FR ("1") is found to be indifferent, and none to be reverse. Three FRs are Must-be ("2", "3", and "5"), one is One-dimensional ("4"), and one is attractive ("6").
The kano function provides a plot for rapid visualization of the individual functions relating the degree of implementation of each FR to user satisfaction (Fig. 2). Together with the numerical results, such plot can be a useful tool to aid product developers in deciding on which FRs should be prioritized and to which degree they should be implemented. For example, Fig. 2 suggests that four of the original six FRs ("2", "3", "4", and "5") should definitely be included, as not doing so (corresponding to \(x=0\) in Fig. 2) would results in great dissatisfaction for the users. However, with respect to degree of implementation, the one One-dimensional FR ("4") is clearly the one it makes more sense to maximize, whereas for the three Must-be attributes we can see diminishing returns in terms of user satisfaction with increasing implementation (Fig. 2). Lastly, we have the one Attractive FR ("6"), whose exclusion (\(x=0\)) would not (much) reduce satisfaction, but its inclusion would result in a marked rise in user satisfaction. Clearly including FR "6", even to a limited degree, would be a good idea and could also help with differentiating that product from others within the same category.
Individual functions relating the degree of implementation of each FR to user satisfaction plotted in the sample dataset
This short paper presented a code for rapid computation and visualization of quantitative Kano data in R, packaged in a simple function (kano(data,FR)) that only requires the user to specify the name of the dataset and the number of FRs to be evaluated. As demonstrated in the worked example, the function allows practitioners to (1) classify FRs according to the Kano framework, (2) compute CS and DS values associated with each FR, (3) compute functions relating each FR to user satisfaction, and (4) plot the results for rapid inspection and visualization. It can assist practitioners and product developers to make informed decisions on which FRs should be implemented (and to which degree) based on Kano results, as well as to make the analysis of this type of data faster and less cumbersome.
This paper only presents a fictional dataset. For examples of real applications of the code in product development context, the reader is referred to two recent papers [3, 4]. Our kano function is based the computational approach proposed by [7], whereas alternatives algorithms for classifying Kano attributes, e.g. [8], are not considered. Finally, the function only considers Kano results at an aggregated level, without possibility for segmentation. The possibility to link results to the user background, as recently proposed in [3], should be a welcome development of the present code.
Kano N, Seraku N, Takahashi F, Tsuji S. Attractive quality and must-be quality. J Jap Soc Qual Control. 1984;14:2.
Sauerwein E, Bailom F, Matzler K, Hinterhuber HH. The kano model: how to delight your customers. Int Work Sem Prod Econ. 1996;1(4):313–27 Innsbruck.
Atlason RS, Stefansson AS, Wietz M, Giacalone D. A rapid kano-based approach to identify optimal user segments. Res Eng Des. 2018;29(3):459–67.
Atlason RS, Giacalone D, Parajuly K. Product design in the circular economy: users' perception of end-of-life scenarios for electrical and electronic appliances. J Cleaner Prod. 2017;168:1059–69.
von Dran G, Zhang P, Small R. Quality websites: an application of the kano model to website design. In: AMCIS 1999 proceedings. 1999. p. 314.
Lehtola L, Kauppinen M. Suitability of requirements prioritization methods for market-driven software product development. Softw Process. 2006;11(1):7–19.
Wang T, Ji P. Understanding customer needs through quantitative analysis of Kano's model. Int J Qual Reliab Manag. 2010;27(2):173–84.
Xu Q, Jiao RJ, Yang X, Helander M, Khalid HM, Opperud A. An analytical kano model for customer need analysis. Des Stud. 2009;30(1):87–110.
RSA developed the software code. DG contributed to the conceptualization and writing of the manuscript together with RSA. All authors read and approved the final manuscript.
The code and the sample datasets used in the paper are provided as additional files.
SDU Life Cycle Engineering, Dept. of Chemical Engineering, Biotechnology and Environmental Technology, University of Southern Denmark, Campusvej 55, 5230, Odense, Denmark
Reynir S. Atlason
Circular Solutions ehf., Ljósakur 6, 210, Gardabaer, Iceland
SDU Innovation and Design Engineering, Dept. of Technology and Innovation, University of Southern Denmark, Campusvej 55, 5230, Odense, Denmark
Davide Giacalone
Correspondence to Davide Giacalone.
R code to perform the analysis explained in the paper.
Sample dataset used in this paper.
Atlason, R.S., Giacalone, D. Rapid computation and visualization of data from Kano surveys in R. BMC Res Notes 11, 839 (2018). https://doi.org/10.1186/s13104-018-3945-x
Kano model | CommonCrawl |
The gradual evolution of buyer–seller networks and their role in aggregate fluctuations
Ryohei Hisano1,
Tsutomu Watanabe2,
Takayuki Mizuno3,
Takaaki Ohnishi1 &
Didier Sornette4
Applied Network Science volume 2, Article number: 9 (2017) Cite this article
Buyer–seller relationships among firms can be regarded as a longitudinal network in which the connectivity pattern evolves as each firm receives productivity shocks. Based on a data set describing the evolution of buyer–seller links among 55,608 firms over a decade and structural equation modeling, we find some evidence that interfirm networks evolve reflecting a firm's local decisions to mitigate adverse effects from neighbor firms through interfirm linkage, while enjoying positive effects from them. As a result, link renewal tends to have a positive impact on the growth rates of firms. We also investigate the role of networks in aggregate fluctuations.
The interfirm buyer–seller network is important from both the macroeconomic and the microeconomic perspectives. From the macroeconomic perspective, this network represents a form of interconnectedness in an economy that allows firm-level idiosyncratic shocks to be propagated to other firms1. Previous studies has suggested that this propagation mechanism interferes with the averaging-out process of shocks, and possibly has an impact on macroeconomic variables such as aggregate fluctuations (Acemoglu et al. 2011; Acemoglu et al. 2012; Carvalho 2014; 2007; Shea 2002; Foerster et al. 2011; Malysheva and Sarte 2011). From the microeconomic perspective, a network at a particular point of time is a result of each firms link renewal decisions in order to avoid (or share) negative (or positive) shocks with its neighboring firms. These two views of a network is related by the fact that both concerns propagation of shocks. The former view stresses the fact that idiosyncratic shocks propagates through a static network while the latter provides a more dynamic view where firms have the choice of renewing its link structure in order to share or avoid shocks. The question here is that it is not clear how the latter view affects the former view. Does link renewal increase aggregate fluctuation due to firms forming new links that conveys positive shocks or does it decrease aggregate fluctuation due to firms severing links that conveys negative shocks or does it have a different effect?
It is important to stress the fact that previous research, in macroeconomics as listed above, has implicitly assumed a static link structure where link renewal does not take place. However, anecdotal evidence suggest that firms may renew their link structure in order to avoid negative shocks and share positive shocks with their neighboring firms. For instance, in the financial crisis of 2008 many banks were reported to sever its links with bad performing firms while forming new links to better performing firms. If these decisions took place broadly then shocks would not propagate as the previous papers have suggested.
To investigate the trade–off between the propagation of shocks and link renewal, we conduct an empirical analysis on the effect of link renewal on the overall growth rate of an economy. Our analysis is novel in the sense that we take the link renewal aspect of the network explicitly into account. This is performed by employing a firm level data instead of sectoral level data. Due to data availability, we use a firm-level dataset from Japan where we have both network data as well as log growth rate of each firms over a decade. We hope that similar results holds for other countries as well.
Using the unique dataset, we take structural equation modeling to estimate the effect of link renewal on the overall growth rate of a network. Our model can be seen as a firm-level variant of the multi-sector model of (Long and Plosser 1983), which is canonical in the business-cycle literature. After estimation of the structural parameters, wherein we discuss the results and identification issues, the effect of link renewal is estimated by performing a counterfactual analysis of the propagation of shocks. Specifically, the analysis is performed by first estimating the individual shocks using the estimated structural model and then propagating the shocks back using networks from different years and comparing the consequences. From this excercise our first result shows that the current network is often the best network configuration, which optimizes both the propagation of positive shocks and the avoidance of negative shocks compared with previous networks. Furthermore, we show that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. This is explained by the asymmetry in cost between severing a link and link formation. It is easier to sever an existing link when one's neighbor faces negative shocks than to form a new link, or a new path to distant targeted nodes, in the opposite case. We then provide some evidence that link renewal has a positive effect of increasing the average growth rate of firms, thereby answering to the main question of the paper. Finally, by comparing the average log growth rate for each year and the average individual shocks estimated from our model, we show that at least 37% of the aggregate fluctuations can be explained by the network effect.
The rest of the paper is organized as follows. In "Introduction" section, we summarize the basic notation used throughout the paper. We also offer a brief description of the dataset used in the paper and provide a basic descriptive analysis. "Data and notation" section presents the structural model. "Model" section illustrates our inference procedure and presents the estimation results. We also discuss identification issues. In "Estimation" section, we use the model to perform counterfactual analysis of the propagation of shocks and address the gradual evolution of the network. "Counterfactual analysis of propagation of shocks" section addresses the impact of the interfirm buyer–seller network on aggregate fluctuations. "Network effect on aggregate fluctuations" section concludes.
Data and notation
The network and financial data used in this paper are from the Teikoku Data Bank2. These data are based on questionnaires completed by more than 100,000 firms in Japan for the accounting years 2003 to 2012. We use a subset of this data where we have both network and financial information throughout the 10-year period (i.e., 55,608 firms). In the questionnaires, firms are asked to name several (up to five) upstream and downstream firms with which they trade. This scheme is akin to the fixed rank nomination scheme used in social network analysis (Hoff et al. 2013).
We define two types of adjacency matrix: downstream and upstream. We denote by G the adjacency matrix describing the downstream network, where the downstream firms are listed in each row. Thus, it is reported by firm i that firm j buys from firm i if and only if G ij =1. H is defined similarly for the upstream adjacency matrix. When necessary, we use subscripts to indicate time points, so the buyer network for accounting year 2012 is denoted by G 2012. We could combine these two adjacency matrices and create matrices such that H=G T holds using interpolation of links. However, because the data do not include the weight (i.e. transaction volume) spurious links might be formed using this interpolation. To elaborate on this point, suppose that a stationery store sells a considerable number of pencils to firm A, which manufactures cars. From the stationery store's point of view, firm A is a major buyer that determines its sales revenue. However, from firm A's point of view, the stationery store is far less important than the upstream firm from which it purchases automobile parts for use in production. Because in this paper we focus on links that have strong relationships, we focus on the raw form without performing any interpolation of relations. It is worth noting that thus G does not equal its transpose of H. Table 1 summarizes some basic descriptive statistics concerning the log growth rate of firms during the period 2003–2012. Log growth rate is measured by \(log \frac {S(t)}{S(t-1)}\) where S(t) describes sales reported in each firms financial statement. It can be seen that the average log growth rate of firms fluctuates around 0, showing a moderate cycle. As stated previously, because we are using a subset of the data, 55,608 firms were used to calculate the average log growth each year. Table 2 summarizes the number of nonzero elements in the two adjacency matrices, as well as their evolution. It can be seen that, except for 2008, the numbers of links formed and severed have shown a steady evolution. It can also be seen that the overall number of links appears to be stable over time.
Table 1 Average log growth rate of firms and standard deviation
Table 2 Number of nonzero elements in the two adjacency matrices and the number of new links (nonzero elements) formed and severed in the two matrices
In Fig. 1, we present a contour plot showing the log growth rate of the following year (contour) to the current log growth rate (x-axis) and current size (y-axis) for each firm where the contour was estimated using two-dimensional splines. It can be seen that above 8.1 billion yen (i.e., exp(9)), there is a clear persistent pattern whereby a positive growth rate tends to be repeated, and vice versa.
(Color online) Contour plot showing the log growth rate of the following year to the current log growth rate and the current size for each firm. The contour was estimated using two-dimensional splines
One reason which could explain the irregular pattern among the small and medium sized firms (i.e. middle left and middle right area) is subisidiary firms, which are affected by decisions made by their parent company (e.g., participating in an absorption-type merger, corporate group restructuring). However, even ignoring this part of the data, it can be seen that overall, there seems to be a persistent pattern in the log growth rate of firms.
In Table 3, we show the proportions of positive and negative log growth rates of firms around newly formed and severed links. First-order, second-order, and third-order nodes are defined by the steps needed to reach the node from the newly formed or severed link. For the sake of clarity, a schematic diagram showing the first-order, second-order, and third-order nodes is provided in Fig. 2. Bold font in Table 3 indicates the cases where (i) the proportion of positive log growth rate of nodes is higher for newly formed links than severed links or (ii) the proportion of negative log growth rate of nodes is higher for severed links than newly formed links in a given year. It can be seen that for all years, the network tends to form links between nodes experiencing a positive firm-specific idiosyncratic shock (and vice versa). This provides our first insight into the connection between the log growth rate of firms and the link renewal process of the network.
Schematic diagram showing first-order, second-order, and third-order nodes of formed and severed links. Dashed lines indicate newly formed or severed links. The numbers represent the log growth rate of each firm
Table 3 Proportions of positive and negative log growth rates of firms around a newly formed or severed link
The model that we use in this paper is
$$\begin{array}{@{}rcl@{}} \left(I-\beta_{G}G_{t}-\beta_{H}H_{t} \right)y_{t}=\left(\beta_{LG}G_{t-1}+\beta_{LH}H_{t-1} \right)y_{t-1}+\gamma y_{t-1}+\epsilon_{t}, \end{array} $$
where y t denotes the growth rate of sales of each firm3 and ε t denotes the normal firm-specific idiosyncratic shock characterized by μ and σ. The intuition behind the model is that log growth rate of a firm could be broken down into three parts: economy–wide plus firm–level idiosyncratic shocks, lagged effect from previous year, and propagation effect from the interfirm buyer–seller network (both simultaneous and lagged). There are seven unknown parameters in total.
Our model can be seen as a firm-level variant of the multi-sector model of (Long and Plosser 1983), which is canonical in the business-cycle literature. In (Long and Plosser 1983), each sector (i.e., firm) is explicitly assumed to use materials produced by other sectors (i.e., firms), and these sectoral linkages represent interconnectedness in the economy, propagating idiosyncratic sector-specific shocks to other sectors. Previous works have used the multi-sector business cycle model to break down aggregate fluctuations down into aggregate economy–wide common shocks and sectoral shocks (Abe 2004; Foerster et al. 2011). These models have been used to shed light on aspects of sectoral growth and business cycles. The goal in this paper is to bring this model to the firm level studying the propagation of firm–level idiosyncratic shocks. The difference between (Long and Plosser 1983)'s sectoral-level and firm-level linkages lies in the link renewal process among firms. In a sectoral-level setting, if the total demand for goods from other sectors is kept the same, then the strength of the links with other sectors does not change. However, even in this case, the interfirm network structure might differ due to link renewal behaviors at the firm level. Our main goal in this is paper is to take this link renewal behavior explicitly into account.
The general consensus in macroeconomics has been that sector-specific shocks should average out over the entire economy based on Lucas's "diversification argument" (Lucas 1977). However, this view has recently been challenged from the network perspective by several authors (Shea 2002; Acemoglu et al. 2012; Acemoglu et al. 2011; Carvalho 2007) suggesting that in the presence of certain sectoral network structures, this argument may not apply. In particular, (Acemoglu et al. 2012) has shown that the rate of decay in aggregate fluctuations depends on the network structure governing interdependency among sectors. Our model is closely related to (Acemoglu et al. 2012), but much closer to (Shea 2002) in that we model effects from both upstream and downstream linkages. Our work is also related in spirit to (Foerster et al. 2011; Malysheva and Sarte 2011) in providing a systematic econometric analysis of the propagation of shocks and the relationship to aggregate fluctuations. The difference is that while (Foerster et al. 2011; Malysheva and Sarte 2011) focus on sectorial linkages, we focus more on micro connections in interfirm networks.
Inference of parameters is most easily performed using Bayesian inference (Westveld and Hoff 2011; Goldsmith-Pinkham and Imbens 2013). In our case, this is also due to the heavy computation involved in handling large amounts of network data. Using Eq. (1) and placing conjugate normal priors on β G ,β H ,β LG ,β LH ,γ, and μ 0, and a scaled inverse gamma prior on σ 0, y t obeys a multivariate normal distribution with
$$\begin{array}{@{}rcl@{}} \mu =\left(I-\beta_{G}G-\beta_{H}H \right)^{-1}\left(\mu_{0}+\left(\beta_{LG}G_{t-1}+\beta_{LH}H_{t-1}+\gamma I \right)y_{t-1}\right), \end{array} $$
$$\begin{array}{@{}rcl@{}} \quad\Sigma =\left(I-\beta_{G}G-\beta_{H}H \right)^{-1}\left(I-\beta_{G}G^{'}-\beta_{H}H^{'} \right)^{-1}\sigma_{0}^{2}. \end{array} $$
To perform maximum likelihood in this setting, it is necessary to calculate the determinant |Σ|, where Σ has size 55,6082 even when focusing our attention on just one year. The time complexity of calculating this determinant is cubic, making it impractical to evaluate when optimizing the likelihood4. The other term that involves heavy computation is the inverse matrix. We approximated the inverse matrix using the first 30 terms of the Neumann series (or power series) as in (Bramoulle et al. 2009).
The unknown parameters in our model are β G ,β H ,β LG ,β LH ,γ,μ 0, and σ 0. Bayesian inference was performed with diffuse priors (i.e. n o r m a l(0,100) for βs, γ and μ and s c a l e d−i n v e r s e−g a m m a(1,1) for σ 0), using Gibbs sampling of 10 years of data, which converged quite rapidly. A Markov chain of 10,500 iterations was generated, the first 500 of which were dropped as burn-in steps. We provide a trace plot of β G in Fig. 3. Other paratemers converged similarly. Thinning was performed every 10 steps, resulting in 1000 samples, which we used to approximate the joint posterior.
Trace plot of β G
Table 4 reports the posterior mean of the parameters along with 99% posterior confidence intervals. In general, all the parameters related to network effects are significantly different from 0, suggesting that the network effect is present as both a lag and a contemporaneous effect. The parameter γ being significantly positive implies that there is persistency in firms log growth rate as was expected from Fig. 2. The parameter μ 0 being slightly negative corresponds to the fact the overall Japan was shrinking during the period of analysis.
Table 4 Parameter estimates. Posterior mean and 99% posterior confidence intervals are reported
Identification issues resulting from measurement errors
Although the use of the log growth rate in analyzing network effects is due to stationarity concerns log differencing makes each variables noisier. Moreover sloppy reporting by small and medium-sized firms also contaminates the variable with additional measurement errors. Estimation of true regression parameters when all measurements have additional noise was studied by Frisch in the 1930s under the rubric of statistical confluence analysis (Frisch 1934; Hendry and Morgan 1989). Similar to its modern descendant, partial identification (Manski 2009; Tamer 2010), our results show that estimation of the structural parameters ignoring measurement error provides lower bounds on estimates of the true structural parameters.
While this argument may seem trivial at first, it is important when we estimate the effects of the interfirm buyer–seller network on aggregate fluctuations in "Network effect on aggregate fluctuations" section. As noted in the Introduction, since our interest is in aggregate fluctuation we are interested not in each firm's log growth rate, but in the average log growth rate of all firms in an economy at a particular year. Additional zero mean measurement errors for each firm disappear when we take the average of these growth rates, and thus have no impact on the overall dynamics of the average log growth rate. However, we are trying to estimate these underlying parameters from log growth rates including additional measurement errors. In this case, our estimated parameters (e.g., the parameter estimates reported in Table 4) would be different from the true structural parameters responsible for generating the aggregate fluctuations in the average log growth rate of firms.
Taking measurement errors into account, our observed log growth rate of firms is generated from
$$\begin{array}{@{}rcl@{}} \left(I-\beta_{G}G_{t}-\beta_{H}H_{t} \right)z_{t}=\left(\beta_{LG}G_{t-1}+\beta_{LH}H_{t-1} \right)z_{t-1}+\gamma z_{t-1}+\epsilon_{t}, \end{array} $$
$$\begin{array}{@{}rcl@{}} y_{t} = z_{t} + \eta_{t}, \end{array} $$
where the first equation models the network effect as in Eq. (1) and the second one models additional measurement errors. Assuming that η has mean 0 and a finite first moment, the law of large numbers guarantees that this additional measurement error cancels out in the aggregate.
Assuming that both ε t and η t are normally distributed random variables, it is obvious that there is a simple relationship between the parameter estimates ignoring this additional structure and the true parameters. The relationship is
$$\begin{array}{@{}rcl@{}} \theta_{apparent}=r*\theta_{true}, \end{array} $$
where r is defined as
$$\begin{array}{@{}rcl@{}} r:=\frac{var(\epsilon_{t})}{var\left(\epsilon_{t}\right)+var\left(\eta_{t}\right)}. \end{array} $$
Hence, our parameter estimates ignoring measurement errors, as in Table 4, give a scaled estimate of the true parameters.
This effect is confirmed by the following experiments. We first generate the underlying true log growth rates of firms using the actual network data with β G =0.06, β H =0.06, β LG =0.04, β LH =0.04, γ=−0.3, μ=0, and σ=0.3. Then, for each firm, we add additional noise η∼n o r m a l(0,0.15). Table 5 reports the posterior means of parameter estimates with and without this additional noise. We see that the parameters are scaled as predicted by Eq. (7).
Table 5 Parameter estimates with measurement errors
In summary, the analysis performed in this section have clarified that the estimated structural parameters only provide a lower bound on the true parameters. This was a result of identification issues concerning measurement errors. Hence the message here is that our evaluation of propagation of shocks, performed in the next sections using the estimated parameters, could only be seen as a lower bound concerning the true level of propagation in an economy.
Counterfactual analysis of propagation of shocks
To assess the nature of the evolving network, we perform counterfactual analysis of the propagation of shocks. We do this by the following procedure. Using a structural model describing the interfirm buyer–seller network, we estimate the structural firm-specific shocks for year t as
$$\begin{array}{@{}rcl@{}} e_{t} := \left(I-\beta_{G}G_{t}-\beta_{H}H_{t} \right)y_{t} \end{array} $$
where β G and β H are parameters, e t and y t are vectors, and the rest matrices. Using these estimates for all firms, we compute a firm's growth in a counterfactual world, assuming that the structure of the network is that of year t ′ instead of year t by
$$\begin{array}{@{}rcl@{}} y_{t'|t} := \left(I-\beta_{G}G_{t'}-\beta_{H}H_{t'} \right)^{-1}e_{t}. \end{array} $$
Note that y t|t (i.e., propagating shocks using the network from the same year as the log growth rate) is the same as y t . Comparing \(y_{t'|t}\phantom {\dot {i}\!}\) for different years enables us to ascertain what the log growth rate of firms might have been if the network structure was that of year t ′. Moreover, motivated by Table 3, we perform this analysis of evolving networks by separating the estimated e t s into positive shocks (i.e., \(e_{t}^{pos}\)) and negative shocks (i.e., \(e_{t}^{neg}\)) where we set all the values that are not positive in the former case or negative in the latter case to 0. We propagate each of these shocks in the network. Thus, \(y_{t'|t}\phantom {\dot {i}\!}\) is now replaced by
$$\begin{array}{@{}rcl@{}} y_{t'|t}^{pos} := \left(I-\beta_{G}G_{t'}-\beta_{H}H_{t'} \right)^{-1}e_{t}^{pos} \end{array} $$
for positive shocks and
$$\begin{array}{@{}rcl@{}} y_{t'|t}^{neg} := \left(I-\beta_{G}G_{t'}-\beta_{H}H_{t'} \right)^{-1}e_{t}^{neg} \end{array} $$
for negative shocks. We assume that the structural parameters are fixed and set them as β G =0.06 and β H =0.05. As before, we approximated the inverse matrix using the first 30 terms of the Neumann series (or power series) as in (Bramoulle et al. 2009) to speed up calculations.
Comparing \(y_{t'|t}^{pos}\phantom {\dot {i}\!}\) and \(\phantom {\dot {i}\!}y_{t'|t}^{neg}\) for different years enables us to compare the propagation (avoidance) performance of each network in the face of positive and negative shocks that arrived in year t. Figures 4 and 5 show the results of comparing the standard deviation of \(y_{t'|t}^{pos}\phantom {\dot {i}\!}\) and \(\phantom {\dot {i}\!}y_{t'|t}^{neg}\) for all years. It can be seen that the current network is often the best network configuration, which optimizes both the propagation of positive shocks and the avoidance of negative shocks compared with past networks. Furthermore, we see that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. We also note that the improvement caused by rewiring the network just after the shock has arrived is higher for negative shocks than for positive shocks.
Standard deviation of each \(y_{t'|t}^{neg}\)s for years 2003 to 2012. The horizontal red line denotes the standard deviation in the year analyzed (i.e., t). a 2003; b 2004; c 2005; d 2006; e 2007; f 2008; g 2009; h 2010; i 2011; j 2012
Standard deviation of each \(y_{t'|t}^{pos}\)s for years 2003 to 2012. The horizontal blue line denotes the standard deviation in year t. a 2003; b 2004; c 2005; d 2006; e 2007; f 2008; g 2009; h 2010; i 2011; j 2012
This is quite an interesting result, and is worth elaborating. The main reason is the asymmetry between forming and severing links. Severing a link, and often switching to better (but not necessarily the best) nodes, is easier than forming a link targeting good (if not the best) nodes facing positive shocks. This is because the latter requires additional search costs and negotiation time for the two firms to reach agreement. Further, because of the existence of layers (or a hierarchical structure) in the network, creating a path to distant nodes with which one is unable to form a direct link is a complex task that requires decisions by one's neighbors. For example, if a firm wants to buy automobile parts that use a certain high-quality metal, it has to find an automobile parts manufacturer that uses the metal in their own production or wait until some automobile parts manufacturer starts using the metal in their own production. Given this basic limitation governing the microeconomic link renewal process of firms, link formation can only evolve gradually in response to newly arrived shocks. The view of local rewiring of links is also shared with works in social networks such as (Mele 2010; Krivitsky and Handcock 2014).
If there was a hypothetical social planner that could rewire all the network structures in an economy to an optimal state, the behavior summarized in this section would not take place. However, in reality, microscopic connectivity patterns are determined by each agent's decisions to avoid negative shocks and share positive shocks. These decisions are made based on local information which each firms gathers without having access to the full picture of the global state of the network. Moreover, apart from the fact that they only have access to local information, there is asymmetry in cost between forming and severing links which also contributes to the gradual process of link renewal. The analysis performed in this section provides some insights into the gradual evolution process, suggesting how the decentralized myopic decisions of individual firms gradually lead to an improvement in the overall state of the network.
Network effect on aggregate fluctuations
Using the parameters reported in the previous section, we estimate the role of networks in aggregate fluctuations by comparing the average log growth rate of firms (i.e., y t ) and the average shocks for individual firms (i.e., e t ). For each year, we calculate e t s by
$$\begin{array}{@{}rcl@{}} e_{t} := \left(I-\beta_{G}G_{t}-\beta_{H}H_{t} \right)y_{t} - \left(\beta_{LG}G_{t-1} + \beta_{LH}H_{t-1} \right)y_{t-1} - \gamma y_{t-1}. \end{array} $$
The average e t is used as the average shock for individual firms. We also simulate each firms log growth rate assuming that there was no link renewal during the whole period of study. This is performed by using Eq. (9), setting t ′ as 2003. The average value of \(y_{t'|t}\phantom {\dot {i}\!}\) is used as the average log growth rate in the counterfactual world assuming that no link renewal took place during the whole period of study.5
Figure 6 shows the results. By comparing the case when there is link renewal (black rectangles) and without link renewal (blue square), we see that the average log growth rate shifts downwards when there is no link renewal. This was expected because as was seen in the previous sections link renewal has two effects. One trying to mitigate negative shocks from propagating and one trying to share positive shocks with their neighboring firms. In recession period, link renewal is more motivated by the former process making the black circles higher than the blue squares (because by link renewal the network succeeded in mitigating negative shocks). While in boom period, link renewal is motivated more by the latter process also making the black circle higher than the blue squares (because by link renewal the network succeeded in sharing positive shocks).
Time series of average log growth rates (black circles), average shocks for individual firms (red triangles) and simulated average log growth rate assuming that there was no link renewal (blue square) for years 2004 to 2012
Figure 7 shows the cumulative average log growth rate of each of the cases depicted in Fig. 6. Comparing the cases when link renewal take place (black circles) and when firms are connected and without link renewal (blue square) in Fig. 7, we see that on average firm growth rate is 0.0027 higher when there is link renewal.6 Hence we conclude that link renewal has the positive effect of increasing the average log growth rate of an economy by effectively mitigating negative shocks and sharing positive shocks among firms.
Time series of cumulative average log growth rate (black circles), cumulative average shocks for individual firms (red triangles) and simulated average log growth rate assuming that there was no link renewal (blue square) for years 2004 to 2012
We next investigate aggregate fluctuation. Comparing the two cases when firms are not connected (red triangle) and connected (black circles) in Fig. 6, we see that the average log growth rate tends to fluctuate more when they are connected. It is worth emphasizing that we only have nine data points in the calculation. Nevertheless, the estimated standard deviation of the fluctuation is 0.023, while that of the original average log growth rate of firms is 0.037. Thus, the network effect on aggregate fluctuations can be calculated as 1−0.023/0.037, which is around 37%. Note that as discussed in "Estimation" section, the estimated structural parameters provide a lower bound as a result of identification issues concerning measurement errors. Therefore, we conclude that at least 37% of the aggregate fluctuations can be explained by the network effect.7
It is also worth noting that this figure is similar to that in (Foerster et al. 2011), who studied variability in log growth of the IP index in the United States and showed that, after the great moderation, 50% of the variability in log growth of the IP index could indeed be explained by sectoral linkages.
In order to answer the question concerning the trade-off between propagation of shocks and link renewal in the interfirm buyer–seller network, we provided an empirical analysis on the effect of link renewal on the overall growth rate of an economy. To this aim we used a firm-level dataset from Japan where we have both network data as well as log growth rate of fimrs over a decade. Using the unique dataset, we took structural equation modeling to estimate the effect of link renewal. By means of counterfactual analysis, we first showed that the current network is often the best network configuration which optimizes both the propagation of positive shocks and avoidance of negative shocks compared with previous networks, perhaps reflecting each firms motivation to avoid other's negative shocks and share other's positive shocks. We then showed that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. This asymmetric behavior was explained by the asymmetry in cost between severing and forming links. We then provided some evidence that link renewal has a positive effect of increasing the average growth rate of firms at the macroeconomic level answering to the main motivation of the paper. Last but not least, as a bonus of our structural equation modeling, we also showed that at least 37% of the aggregate fluctuations can be explained by the network effect. This is in line with previous research which focused on sectoral linkages such as (Foerster et al. 2011).
1 Examples of firm–level idiosyncratic shock includes: productivity shocks stemming from successful innovations, discovery of new export destination, changes in capacity utilization including strikes and supply shock such as sudden change in raw material prices. It should not be confused with economy–wide shocks such as inflation, wars and policy shocks.
2 http://www.tdb.co.jp/index.html.
3 Which is defined by the difference of logarithm of sales between two consecutive years.
4 It took about 5–8 h to calculate this term on a modern desktop computer using the fully optimized software (Danny et al. 2010)
5 To be more precise we are assuming that the network stayed as that of year 2003 during the whole period of study 2004-2012.
6 This is calculated by taking the mean of \(y_{t} - y_{t'|t}\phantom {\dot {i}\!}\).
7 As could be suspected by Fig. 6 the number only slightly changes when comparing the case when there is no link renewal to the case when firms are not connected at all.
Abe, N (2004) The multi-sector business cycle model and aggregate shocks: An empirical analysis. Japan Econ Rev 55(1): 101–118. doi:10.1111/j.1468-5876.2004.00296.x.
Acemoglu, D, Ozdaglar A, Tahbaz-Salehi A (2011) The network origins of large economic downturns. Working Paper 19230, National Bureau of Economic Research (July 2013). doi:10.3386/w19230.
Acemoglu, D, Carvalho VM, Ozdaglar A, Tahbaz-Salehi A (2012) The network origins of aggregate fluctuations. Econometrica 80(5): 1977–2016. doi:10.3982/ECTA9623.
Bramoulle, Y, Djebbari H, Fortin B (2009) Identification of peer effects through social networks. J Econ 150(1): 41–55. doi:10.1016/j.jeconom.2008.12.021.
Carvalho, V (2007) Aggregate fluctuations and the network structure of intersectoral trade. Economics Working Papers 1206, Department of Economics and Business, Universitat Pompeu Fabra.
Carvalho, VM (2014) From micro to macro via production networks. J Econ Perspect 28(4): 23–48. doi:10.1257/jep.28.4.23.
Danny, C, Gomes FM, Gomes FM, Computacional A, Sorensen DC (2010) ARPACK++: A C++ Implementation of ARPACK Eigenvalue Package.
Foerster, AT, Sarte P-DG, Watson MW (2011) Sectoral versus aggregate shocks: A structural factor analysis of industrial production. J Polit Econ 119(1): 1–38.
Frisch, R (1934) Statistical Confluence Analysis by Means of Complete Regression Systems. Universitetets okonomiske institutt, Oslo.
Goldsmith-Pinkham, P, Imbens GW (2013) Social networks and the identification of peer effects. J Bus Econ Stat 31(3): 253–264. doi:10.1080/07350015.2013.801251.http://dx.doi.org/10.1080/07350015.2013.801251
Hendry, DF, Morgan MS (1989) A re-analysis of confluence analysis. Oxf Econ Pap 41(1): 35–52.
Hoff, P, Fosdick B, Volfovsky A, Stovel K (2013) Likelihoods for fixed rank nomination networks. Netw Sci 1: 253–277.
Krivitsky, PN, Handcock MS (2014) A separable model for dynamic networks. J R Stat Soc Ser B 76(1): 29–46. doi:10.1111/rssb.12014.
Long, JB, Plosser CI (1983) Real business cycles. J Polit Econ 91(1): 39–69.
Lucas, RE (1977) Understanding business cycles. Carnegie-Rochester Conf Ser Public Policy 5(1): 7–29.
Malysheva, N, Sarte P-DG (2011) Sectoral disturbances and aggregate economic activity. Econ Q(2Q):153–173.
Manski, CF (2009) Identification for Prediction and Decision. Harvard University Press.
Mele, A (2010) A structural model of segregation in social networks. NET Institute Working Paper.
Shea, J (2002) Complementarities and comovements. J Money Credit Bank 34(2): 412–433.
Tamer, E (2010) Partial identification in econometrics. Ann Rev Econ 2(1): 167–195. doi:10.1146/annurev.economics.050708.143401.http://dx.doi.org/10.1146/annurev.economics.050708.143401
Westveld, AH, Hoff PD (2011) A mixed effects model for longitudinal relational and network data, with applications to international trade and conflict. Ann Appl Stat 5(2A): 843–872. doi:10.1214/10-AOAS403.
The authors are grateful to Hiroshi Iyetomi, Shoji Fujimoto and the seminar participants at RIETI for their helpful comments to the previous versions of the paper. We would also like to thank the two anonymous reviewers for their helpful comments.
The data used in this paper is a proprietary data provided by a private company called Teikoku Data Bank, Japan. Any interested readers should consult the comapny directly for further information concerning the access to the data.
RH designed research, conducted research, analyzed the data and wrote the paper. TW designed resarch and wrote the paper. TM designed research. TO designed resarch. DS desinged research and wrote the paper. All authors read and approved the final manuscript.
We confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
Social ICT Research Center, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8654, Japan
Ryohei Hisano & Takaaki Ohnishi
Graduate School of Economics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8654, Japan
Tsutomu Watanabe
Information and Society Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, 101-8430, Japan
Takayuki Mizuno
Department of Management, Technology and Economics, ETH ZürichSwiss Federal Institute of Technology, Scheuchzerstrasse 7, Zürich, 8092, Switzerland
Didier Sornette
Ryohei Hisano
Takaaki Ohnishi
Correspondence to Ryohei Hisano.
Hisano, R., Watanabe, T., Mizuno, T. et al. The gradual evolution of buyer–seller networks and their role in aggregate fluctuations. Appl Netw Sci 2, 9 (2017). https://doi.org/10.1007/s41109-017-0030-7
Interfirm buyer–seller networks
Aggregate fluctuations
Link renewal
Firm growth | CommonCrawl |
New stability result for a Bresse system with one infinite memory in the shear angle equation
Collision-avoidance and flocking in the Cucker–Smale-type model with a discontinuous controller
Mathematical modeling of algal blooms due to swine CAFOs in Eastern North Carolina
Amy Henderson 1, , Emek Kose 2,, , Allison Lewis 3, and Ellen R. Swanson 4,
St. Mary's College of Maryland, Department of Economics, St. Mary's City, MD 20686, USA
St. Mary's College of Maryland, Department of Mathematics and Computer Science, St. Mary's City, MD 20686, USA
Lafayette College, Department of Mathematics, Easton, PA 18042, USA
Centre College, Department of Mathematics, Danville, KY 40422, USA
* Corresponding author: [email protected]
Received January 2020 Revised August 2021 Early access December 2021
Figure(14) / Table(1)
Dramatic strides have been made in treating human waste to remove pathogens and excess nutrients before discharge into the environment, to the benefit of ground and surface water quality. Yet these advances have been undermined by the dramatic growth of Confined Animal Feeding Operations (CAFOs) which produce voluminous quantities of untreated waste. Industrial swine routinely produce waste streams similar to that of a municipality, yet these wastes are held in open-pit "lagoons" which are at risk of rupture or overflow. Eastern North Carolina is a coastal plain with productive estuaries which are imperiled by more than 2000 permitted swine facilities housing over 9 million hogs; the associated 3,500 permitted manure lagoons pose a risk to sensitive estuarine ecosystems, as breaches or overflows send large plumes of nutrient and pathogen-rich waste into surface waters. Understanding the relationship between nutrient pulses and surface water quality in coastal environments is essential to effective CAFO policy formation. In this work, we develop a system of ODEs to model algae growth in a coastal estuary due to a manure lagoon breach and investigate nutrient thresholds above which algal blooms are unresolvable.
Keywords: Algal blooms, dynamical systems, math modeling, nutrient threshold for resolvable blooms.
Mathematics Subject Classification: 34C60, 92B25.
Citation: Amy Henderson, Emek Kose, Allison Lewis, Ellen R. Swanson. Mathematical modeling of algal blooms due to swine CAFOs in Eastern North Carolina. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021151
D. F. Boesch, D. M. Anderson and R. A. Horner et al., Harmful algal blooms in coastal waters: Options for prevention, control, and mitigation, NOAA Coastal Ocean Program Decision Analysis Series, 10 (1997). Google Scholar
J. M. Burkholder et al., Impacts to a coastal river and estuary from rupture of a large swine waste holding lagoon, Journal of Environmental Quality, 26, (1997), 1451–1466. Google Scholar
J. Burkholder, B. Libra and P. Weyer et al., Impacts of waste from concentrated animal feeding operations on water quality, Environ. Health Persp., 115, (2007), 308–312. doi: 10.1289/ehp.8839. Google Scholar
E. J. Buskey and D. A. Stockwell, Effects of a persistent "brown tide" on zooplankton populations in the Laguna Madre of south Texas, Toxic Phytoplankton Blooms in the Sea, (1993), 659–666. Google Scholar
S. J. Du Plooy, N. K. Carrasco and R. Perissinotto, Effects of zooplankton grazing on the bloom-forming Cyanothece sp. in a subtropical estuarine lake, J. Plankton Res., 39 (2017), 826-835. doi: 10.1093/plankt/fbx039. Google Scholar
S. P. Epperly and S. W. Ross, Characterization of the North Carolina Pamlico-Albemarle estuarine complex, Estuarine Ecol., 1986. Google Scholar
J. A. Freund, S. Mieruch and B. Scholze et al., Bloom dynamics in a seasonally forced phytoplankton-zooplankton model: Trigger mechanisms and timing effects, Ecol. Complex., 3 (2006), 129-139. doi: 10.1016/j.ecocom.2005.11.001. Google Scholar
R. E. Fuhrman, History of water pollution control, J. Water Pollut. Con. F., 56 (1984), 306-313. Google Scholar
J. C. Goldman and E. Carpenter, A kinetic approach to the effect of temperature on algal growth, Limnol. Oceanogr., 19 (1974), 756-766. Google Scholar
S. M. Z. Hossain, N. Al-Bastaki and A. M. A. Alnoaimi et al., Mathematical modeling of temperature effect on algal growth for biodiesel application, Renewable Energy and Environ. Sustainability, 4 (2019), 517-528. doi: 10.1007/978-3-030-18488-9_41. Google Scholar
C. Hribar, Understanding concentrated animal feeding operations and their impact on communities, The National Assoc. of Local Boards of Health, 2010. Google Scholar
J. Kravchenko, S. H. Rhew and I. Akushevich et al., Mortality and health outcomes in North Carolina communities located in close proximity to hog concentrated animal feeding operations, NC Med. J., 79 (2018), 278-288. doi: 10.18043/ncm.79.5.278. Google Scholar
M. A. Mallin, Impacts of industrial animal production on rivers and estuaries: Animal-waste lagoons and sprayfields near aquatic environments may significantly degrade water quality and endanger health, Am. Sci., 88 (2000), 26-37. Google Scholar
S. Marino, I. B. Hogue, C. J. Ray and D. E. Kirschner, A methodology for performing globaluncertainty and sensitivity analysis in systems biology, J. Theor. Biol., 254 (2008), 178-196. doi: 10.1016/j.jtbi.2008.04.011. Google Scholar
D. F. Martin, M. T. Doij and C. B. Stackhouse, Biocontrol of the Florida red tide organism, Gymnodinium breve, through predator organisms, Environ. Lett., 4 (1973), 297-301. doi: 10.1080/00139307309435500. Google Scholar
W. D. McBride and N. Key, US hog production from 1992 to 2009: Technology, restructuring, and productivity growth, USDA Econ. Res. Report, 158 (2013). Google Scholar
A. Shirota, Red tide problem and countermeasures, Int. J. Aquaculture and Fisheries Tech., 1 (1989), 195-293. Google Scholar
J. B. Shukla, A. K. Misra and P. Chandra, Modeling and analysis of the algal bloom in a lake caused by discharge of nutrients, Appl. Math. Comput., 196 (2008), 782-790. doi: 10.1016/j.amc.2007.07.010. Google Scholar
V. H. Smith, Responses of estuarine and coastal marine phytoplankton to nitrogen and phosphorus enrichment, Limnol. Oceanogr., 51 (2006), 377-384. doi: 10.4319/lo.2006.51.1_part_2.0377. Google Scholar
K. A. Steidinger, A re-evaluation of toxic dinoflagellate biology and ecology, Prog. Phycol. Res., 2 (1983), 147-188. Google Scholar
M. Swinker, Human health effects of hog waste, NC Med. J., 59 (1998), 16-18. Google Scholar
J. M. Testa, Y. Li and Y. J. Lee et al., Quantifying the effects of nutrient loading on dissolved O2 cycling and hypoxia in Chesapeake Bay using a couple hydrodynamic-biogeochemical model, J. Marine Syst., 139 (2014), 139-158. Google Scholar
J. E. Truscott and J. Brindley, Ocean plankton populations as excitable media, Bull. Math. Biol., 56 (1994), 981-998. Google Scholar
S. Wing, D. Cole and G. Grant, Environmental injustice in North Carolina's hog industry, Environ. Health Persp., 108 (2000), 225-231. doi: 10.1289/ehp.00108225. Google Scholar
S. Wing, S. Freedman and L. Band, The potential impact of flooding on confined animal feeding operations in eastern north carolina, Environ. Health Persp., 110 (2002), 387-391. doi: 10.1289/ehp.02110387. Google Scholar
J. Zhao and Y. Yan, Dynamics of a seasonally forced phytoplankton-zooplankton model with impulsive biological control, Discrete Dyn. Nat. Soc., 2016 (2016). doi: 10.1155/2016/2560195. Google Scholar
What Are Phytoplankton?, Available from: https://oceanservice.noaa.gov/facts/phyto.html. Google Scholar
North Carolina Department of Environmental Quality: MajorHydro, Available from: http://data-ncdenr.opendata.arcgis.com/datasets/majorhydro. Google Scholar
North Carolina Department of Environmental Quality: List of Permitted Animal Facilities, Available from: https://deq.nc.gov/cafo-map. Google Scholar
TIGER/Line Shapefiles, Available from: https://www.census.gov/geographies/mapping-files/time-series/geo/tiger-line-file.html. Google Scholar
Zooplankton Vs. Phytoplankton, Available from: https://sciencing.com/zooplankton-vs-phytoplankton-5432413.html. Google Scholar
30] and the N.C. Department of Environmental Quality [28]">Figure 1. North Carolina Major River Systems Map produced using QGIS. Data obtained from the U.S. Census Bureau [30] and the N.C. Department of Environmental Quality [28]
30] and the N.C. Department of Environmental Quality [28,29]">Figure 2. Locations of swine CAFOs (brown dots) relative to major river basins which drain into the Pamlico-Albemarle Sound estuary. Map produced using QGIS. Data obtained from the U.S. Census Bureau [30] and the N.C. Department of Environmental Quality [28,29]
Figure 3. System dynamics of Model 1 with no additional nutrients added over a 30-day time period. With no additional nutrient influx, any current algal presence quickly resolves itself. As the algae dies out, the amount of dissolved oxygen in the system flourishes
Figure 4. System dynamics of Model 2 with no additional nutrients added over a 30-day time period. The presence of zooplankton in the system results in a quicker decline in the algae population (Day 5 comparison: $ A $ = 0.6127 $ \mu $g/L in Model 2, as opposed to $ A $ = 1.642 $ \mu $g/L in Model 1 - see Figure 3)
Figure 5. PRCC sensitivity scores for (a) Model 1 and (b) Model 2
Figure 6. Changes in the eigenvalue $ \lambda_2 $ corresponding to the equilibrium point $ (A,O,N) = (200, 1,150.0171) $, depending on $ \beta_{N} \text{ and } \mu_{AN} $
Figure 7. Bifurcation diagram of Model 1 relating the steady nitrogen levels to the average temperature at varying values of $ K_N $, the half-saturation constant for nutrient uptake
Figure 8. System dynamics of Model 1 with constant nutrient flow at $ 19.6 $ mg/L over a 60-day time period. The algal bloom is resolvable in this case
Figure 9. System dynamics of Model 1 with constant nutrient flow at $ 19.8 $ mg/L over a 60-day time period. The algal bloom is unresolvable in this case and we find that $ \lambda = 19.7 $ is a bifurcation value for Model 1
Figure 10. System dynamics of Model 2 with constant nutrient flow at $ 300 $ mg/L over a one year time period. The algal bloom is resolvable in this case
Figure 11. Model 1 under variable nutrient flow $ \lambda(t) = 20te^{-t/5} $ in a 60-day period
Figure 12. Long-term dynamics of Model 2 with variable nutrient flow term, $ \lambda(t) = 20te^{-t/5} $. Note that the dissolved oxygen population does recover from the initial hypoxia when the algal population eventually reaches zero
Figure 13. Long-term dynamics of Model 1 with two breaches, 3 months apart from each other, under variable nutrient flow, $ \lambda(t) = 20te^{-t/5} $
Table 1. Table of parameter descriptions and values. Where literature values are unavailable, parameters are estimated manually to produce behavior consistent with that which would be expected in model simulations
Name Description Estimate Units Reference
$A_0$ Arrhenius equation constant 5.35$\times 10^9$ days$^{-1}$ [9]
$E/R$ Activation energy/universal gas constant 6472 $^{\circ}$K [9]
$T$ Average air temperature 305.3722 $^{\circ}$K Estimated
$K_N$ Half-saturation constant for nutrient uptake 50.5226 mg/L [10]
$\delta_1$ Natural algal death rate 0.5 days$^{-1}$ [18]
$\delta_2$ Algal death rate due to overcrowding 0.01 L/$(\mu$g$\cdot$days) Estimated
$R_M$ Maximum specific predation rate 0.7 days$^{-1}$ [23]
$\alpha$ Governing rate for predation maximum achievement 5.7 $\mu$g$^2$/L$^2$ [23]
$q_0$ Constant influx of dissolved oxygen 6 days$^{-1}$ Estimated
$\delta_0$ Natural depletion rate of dissolved oxygen 1 days$^{-1}$ [18]
$\alpha_0$ Depletion rate of dissolved oxygen due to algae consumption 0.01 mg/$\mu$g Estimated
$\lambda(t)$ Nutrient flow due to spill event Variable mg/(L$\cdot$ days)
$\beta_N$ Influx rate of nutrients due to death of algae 0.2 mg/$\mu$g Estimated
$\mu_{AN}$ Consumption rate of nutrients by algae 0.5 mg/$\mu$g Estimated
$\gamma$ Production rate of zooplankton 0.05 Unitless [23]
$\delta_Z$ Natural death rate of zooplankton 0.017 days$^{-1}$ [23]
Christopher K.R.T. Jones, Bevin Maultsby. A dynamical approach to phytoplankton blooms. Discrete & Continuous Dynamical Systems, 2017, 37 (2) : 859-878. doi: 10.3934/dcds.2017035
Lorena Rodríguez-Gallego, Antonella Barletta Carolina Cabrera, Carla Kruk, Mariana Nin, Antonio Mauttone. Establishing limits to agriculture and afforestation: A GIS based multi-objective approach to prevent algal blooms in a coastal lagoon. Journal of Dynamics & Games, 2019, 6 (2) : 159-178. doi: 10.3934/jdg.2019012
M. Dambrine, B. Puig, G. Vallet. A mathematical model for marine dinoflagellates blooms. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 615-633. doi: 10.3934/dcdss.2020424
Florian Rupp, Jürgen Scheurle. Analysis of a mathematical model for jellyfish blooms and the cambric fish invasion. Conference Publications, 2013, 2013 (special) : 663-672. doi: 10.3934/proc.2013.2013.663
Laura Gardini, Iryna Sushko. Preface: Special issue on nonlinear dynamical systems in economic modeling. Discrete & Continuous Dynamical Systems - B, 2021, 26 (11) : i-iv. doi: 10.3934/dcdsb.2021241
Loïc Louison, Abdennebi Omrane, Harry Ozier-Lafontaine, Delphine Picart. Modeling plant nutrient uptake: Mathematical analysis and optimal control. Evolution Equations & Control Theory, 2015, 4 (2) : 193-203. doi: 10.3934/eect.2015.4.193
H. W. Broer, Renato Vitolo. Dynamical systems modeling of low-frequency variability in low-order atmospheric models. Discrete & Continuous Dynamical Systems - B, 2008, 10 (2&3, September) : 401-419. doi: 10.3934/dcdsb.2008.10.401
Dorota Bors, Robert Stańczy. Dynamical system modeling fermionic limit. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 45-55. doi: 10.3934/dcdsb.2018004
Jaewook Ahn, Sun-Ho Choi, Minha Yoo. Global wellposedness of nutrient-taxis systems derived by a food metric. Discrete & Continuous Dynamical Systems, 2021, 41 (12) : 6001-6022. doi: 10.3934/dcds.2021104
H. T. Banks, R. A. Everett, Neha Murad, R. D. White, J. E. Banks, Bodil N. Cass, Jay A. Rosenheim. Optimal design for dynamical modeling of pest populations. Mathematical Biosciences & Engineering, 2018, 15 (4) : 993-1010. doi: 10.3934/mbe.2018044
El Houcein El Abdalaoui, Sylvain Bonnot, Ali Messaoudi, Olivier Sester. On the Fibonacci complex dynamical systems. Discrete & Continuous Dynamical Systems, 2016, 36 (5) : 2449-2471. doi: 10.3934/dcds.2016.36.2449
Lianfa He, Hongwen Zheng, Yujun Zhu. Shadowing in random dynamical systems. Discrete & Continuous Dynamical Systems, 2005, 12 (2) : 355-362. doi: 10.3934/dcds.2005.12.355
Mauricio Achigar. Extensions of expansive dynamical systems. Discrete & Continuous Dynamical Systems, 2021, 41 (7) : 3093-3108. doi: 10.3934/dcds.2020399
Fritz Colonius, Marco Spadini. Fundamental semigroups for dynamical systems. Discrete & Continuous Dynamical Systems, 2006, 14 (3) : 447-463. doi: 10.3934/dcds.2006.14.447
John Erik Fornæss. Sustainable dynamical systems. Discrete & Continuous Dynamical Systems, 2003, 9 (6) : 1361-1386. doi: 10.3934/dcds.2003.9.1361
Vieri Benci, C. Bonanno, Stefano Galatolo, G. Menconi, M. Virgilio. Dynamical systems and computable information. Discrete & Continuous Dynamical Systems - B, 2004, 4 (4) : 935-960. doi: 10.3934/dcdsb.2004.4.935
Mădălina Roxana Buneci. Morphisms of discrete dynamical systems. Discrete & Continuous Dynamical Systems, 2011, 29 (1) : 91-107. doi: 10.3934/dcds.2011.29.91
Josiney A. Souza, Tiago A. Pacifico, Hélio V. M. Tozatti. A note on parallelizable dynamical systems. Electronic Research Announcements, 2017, 24: 64-67. doi: 10.3934/era.2017.24.007
Philippe Marie, Jérôme Rousseau. Recurrence for random dynamical systems. Discrete & Continuous Dynamical Systems, 2011, 30 (1) : 1-16. doi: 10.3934/dcds.2011.30.1
Tobias Wichtrey. Harmonic limits of dynamical systems. Conference Publications, 2011, 2011 (Special) : 1432-1439. doi: 10.3934/proc.2011.2011.1432
Amy Henderson Emek Kose Allison Lewis Ellen R. Swanson | CommonCrawl |
\begin{definition}[Definition:Euclid's Definitions - Book III/6 - Segment of Circle]
{{EuclidSaid}}
:''A '''segment of a circle''' is the figure contained by a straight line and a circumference of a circle.''
{{EuclidDefRef|III|6|Segment of Circle}}
\end{definition} | ProofWiki |
Pareto distribution
A continuous probability distribution with density
$$ p( x) = \left \{ \begin{array}{ll} \frac \alpha {x _ {0} } \left ( \frac{x _ {0} }{x} \right ) ^ {\alpha + 1 } , & x _ {0} < x < \infty , \\ 0, & x \leq x _ {0} , \\ \end{array} \right. $$
depending on two parameters $ x _ {0} > 0 $ and $ \alpha > 0 $. As a "cut-off" version the Pareto distribution can be considered as belonging to the family of beta-distributions (cf. Beta-distribution) of the second kind with the density
$$ \frac{1}{B( \mu , \alpha ) } \frac{x ^ {\mu - 1 } }{( 1+ x) ^ {\mu + \alpha } } ,\ \ \mu , \alpha > 0,\ \ 0 < x < \infty , $$
for $ \mu = 1 $. For any fixed $ x _ {0} $, the Pareto distribution reduces by the transformation $ x = x _ {0} /y $ to a beta-distribution of the first kind. In the system of Pearson curves the Pareto distribution belongs to those of "type VI" and "type XI" . The mathematical expectation of the Pareto distribution is finite for $ \alpha > 1 $ and equal to $ \alpha x _ {0} /( \alpha - 1) $; the variance is finite for $ \alpha > 2 $ and equal to $ \alpha x _ {0} ^ {2} /( \alpha - 1) ^ {2} ( \alpha - 2) $; the median is $ 2 ^ {1/ \alpha } x _ {0} $. The Pareto distribution function is defined by the formula
$$ {\mathsf P} \{ X < x \} = 1 - \left ( \frac{x _ {0} }{x} \right ) ^ \alpha ,\ \ x > x _ {0} ,\ \ \alpha > 0. $$
The Pareto distribution has been widely used in various problems of economical statistics, beginning with the work of W. Pareto (1882) on the distribution of profits. It is sometimes accepted that the Pareto distribution describes fairly well the distribution of profits exceeding a certain level in the sense that it must have a tail of order $ 1/x ^ \alpha $ as $ x \rightarrow \infty $.
[1] H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946)
[a1] N.L. Johnson, S. Kotz, "Distributions in statistics: continuous univariate distributions" , Houghton Mifflin (1970)
[a2] H.T. Davis, "Elements of statistics with application to economic data" , Amer. Math. Soc. (1972)
Pareto distribution. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Pareto_distribution&oldid=49651
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Retrieved from "https://encyclopediaofmath.org/index.php?title=Pareto_distribution&oldid=49651"
TeX auto | CommonCrawl |
\begin{document}
\preprint{}
\title {Qubit metrology for building a fault-tolerant quantum computer }
\author{John M. Martinis}
\email{[email protected]}
\affiliation{University of California, Santa Barbara } \affiliation{Google, Inc.}
\volumeyear{year} \volumenumber{number} \issuenumber{number} \eid{identifier} \date{\today}
\maketitle
\textbf{Recent progress in quantum information has led to the start of several large national and industrial efforts to build a quantum computer. Researchers are now working to overcome many scientific and technological challenges. The program's biggest obstacle, a potential showstopper for the entire effort, is the need for high-fidelity qubit operations in a scalable architecture. This challenge arises from the fundamental fragility of quantum information, which can only be overcome with quantum error correction \cite{Shor95}. In a fault-tolerant quantum computer the qubits and their logic interactions must have errors below a threshold: scaling up with more and more qubits then brings the net error probability down to appropriate levels $\sim 10^{-18}$ needed for running complex algorithms. Reducing error requires solving problems in physics, control, materials and fabrication, which differ for every implementation. I explain here the common key driver for continued improvement - the metrology of qubit errors.}
We must focus on errors because classical and quantum computation are fundamentally different. The classical NOT operation in CMOS electronics can have zero error, even with moderate changes of voltages or transistor thresholds. This enables digital circuits of enormous complexity to be built as long as there are reasonable tolerances on fabrication. In contrast, quantum information is inherently error prone because it has continuous amplitude and phase variables, and logic is implemented using analog signals. The corresponding quantum NOT, a bit-flip operation, is produced by applying a control signal that can vary in amplitude, duration and frequency. More fundamentally, the Heisenberg uncertainty principle states that it is impossible to directly stabilize a single qubit since any measurement of a bit-flip error will produce a random flip in phase. The key to quantum error correction is measuring qubit parities, which detects bit flips and phase flips in pairs of qubits. As explained in the text box, the parities are classical-like so their outcomes can be known simultaneously.
When parity changes, one of the two qubits had an error, but which one is not known. To identify, encoding must use larger numbers of qubits. This idea can be understood with a simple classical example, the 3-bit repetition code as described in Fig.\,\ref{f:rep}. Logical states 0 (1) are encoded as 000 (111), and measurement of parities between adjacent bits A-B and B-C allows the identification (decoding) of errors as long as there is a change of no more than a single bit. To improve the encoding to detect both order $n=1$ and $n=2$ errors, the repetition code is simply increased in size to 5 bits, with 4 parity measurements between them. Order $n$ errors can be decoded from $2n+1$ bits and $2n$ parity measurements.
\begin{figure}
\caption{ 3-bit classical repetition code for bits A, B and C, with parity measurements between A-B and B-C. Table shows all combination of inputs and the resulting parity measurements. For an initial state of all zeros, a unique decoding from the measurement to the actual error is obtained for only the top four entries, where there is no more than a single bit error (order $n=1$). }
\label{f:rep}
\end{figure}
Quantum codes allow for the decoding of both bit- and phase-flip errors given a set of measurement outcomes. As for the above example, they decode the error properly as long as the number of errors is order $n$ or less. The probability for a decoding error can be computed numerically using a simple depolarization model that assumes a random bit- or phase-flip error of probability $\epsilon$ for each physical operation used to measure the parities. By comparing the known input errors with those determined using a decoding algorithm, the decoding or logical error probability is found to be \begin{align} P_l &\simeq \Lambda^{-(n+1)} \label{eq:Pl} \\ \Lambda &= \epsilon_t/\epsilon \ , \end{align} where $\epsilon_t$ is the threshold error, fit from the data. The error suppression factor is $\Lambda$, the key metrological figure of merit that quantifies how much the decoding error drops as the order $n$ increases by one. Note that $P_l$ scales with $\epsilon^{n+1}$, as expected for $n+1$ independent errors. The key idea is that once the physical errors $\epsilon$ are lower than the threshold $\epsilon_t$, then $\Lambda > 1$ and making the code larger decreases the decoding error exponentially with $n$. When $\Lambda <1$ error detection fails, and even billions of bad qubits do not help.
A key focus for fault tolerance is making qubit errors less than the threshold. For $\Lambda$ to be as large as possible, we wish to encode with the highest threshold $\epsilon_t$. The best practical choice is the surface code \cite{Bravyi98, Fowler12}, which can be thought of as a two-dimensional version of the repetition code that corrects for both bit and phase errors. A $4n+1$ by $4n+1$ array of qubits performs $n$-th order error correction, where about half of the qubits are used for the parity measurements. It is an ideal practical choice for a quantum computer because of other attributes: (i) Only nearest neighbor interactions are needed, making it manufacturable with integrated circuits. (ii) The code is upward compatible to logical gates, where measurements are simply turned off. (iii) The code is tolerant up to a significant density ($\sim 10\%$) of qubit dropouts from fabrication defects. (iv) The high error threshold arises from the low complexity of the parity measurement; a code with higher threshold is unlikely. (v) The simplicity of the measurement brings more complexity to the classical decoding algorithm, which fortunately is efficiently scalable. (vi) Detected errors can be tracked in software, so physical feed-forward corrections using bit- or phase-flip gates are not needed. (vii) The prediction Eq.\,(\ref{eq:Pl}) for $P_l$ is strictly valid only for the operative range $\Lambda \gtrsim 10$, where the threshold is $\epsilon_t \simeq 2\%$. At break-even $\Lambda=1$, the threshold is significantly smaller $0.7\%$.
\begin{figure}
\caption{ Life cycle of a qubit. Illustration showing the increasing complexity of qubit experiments, built up upon each other, described by technology levels I through VII. Numbers in parenthesis shows approximate qubit numbers. Key metrics are shown at bottom. Errors for 1 qubit, 2 qubit and measurement are described by $\epsilon_1$, $\epsilon_2$ and $\epsilon_m$, which leads to an error suppression factor $\Lambda$. Fault-tolerant error correction is achieved when $\Lambda > 1$. Scaling to large $n$ leads to $P_l \rightarrow 0$. }
\label{f:life}
\end{figure}
Typical quantum algorithms use $\sim 10^{18}$ operations \cite{Fowler12}, so we target a logical error $P_l = 10^{-18}$. Assuming an improvement $\Lambda=10$ for each order, we need $n=17$ encoding. The number of qubits for the surface code is $(4\cdot 17+1)^2=4761$. For $\Lambda=100$, this number lowers by a factor of 4. Although this seems like a large number of qubits from the perspective of present technology, we should remember that a cell phone with $10^{12}$ transistors, now routinely owned by most people in the world, was inconceivable only several decades ago.
Hardware requirements can be further understood by separating out the entire parity operation into one- and two-qubit logic and measurement components. Assuming errors in only one of these components, break-even thresholds are respectively 4.3\%, 1.25\% and 12\%: the 2-qubit error is clearly the most important, whereas measurement error is the least important. For the practical case when all components have non-zero errors, I propose the threshold targets \begin{align} \epsilon_{1}&\leq 0.1\% \\ \epsilon_{2}&\leq 0.1\% \\ \epsilon_{m}&\leq 0.5\% \ , \end{align} which gives $\Lambda \geq 17$. It is critical that all three error thresholds be met, as the worst performing error limits the logical error $P_l$. Measurement errors $\epsilon_{m}$ can be larger because its single component threshold 12\% is high. Two-qubit error $\epsilon_2$ is the most challenging because its physical operation is much more complex than for single qubits. This makes $\epsilon_2$ the primary metric around which the hardware should be optimized. The single qubit error $\epsilon_1$, being easier to optimize, should readily be met if the two-qubit threshold is reached. Note that although it is tempting to isolate qubits from the environment to lower one-qubit errors, in practice this often makes it harder to couple them together for two-qubit logic; I call such strategy ``neutrino-ized qubits''.
In the life cycle of a qubit technology, experiments start with a single qubit and then move to increasingly more complex multi-qubit demonstrations and metrology. The typical progression \cite{Devoret13} is illustrated in Fig.\,\ref{f:life}, where the technology levels and their metrics are shown together.
In level I, one and two-qubit experiments measure coherence times $T_1$ and $T_2$, and show basic functionality of qubit gates. Along with the one-qubit gate time $t_{g1}$, an initial estimate of gate error can be made. Determining the performance of a two-qubit gate is much harder since other decoherence or control errors will typically degrade performance. Swapping an excitation between two qubits is a simple method to determine whether coherence has changed. Quantum process tomography is often performed on one- and two-qubit gates \cite{Benhelm08}, which is important as it proves that proper quantum logic has been achieved. In this initial stage, it is not necessary to have low measurement errors, and data often have arbitrary units on the measurement axis. This is fine for initial experiments that are mostly concerned with the performance of qubit gates.
In level II, more qubits are measured in a way that mimics the scale-up process. This initiates more realistic metrology tests as to how a qubit technology will perform in a full quantum computer. Here, the application of many gates in sequence through randomized benchmarking (RB) enables the total error to grow large enough for accurate measurement, even if each gate error is tiny \cite{Ryan09}. Interleaved RB is useful for measuring the error probability of specific one- and two-qubit logic gates, and gives important information on error stability. Although RB represents an average error and provides no information on error coherence between gates, it is a practical metric to characterize overall performance \cite{Barends14}. For example, RB can be used to tune up the control signals for lower errors \cite{Kelly14b}. Process tomography can be performed for multiple qubits, but is typically abandoned because (i) the number of necessary measurements scales rapidly with increasing numbers of qubits, (ii) information on error coherence is hard to use and (iii) it is difficult to separate out initialization and measurement errors. Measurement error is also obtained in this level; differentiation should be made between measurement that destroys a qubit state or not, since the latter is eventually needed in level IV for logical qubits. A big concern is crosstalk between various logic gates and measurement outcomes, and whether residual couplings between qubits create errors when no interactions are desired. A variety of crosstalk measurements based on RB are useful metrology tools.
In level III an error detection or correction algorithm is performed \cite{Nigg14}, representing a complex systems test of all components. Qubit errors have to be low enough to perform many complex qubit operations. Experiments work to extend the lifetime of an encoded logical state, typically by adding errors to the various components to show improvement from the detection protocol relative to the added errors.
At level IV, the focus is measuring $\Lambda > 1$, demonstrating how a logical qubit can have less and less error by scaling up the order $n$ of error correction. The logical qubit must be measured in first and second order, which requires parity measurements that are repetitive in time so as to include the effect of measurement errors. Note that extending the lifetime of a qubit state in first order is not enough to determine $\Lambda$. Measuring $\Lambda > 1$ indicates that all first order decoding errors have been properly corrected, and that further scaling up should give lower logical errors. Because 81 qubits are needed for the surface code with $n=2$, a useful initial test is for bit-flip errors, requiring a linear array of 9 qubits. These experiments are important since they connect the error metrics of the qubits, obtained in level II, to actual fault-tolerant performance $\Lambda$. As there are theoretical and experimental approximations in this connection, including the depolarization assumption for theory and RB measurement for experiment, this checks the whole framework of computing fault-tolerance. A fundamentally important test for $n \geq 2$ is whether $\Lambda$ remains constant, since correlated errors would cause $\Lambda$ to decrease. Level IV tests continue until the order $n$ is high enough to convincingly demonstrate an exponential suppression of error. A significant challenge here is to achieve all error thresholds in one device and in a scalable design.
An experiment measuring the bit-flip suppression factor $\Lambda_X$ has been done with a linear chain of 9 superconducting qubits \cite{Kelly14a}. The measurement $\Lambda_X = 3.2$ shows logical errors have been reduced, with a magnitude that is consistent with the bit-flip threshold of $3\%$ and measured errors. This is the first demonstration that individual error measurements can be used to predict fault tolerance. For bit and phase fault tolerance, we need to improve only two-qubit errors and then scale.
In level V, since the lifetime of a logical state has been extended, the goal is to perform logical operations with minuscule error. Similar to classical logic that can be generated from the NOT and AND gates, arbitrary quantum logic can be generated from a small set of quantum gates. Here all the Clifford gates are implemented, such as the S, Hadamard, or controlled-NOT. The logical error probabilities should be measured and tested for degradation during logical gates.
In level VI, the test is for the last and most difficult logic operation, the T gate, which phase shifts the logical state by $45^\circ$. Here, state distillation must be demonstrated, and feed-forward from qubit errors conditionally controls a logical S gate \cite{Fowler12}. Because logical errors can be readily accounted for in software for all the logical Clifford gates in level V, feed-forward is only needed for this non-Clifford logical T gate.
Level VII is for the full quantum computer.
The strategy for building a fault-tolerant quantum computer is as follows. At level I, the coherence time should be at least 1000 times greater than the gate time. At level II, all errors need to be less than threshold, with particular attention given to hardware architecture and gate design for lowest 2 qubit error. Design should allow scaling without increasing errors. Scaling begins at level IV: 9 qubits give the first measurement of fault tolerance with $\Lambda_X$, 81 qubits give the proper quantum measure of $\Lambda$, and then about $10^3$ qubits allow for exponentially reduced errors. At level V through VII, $10^4$ qubits are needed for logical gates, and finally about $10^5$ qubits will be used to build a demonstration quantum computer.
The discussion here focuses on optimizing $\Lambda$, but having fast qubit logic is desirable to obtain a short run time. Run times can also be shortened by using a more parallel algorithm, as has been proposed for factoring. A 1000 times slower quantum logic can be compensated for with about 1000 times more qubits.
Scaling up the number of qubits while maintaining low error is a crucial requirement for level IV and beyond. Scaling is significantly more difficult than for classical bits since system performance will be affected by small crosstalk between the many qubits and control lines. This criteria makes large qubits desirable, since more room is then available for separating signals and incorporating integrated control logic and memory. Note this differs from standard classical scaling of CMOS and Moore's law, where the main aim is to decrease transistor size.
Superconducting qubits have macroscopic wavefunctions and are therefore well suited for the challenges of scaling with control. I expect qubit cells to be in the $30-300\,\mu$m size scale, but clearly any design with millions of qubits will have to properly tradeoff density with control area based on experimental capabilities.
In conclusion, progress in making a fault-tolerant quantum computer must be closely tied to error metrology, since improvements with scaling will only occur when errors are below threshold. Research should particularly focus on two-qubit gates, since they are the most difficult to operate well with low errors. As experiments are now within the fault-tolerant range, many exciting developments are possible in the next few years.
The author declares no competing financial interests.
\
\noindent\fbox{ \begin{minipage} {24em}
\textbf{Quantum parity.} An arbitrary qubit state is written as $\Psi=\cos(\theta/2)|0\rangle+e^{i\phi}\sin(\theta/2)|1\rangle$, where the continuous variables $\theta$ and $\phi$ are the bit amplitude and phase. A bit measurement collapses the state into $|0\rangle$ ($|1\rangle$) with probability $\cos^2(\theta/2)$ ($\sin^2(\theta/2)$), thus digitizing error. In general, measurement allows qubit errors to be described as either bit flip $\hat{X}$ ($\,|0\rangle \leftrightarrow |1\rangle\,$) or phase flip $\hat{Z}$ ($\,|1\rangle \leftrightarrow -|1\rangle\,$). According to the Heisenberg uncertainty principle, it is not possible to simultaneously measure the amplitude and phase of a qubit, so obtaining information on a bit flip induces information loss on phase equivalent to a random phase flip, and vice versa. This property comes fundamentally from bit and phase flips not commuting $[\hat{X},\hat{Z}] = \hat{X}\hat{Z}- \hat{Z}\hat{X} \neq 0$; the sequence of the two operations matter. Quantum error correction takes advantage of an interesting property of qubits $\hat{X}\hat{Z} = -\hat{Z}\hat{X}$, so that a change in sequence just produces a minus sign. With $\hat{X_1}\hat{X_2}$ and $\hat{Z_1}\hat{Z_2}$ corresponding to 2-qubit bit and phase parities, they now commute because a minus sign is picked up from each qubit \begin{align} [\hat{X_1}\hat{X_2},\hat{Z_1}\hat{Z_2}] &= \hat{X_1}\hat{X_2}\hat{Z_1}\hat{Z_2}-\hat{Z_1}\hat{Z_2}\hat{X_1}\hat{X_2} \\ &=\hat{X_1}\hat{X_2}\hat{Z_1}\hat{Z_2}-(-)^2 \hat{X_1}\hat{X_2}\hat{Z_1}\hat{Z_2} \\ &= 0 \ . \end{align} The two parities can now be known simultaneously, implying they are classical-like: a change in one parity can be measured without affecting the other. \end{minipage} }
\end{document} | arXiv |
\begin{document}
\title{Comparison principles for the time-fractional diffusion equations with the Robin boundary conditions. Part I: Linear equations}
\titlerunning{Comparison principles for the time-fractional diffusion equations \dots}
\author{
Yuri Luchko$^1$ \and
Masahiro Yamamoto$^2$
}
\authorrunning{Yu. Luchko \and M. Yamamoto}
\institute{Yuri Luchko$^{1,*}$ \at Department of Mathematics, Physics, and Chemistry, Berlin University of Applied Sciences and Technology, Luxemburger Str. 10, Berlin -- 13353, Germany \\ \email{[email protected]} $^*$ corresponding author
\and Masahiro Yamamto$^{2}$ \at Department of Mathematical Sciences, The University of Tokyo, Komaba, Meguro, Tokyo -- 153, Japan \\ Honorary Member of Academy of Romanian Scientists, Ilfov, nr. 3, Bucuresti, Romania \\ Correspondence member of Accademia Peloritana dei Pericolanti, Palazzo Universit\`a, Piazza S. Pugliatti 1, Messina -- 98122, Italy \\ \email{[email protected]} }
\date{Received: XX 2023 / Revised: .... / Accepted: ......}
\maketitle
\begin{abstract} {The main objective of this paper is analysis of the initial-boundary value problems for the linear time-fractional diffusion equations with a uniformly elliptic spatial differential operator of the second order and the Caputo type time-fractional derivative acting in the fractional Sobolev spaces. The boundary conditions are formulated in form of the homogeneous Neumann or Robin conditions. First we discuss the uniqueness and existence of solutions to these initial-boundary value problems. Under some suitable conditions on the problem data, we then prove positivity of the solutions. Based on these results, several comparison principles for the solutions to the initial-boundary value problems for the linear time-fractional diffusion equations are derived.}
\keywords{fractional calculus (primary), fractional diffusion equation, positivity of solutions, comparison principle}
\subclass{35B51 (primary) 35R11 26A33}
\end{abstract}
\section{Introduction} \label{sec:1}
\setcounter{section}{1} \setcounter{equation}{0}
In this paper, we deal with a linear time-fractional diffusion equation in the form $$
\partial_t^{\alpha} (u(x,t)-a(x)) = \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_j u(x,t)) $$ \begin{equation} \label{(1.1)} + \sum_{j=1}^d b_j(x,t)\partial_ju(x,t) + c(x,t)u(x,t) + F(x,t),\quad
x \in \Omega,\, 0<t<T, \end{equation} where $\partial_t^{\alpha}$ is the Caputo fractional derivative of order $\alpha\in (0,1)$ defined on the fractional Sobolev spaces (see Section \ref{sec2} for the details) and $\Omega \subset \mathbb{R}^d, \ d=1,2,3$ is a bounded domain with a smooth boundary $\partial\Omega$. All the functions under consideration are supposed to be real-valued.
In what follows, we always assume that the following conditions are satisfied: \begin{equation} \label{(1.2)} \left\{ \begin{array}{rl} & a_{ij} = a_{ji} \in C^1(\overline{\Omega}), \quad 1\le i,j \le d, \\ & b_j,\, c \in C^1([0,T]; C^1(\overline{\Omega})) \cap C([0,T];C^2(\overline{\Omega})),
\quad 1\le j \le d, \\ & \mbox{and there exists a constant $\kappa>0$ such that}\\ & \sum_{i,j=1}^d a_{ij}(x)\xi_i\xi_j \ge \kappa \sum_{j=1}^d \xi_j^2, \quad x\in \Omega, \, \xi_1, ..., \xi_d \in \mathbb{R}. \end{array}\right. \end{equation}
Using the notations $\partial_j = \frac{\partial}{\partial x_j}$, $j=1, 2, ..., d$,
we define a conormal derivative $\ppp_{\nu_A} w$ with respect to the differential operator $\sum_{i,j=1}^d \partial_j(a_{ij}\partial_i)$ by \begin{equation}\label{(1.3)} \ppp_{\nu_A} w(x) = \sum_{i,j=1}^d a_{ij}(x)\partial_jw(x)\nu_i(x), \quad x\in \partial\Omega, \end{equation} where $\nu = \nu(x) =: (\nu_1(x), ..., \nu_d(x))$ is the unit outward normal vector to $\partial\Omega$ at the point $x := (x_1,..., x_d) \in \partial\Omega$.
For the equation \eqref{(1.1)}, we consider the initial-boundary value problems with the homogeneous Neumann boundary condition \begin{equation} \label{(1.3a)} \partial_{\nu_A}u = 0 \quad \mbox{on $\partial\Omega \times (0,T)$} \end{equation} or the more general homogeneous Robin boundary condition \begin{equation} \label{(1.4)} \partial_{\nu_A}u + \sigma(x)u = 0 \quad \mbox{on $\partial\Omega \times (0,T)$}, \end{equation} where $\sigma$ is a sufficiently smooth function on $\partial\Omega$ that satisfies the condition $\sigma(x) \ge 0,\ x\in \partial\Omega $.
For partial differential equations of the parabolic type that correspond to the case $\alpha=1$ in the equation \eqref{(1.1)}, several important qualitative properties of solutions to the corresponding initial-boundary value problems are known. In particular, we mention a maximum principle and a comparison principle for the solutions to these problems (\cite{PW}, \cite{RR}).
The main purpose of this paper is the comparison principles for the linear time-fractional diffusion equation \eqref{(1.1)} with the Neumann or the Robin boundary conditions.
For the equations of type \eqref{(1.1)} with the Dirichlet boundary conditions, the maximum principles in different formulations were derived and used in \cite{Bor,Lu1,luchko-1,luchko-2,Lu2,LY1,LY2,LY3,Za}. For a maximum principle for the time-fractional transport equations we refer to \cite{LSY}. In \cite{Kir}, a maximum principle for the more general space- and time-space-fractional partial differential equations has been derived.
Because any maximum principle involves the Dirichlet boundary values, its formulation in the case of the Neumann or Robin boundary conditions requires more cares. For this kind of the boundary conditions, both positivity of solutions and the comparison principles can be derived under some suitable restrictions on the problem data. One typical result of this sort says that the solution $u$ to the equation \eqref{(1.1)} with the boundary condition \eqref{(1.3a)} or \eqref{(1.4)} and an appropriately formulated initial condition is non-negative in $\Omega\times (0,T)$ if the initial value $a$ and the non-homogeneous term $F$ are non-negative in $\Omega$ and in $\Omega \times (0,T)$, respectively. Such positivity properties and their applications have been intensively discussed and used for the partial differential equations of parabolic type ($\alpha=1$ in the equation \eqref{(1.1)}), see, e.g., \cite{E},
\cite{Fr}, \cite{Pao2}, or \cite{RR}.
However, to the best knowledge of the authors, no results of this kind have been published for the time-fractional diffusion equations in the case of the Neumann or Robin boundary conditions. The main subject of this paper is in derivation of a positivity property and the comparison principles for the linear equation \eqref{(1.1)} with the boundary condition \eqref{(1.3a)} or \eqref{(1.4)} and an appropriately formulated initial condition. In the subsequent paper, these result will be extended to the case of the semilinear time-fractional diffusion equations. The arguments employed in these papers rely on an operator theoretical approach to the fractional integrals and derivatives in the fractional Sobolev spaces that is an extension of the theory well-known in the case $\alpha=1$, see, e.g., \cite{He}, \cite{Pa}, \cite{Ta}. We also refer to the recent publications \cite{Al-R} and \cite{Lu} devoted to the comparison principles for solutions to the fractional differential inequalities with the general fractional derivatives and for solutions to the ordinary fractional differential equations, respectively.
The rest of this paper is organized as follows. In Section \ref{sec2}, some important results regarding the unique existence of solutions to the initial-boundary value problems for the linear time-fractional diffusion equations are presented. Section \ref{sec3} is devoted to a proof of a key lemma that is a basis for the proofs of the comparison principles for the linear and semilinear time-fractional diffusion equations. The lemma asserts that each solution to \eqref{(1.1)} is non-negative in $\Omega \times (0,T)$ if $a\ge 0$ and $F \ge 0$, provided that $u$ is assumed to satisfy some extra regularity. In Section \ref{sec4}, we prove a comparison principle that is our main result for the problem \eqref{(1.1)} for the linear time-fractional diffusion equation. Moreover, we establish the order-preserving properties for other problem data (the zeroth-order coefficient $c$ of the equation and the coefficient $\sigma$ of the Robin condition). Finally, a detailed proof of an important auxiliary statement is presented in an Appendix.
\section{Well-posedness results} \label{sec2}
\setcounter{section}{2} \setcounter{equation}{0}
For $x \in \Omega, \thinspace 0<t<T$, we define an operator \begin{equation} \label{(2.1)} -Av(x,t) := \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_j v(x,t) + \sum_{j=1}^d b_j(x,t)\partial_jv(x,t) + c(x,t)v(x,t) \end{equation} and assume that the conditions \eqref{(1.2)} for the coefficients $a_{ij}, b_j, c$ are satisfied.
In this section, we deal with the following initial-boundary value problem for the linear time-fractional diffusion equation \eqref{(1.1)} with the time-fractional derivative of order $\alpha\in (0,1)$ \begin{equation} \label{(2.2)} \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u(x,t)-a(x)) + Au(x,t) = F(x,t), \quad x \in \Omega, \thinspace 0<t<T, \\ & \ppp_{\nu_A} u + \sigma(x)u(x,t) = 0, \quad x\in \partial\Omega, \, 0<t<T, \end{array}\right. \end{equation} along with the initial condition \eqref{incon} formulated below.
To appropriately define the Caputo fractional derivative $d_t^{\alpha} w(t)$, $0<\alpha<1$, we start with its definition on the space $$ {_{0}C^1[0,T]} := \{ u \in C^1[0,T];\thinspace u(0) = 0\} $$ that reads as follows: $$ d_t^{\alpha} w(t) = \frac{1}{\Gamma(1-\alpha)}\int^t_0 (t-s)^{-\alpha}\frac{dw}{ds}(s) ds,\ w\in {_{0}C^1[0,T]}. $$ Then we extend this operator from the domain $\mathcal{D}(d_t^{\alpha}) := {_{0}C^1[0,T]}$ to $L^2(0,T)$ taking into account its closability (\cite{Yo}). As have been shown in \cite{KRY}, there exists a unique minimum closed extension of $d_t^{\alpha}$ with the domain $\mathcal{D}(d_t^{\alpha}) = {_{0}C^1[0,T]}$. Moreover, the domain of this extension is the closure of ${_{0}C^1[0,T]}$ in the Sobolev-Slobodeckij space $H^{\alpha}(0,T)$. Let us recall that the norm $\Vert \cdot\Vert_{H^{\alpha} (0,T)}$ of the Sobolev-Slobodeckij space $H^{\alpha}(0,T)$ is defined as follows (\cite{Ad}): $$ \Vert v\Vert_{H^{\alpha}(0,T)}:= \left( \Vert v\Vert^2_{L^2(0,T)} + \int^T_0\int^T_0 \frac{\vert v(t)-v(s)\vert^2}{\vert t-s\vert^{1+2\alpha}} dtds \right)^{\hhalf}. $$ By setting $$ H_{\alpha}(0,T):= \overline{{_{0}C^1[0,T]}}^{H^{\alpha}(0,T)}, $$ we obtain (\cite{KRY}) $$ H_{\alpha}(0,T) = \left\{ \begin{array}{rl} &H^{\alpha}(0,T), \quad 0<\alpha<\hhalf, \\ &\left\{ v \in H^{\hhalf}(0,T);\, \int^T_0 \frac{\vert v(t)\vert^2}{t} dt < \infty \right\}, \quad \alpha=\hhalf, \\ & \{ v \in H^{\alpha}(0,T);\, v(0) = 0\}, \quad \hhalf < \alpha < 1, \end{array}\right. $$ and $$ \Vert v\Vert_{H_{\alpha}(0,T)} = \left\{ \begin{array}{rl} &\Vert v\Vert_{H^{\alpha}(0,T)}, \quad \alpha \ne \hhalf, \\ &\left( \Vert v\Vert_{H^{\hhalf}(0,T)}^2 + \int^T_0 \frac{\vert v(t)\vert^2}{t}dt\right)^{\hhalf}, \quad \alpha=\hhalf. \end{array}\right. $$ In what follows, we also use the Riemann-Liouville fractional integral operator $J^{\beta}$, $\beta > 0$ defined by $$ (J^{\beta}f)(t) := \frac{1}{\Gamma(\beta)}\int^t_0 (t-s)^{\beta-1}f(s) ds, \quad 0<t<T. $$ Then, according to \cite{GLY} and \cite{KRY}, $$ H_{\alpha}(0,T) = J^{\alpha}L^2(0,T),\quad 0<\alpha<1. $$ Next we define $$ \partial_t^{\alpha} = (J^{\alpha})^{-1} \quad \mbox{with $\mathcal{D}(\partial_t^{\alpha}) = H_{\alpha}(0,T)$}. $$ As have been shown in \cite{GLY} and \cite{KRY}, there exists a constant $C>0$ depending only on $\alpha$ such that $$ C^{-1}\Vert v\Vert_{H_{\alpha}(0,T)} \le \Vert \partial_t^{\alpha} v\Vert_{L^2(0,T)} \le C\Vert v\Vert_{H_{\alpha}(0,T)} \quad \mbox{for all } v\in H_{\alpha}(0,T). $$
Now we can introduce a suitable form of initial condition for the problem \eqref{(2.2)} as follows \begin{equation} \label{incon} u(x, \cdot) - a(x) \in H_{\alpha}(0,T) \quad \mbox{for almost all } x\in \Omega \end{equation} and write down a complete formulation of an initial-boundary value problem for the linear time-fractional diffusion equation \eqref{(1.1)}: \begin{equation} \label{(2.3)} \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u(x,t)-a(x)) + Au(x,t) = F(x,t), \quad x \in \Omega, \thinspace 0<t<T, \\ & \ppp_{\nu_A} u(x,t) + \sigma(x)u(x,t) = 0, \quad x\in \partial\Omega, \, 0<t<T,\\ & u(x, \cdot) - a(x) \in H_{\alpha}(0,T) \quad \mbox{for almost all }x\in \Omega. \end{array}\right. \end{equation}
It is worth mentioning that the term $\partial_t^{\alpha} (u(x,t) - a(x))$ in the first line of \eqref{(2.3)} is well-defined due to inclusion formulated in the third line of \eqref{(2.3)}. In particular, for $\frac{1}{2} < \alpha < 1$, the Sobolev embedding leads to the inclusions $H_{\alpha}(0,T) \subset H^{\alpha}(0,T)\subset C[0,T]$. This means that $u\in H_{\alpha}(0,T;L^2(\Omega))$ implies $u \in C([0,T];L^2(\Omega))$ and thus in this case the initial condition can be formulated as $u(\cdot,0) = a$ in $L^2$-sense. Moreover, for sufficiently smooth functions $a$ and $F$, the solution to \eqref{(2.3)} can be proved to satisfy the initial condition in a usual sense: $\lim_{t\to 0} u(\cdot,t) = a$ in $L^2(\Omega)$ (see Lemma 4 in Section 3). Consequently, the third line of \eqref{(2.3)} can be interpreted as a generalized initial condition.
In the following theorem, a fundamental result regarding the unique existence of the solution to the initial-boundary value problem \eqref{(2.3)} is presented.
\begin{theorem} \label{t2.1} For $a\in H^1(\Omega)$ and $F \in L^2(0,T;L^2(\Omega))$, there exists a unique solution $u(F,a) = u(F,a)(x,t) \in L^2(0,T;H^2(\Omega))$ to the initial-boundary value problem \eqref{(2.3)} such that $u(F,a)-a \in H_{\alpha}(0,T;L^2(\Omega))$.
Moreover, there exists a constant $C>0$ such that \begin{align*} & \Vert u(F,a)-a\Vert_{H_{\alpha}(0,T;L^2(\Omega))} + \Vert u(F,a)\Vert_{L^2(0,T;H^2(\Omega))} \\ \le &C(\Vert a\Vert_{H^1(\Omega)} + \Vert F\Vert_{L^2(0,T;L^2(\Omega))}). \end{align*} \end{theorem}
Before starting with a proof of Theorem \ref{t2.1}, we introduce some notations and derive several helpful results needed for the proof.
For an arbitrary constant $c_0>0$, we define an elliptic operator $A_0$
as follows: \begin{equation} \label{(3.2)} \left\{ \begin{array}{rl} & (-A_0v)(x) := \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_jv(x)) - c_0v(x), \quad x\in \Omega, \\ & \mathcal{D}(A_0) = \left\{ v\in H^2(\Omega);\, \ppp_{\nu_A} v + \sigma v = 0 \quad \mbox{on } \partial\Omega \right\}. \end{array}\right. \end{equation}
We recall that in the definition \eqref{(3.2)}, $\sigma$ is a smooth function, the inequality $\sigma(x)\ge 0,\ x\in \partial\Omega$ holds true, and the coefficients $a_{ij}$ satisfy the conditions \eqref{(1.2)}.
Henceforth, by $\Vert \cdot\Vert$ and $(\cdot,\cdot)$ we denote the standard norm and the scalar product in $L^2(\Omega)$, respectively. It is well-known that the operator $A_0$ is self-adjoint and its resolvent is a compact operator. Moreover, for a sufficiently large constant $c_0>0$, by Lemma \ref{lem1} in Section \ref{sec8}, we can verify that $A_0$ is positive definite. Therefore, by choosing the constant $c_0>0$ large enough, the spectrum of $A_0$ consists entirely of discrete positive eigenvalues $0 < \lambda_1 \le \lambda_2 \le \cdots$, which are numbered according to their multiplicities and $\lambda_n \to \infty$ as $n\to \infty$. Let $\varphi_n$ be an eigenvector corresponding to the eigenvalue $\lambda_n$ such that $A\varphi_n = \lambda_n\varphi_n$ and $(\varphi_n, \varphi_m) = 0$ if $n \ne m$ and $(\varphi_n,\varphi_n) = 1$. Then the system $\{ \varphi_n\}_{n\in \mathbb{N}}$ of the eigenvectors forms an orthonormal basis in $L^2(\Omega)$ and for any $\gamma\ge 0$ we can define the fractional powers $A_0^{\gamma}$ of the operator $A_0$ by the following relation (see, e.g., \cite{Pa}): $$ A_0^{\gamma}v = \sum_{n=1}^{\infty} \lambda_n^{\gamma} (v,\varphi_n)\varphi_n, $$ where $$ v \in \mathcal{D}(A_0^{\gamma}) := \left\{ v\in L^2(\Omega): \thinspace \sum_{n=1}^{\infty} \lambda_n^{2\gamma} (v,\varphi_n)^2 < \infty\right\} $$ and $$ \Vert A_0^{\gamma}v\Vert = \left( \sum_{n=1}^{\infty} \lambda_n^{2\gamma} (v,\varphi_n)^2 \right)^{\frac{1}{2}}. $$ We note that $\mathcal{D}(A_0^{\gamma}) \subset H^{2\gamma}(\Omega)$.
Our proof of Theorem \ref{t2.1} is similar to the one presented in \cite{GLY}, \cite{KRY} for the case of the homogeneous Dirichlet boundary condition. In particular, we employ the operators $S(t)$ and $K(t)$ defined by (\cite{GLY}, \cite{KRY}) \begin{equation} \label{(5.1)} S(t)a = \sum_{n=1}^{\infty} E_{\alpha,1}(-\lambda_n t^{\alpha}) (a,\varphi_n)\varphi_n, \quad a\in L^2(\Omega), \thinspace t>0 \end{equation} and \begin{equation} \label{(5.2)} K(t)a = -A_0^{-1}S'(t)a = \sum_{n=1}^{\infty} t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_n t^{\alpha}) (a,\varphi_n)\varphi_n, \quad a\in L^2(\Omega), \thinspace t>0. \end{equation} In the above formulas, $E_{\alpha,\beta}(z)$ denotes the Mittag-Leffler function defined by a convergent series as follows: $$ E_{\alpha,\beta}(z) = \sum_{k=0}^\infty \frac{z^k}{\Gamma(\alpha\, k + \beta)}, \ \alpha >0,\ \beta \in \mathbb{C},\ z \in \mathbb{C}. $$ It follows directly from the definitions given above that $A_0^{\gamma}K(t)a = K(t)A_0^{\gamma}a$ and $A_0^{\gamma}S(t)a = S(t)A_0^{\gamma}a$ for $a \in \mathcal{D} (A_0^{\gamma})$. Moreover, the inequality (see, e.g., Theorem 1.6 (p. 35) in \cite{Po}) $$ \max \{ \vert E_{\alpha,1}(-\lambda_nt^{\alpha})\vert, \, \vert E_{\alpha,\alpha}(-\lambda_nt^{\alpha})\vert \} \le \frac{C}{1+\lambda_nt^{\alpha}} \quad \mbox{for all $t>0$} $$ implicates the estimations (\cite{GLY}) \begin{equation} \label{(5.3)} \left\{ \begin{array}{l} \Vert A_0^{\gamma}S(t)a\Vert \le Ct^{-\alpha\gamma}\Vert a\Vert, \\ \Vert A_0^{\gamma}K(t)a\Vert \le Ct^{\alpha(1-\gamma)-1} \Vert a\Vert, \quad a \in L^2(\Omega), \thinspace t > 0, \thinspace 0 \le \gamma \le 1. \end{array}\right. \end{equation} In order to shorten the notations and to focus on the dependence on the time variable $t$, henceforth we sometimes omit the variable $x$ in the functions of two variables $x$ and $t$ and write, say, $u(t)$ instead of $u(\cdot,t)$.
Due to the inequalities \eqref{(5.3)}, the estimations provided in the formulation of Theorem \ref{t2.1} can be derived as in the case of the fractional powers of generators of the analytic semigroups (\cite{He}). To do this, we first formulate and prove the following lemma:
\begin{lemma} \label{l5.1} Under the conditions formulated above, the following estimates hold true for $F\in L^2(0,T;L^2(\Omega))$ and $a \in L^2(\Omega)$:
\noindent (i) $$ \left\Vert \int^t_0 A_0K(t-s)F(s) ds \right\Vert_{L^2(0,T;L^2(\Omega))} \le C\Vert F\Vert_{L^2(0,T;L^2(\Omega))}, $$ \noindent (ii) $$ \left\Vert \int^t_0 K(t-s)F(s) ds \right\Vert_{H_{\alpha}(0,T;L^2(\Omega))} \le C\Vert F\Vert_{L^2(0,T;L^2(\Omega))}, $$ \noindent (iii) $$ \Vert S(t)a - a\Vert_{H_{\alpha}(0,T;L^2(\Omega))} + \Vert S(t)a\Vert_{L^2(0,T;H^2(\Omega))} \le C\Vert a\Vert. $$ \end{lemma}
\begin{proof} We start with proving the estimate (i). By \eqref{(5.2)}, we have \begin{align*} & \left\Vert \int^t_0 A_0 K(t-s)F(s) ds \right\Vert^2\\ =& \left\Vert \sum_{n=1}^{\infty} \left(\int^t_0 \lambda_n(t-s)^{\alpha-1} E_{\alpha,\alpha}(-\lambda_n(t-s)^{\alpha}) (F(s), \varphi_n) ds\right) \varphi_n\right\Vert^2\\ =& \sum_{n=1}^{\infty} \left\vert \int^t_0 \lambda_n(t-s)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_n(t-s)^{\alpha}) (F(s), \varphi_n) ds \right\vert^2. \end{align*} Therefore, using the Parseval equality and the Young inequality for the convolution, we obtain \begin{align*} & \left\Vert \int^t_0 A_0K(t-s)F(s) ds \right\Vert^2_{L^2(0,T;L^2(\Omega))}\\ = & \sum_{n=1}^{\infty} \int^T_0 \vert (\lambda_ns^{\alpha-1}E_{\alpha,\alpha}(-\lambda_ns^{\alpha}) \, *\, (F(s), \varphi_n) \vert^2 ds\\ = & \sum_{n=1}^{\infty} \Vert \lambda_ns^{\alpha-1}E_{\alpha,\alpha}(-\lambda_ns^{\alpha}) \, * \, (F(s), \varphi_n) \Vert^2_{L^2(0,T)}\\ \le & \sum_{n=1}^{\infty} \left( \lambda_n\int^t_0 \vert t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_nt^{\alpha}) \vert dt \right)^2 \Vert (F(t),\varphi_n)\Vert^2_{L^2(0,T)}. \end{align*} Then we employ the representation \begin{equation}\label{(2.9a)} \frac{d}{dt}E_{\alpha,1}(-\lambda_nt^{\alpha}) = -\lambda_nt^{\alpha-1}E_{\alpha,\alpha}(-\lambda_nt^{\alpha}), \end{equation} and the complete monotonicity of the Mittag-Leffler function (\cite{GKMR}) $$ E_{\alpha,1}(-\lambda_nt^{\alpha}) > 0, \quad \frac{d}{dt}E_{\alpha,1}(-\lambda_nt^{\alpha}) \le 0, \quad t\ge 0, \quad 0<\alpha\le 1 $$ to get the inequality \begin{equation} \label{(5.4)} \int^T_0 \vert \lambda_nt^{\alpha-1}E_{\alpha,\alpha}(-\lambda_nt^{\alpha})\vert dt = -\int^T_0 \frac{d}{dt}E_{\alpha,1}(-\lambda_nt^{\alpha})dt \end{equation} $$ = 1 - E_{\alpha,1}(-\lambda_nT^{\alpha}) \le 1 \quad \mbox{for all $n\in \mathbb{N}$}. $$ Hence, \begin{align*} & \left\Vert \int^t_0 A_0K(t-s)F(s) ds \right\Vert^2_{L^2(0,T;L^2(\Omega))} \le \sum_{n=1}^{\infty} \Vert (F(t), \varphi_n)\Vert^2_{L^2(0,T)}\\ =& \int^T_0 \sum_{n=1}^{\infty} \vert (F(t), \varphi_n) \vert^2 dt = \int^T_0 \Vert F(\cdot,t)\Vert^2 dt = \Vert F\Vert_{L^2(0,T;L^2(\Omega))}^2. \end{align*}
Now we proceed with proving the estimate (ii). For $0<t<T, \, n\in \mathbb{N}$ and $f\in L^2(0,T)$, we set $$ (L_nf)(t) := \int^t_0 (t-s)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_n(t-s)^{\alpha}) f(s) ds. $$ Then $$ \int^t_0 K(t-s)F(s) ds = \sum_{n=1}^{\infty} (L_nf)(t)\varphi_n $$ in $L^2(\Omega)$ for any fixed $t \in [0,T]$.
First we prove that \begin{equation} \label{(5.5)} \left\{ \begin{array}{rl} & L_nf \in H_{\alpha}(0,T), \\ & \partial_t^{\alpha}(L_nf)(t) = -\lambda_nL_nf(t) + f(t), \quad 0<t<T, \\ & \Vert L_nf\Vert_{H_{\alpha}(0,T)} \le C\Vert f\Vert_{L^2(0,T)}, \quad n\in \mathbb{N} \quad \mbox{for each } f\in L^2(0,T). \end{array}\right. \end{equation} In order to prove this, we apply the Riemann-Liouville fractional integral $J^{\alpha}$ to $L_nf$ and get the representation \begin{align*} & J^{\alpha}(L_nf)(t) = \frac{1}{\Gamma(\alpha)}\int^t_0 (t-s)^{\alpha-1}(L_nf)(s) ds\\ =& \frac{1}{\Gamma(\alpha)} \int^t_0 (t-s)^{\alpha-1} \left( \int^s_0 (s-\xi)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_n (s-\xi)^{\alpha})f(\xi) d\xi \right) ds\\ =& \frac{1}{\Gamma(\alpha)}\int^t_0 f(\xi) \left( \int^t_{\xi} (t-s)^{\alpha-1}(s-\xi)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_n (s-\xi)^{\alpha}) ds \right) d\xi. \end{align*} By direct calculations, using \eqref{(2.9a)}, we obtain the formula \begin{align*} & \frac{1}{\Gamma(\alpha)}\int^t_{\xi} (t-s)^{\alpha-1}(s-\xi)^{\alpha-1} E_{\alpha,\alpha}(-\lambda_n (s-\xi)^{\alpha}) ds\\ =& -\frac{1}{\lambda_n}(t-\xi)^{\alpha-1}\left(E_{\alpha,\alpha}(-\lambda_n t^{\alpha}) - \frac{1}{\Gamma(\alpha)}\right). \end{align*}
Therefore, we have the relation \begin{align*} & J^{\alpha}(L_nf)(t) = -\frac{1}{\lambda_n}(L_nf)(t) + \frac{1}{\lambda_n}\int^t_0 (t-\xi)^{\alpha-1}\frac{1}{\Gamma(\alpha)} f(\xi) d\xi\\ =& -\frac{1}{\lambda_n}(L_nf)(t) + \frac{1}{\lambda_n}(J^{\alpha}f)(t), \quad n\in \mathbb{N}, \end{align*} that is, $$ (L_nf)(t) = -\lambda_n J^{\alpha}(L_nf)(t) + (J^{\alpha}f)(t), \quad 0<t<T. $$ Hence, $L_nf \in H_{\alpha}(0,T) = J^{\alpha}L^2(0,T)$. By definition, $\partial_t^{\alpha} = (J^{\alpha})^{-1}$ (\cite{KRY}) and thus the last formula can be rewritten in the form $$ \partial_t^{\alpha} (L_nf) = -\lambda_n L_nf + f \quad \mbox{in }(0,T). $$
Using the inequality \eqref{(5.4)}, we obtain $$ \lambda_n\Vert L_nf\Vert_{L^2(0,T)} \le \lambda_n\Vert s^{\alpha-1} E_{\alpha,\alpha}(-\lambda_ns^{\alpha})\Vert_{L^1(0,T)}\Vert f\Vert_{L^2(0,T)} \le \Vert f\Vert_{L^2(0,T)}. $$ Therefore, \begin{align*} & \Vert L_nf\Vert_{H_{\alpha}(0,T)} \le C\Vert \partial_t^{\alpha}(L_nf)\Vert_{L^2(0,T)} \le C(\Vert -\lambda_nL_nf\Vert_{L^2(0,T)} + \Vert f\Vert_{L^2(0,T)})\\ \le& C\Vert f\Vert_{L^2(0,T)}, \quad n\in \mathbb{N}, \quad f\in L^2(0,T). \end{align*} Thus, the estimate \eqref{(5.5)} is proved.
Now we set $f_n(s) := (F(s), \, \varphi_n)$ for $0<s<T$ and $n\in \mathbb{N}$. Since $$ \partial_t^{\alpha} \int^t_0 K(t-s)F(s) ds = \sum_{n=1}^{\infty} \partial_t^{\alpha}(L_nf_n)(t)\varphi_n, $$ we obtain $$ \left\Vert \partial_t^{\alpha} \int^t_0 K(t-s)F(s) ds\right\Vert^2_{L^2(\Omega)} = \sum_{n=1}^{\infty} \vert \partial_t^{\alpha}(L_nf_n)(t)\vert^2. $$ By applying \eqref{(5.5)}, we get the following chain of inequalities and equations: \begin{align*} \left\Vert \partial_t^{\alpha} \int^t_0 K(t-s)F(s) ds\right\Vert^2 _{H_{\alpha}(0,T;L^2(\Omega))} \le C\left\Vert \partial_t^{\alpha}\int^t_0 K(t-s)F(s) ds\right\Vert^2 _{L^2(0,T;L^2(\Omega))}\\ = C\sum_{n=1}^{\infty} \Vert \partial_t^{\alpha}(L_nf_n)\Vert^2_{L^2(0,T)} \le C\sum_{n=1}^{\infty} \Vert L_nf_n\Vert^2_{H_{\alpha}(0,T)} \le C\sum_{n=1}^{\infty} \Vert f_n\Vert^2_{L^2(0,T)}\\ = C\int^T_0 \sum_{n=1}^{\infty} \vert (F(s),\varphi_n) \vert^2 ds = C\int^T_0 \Vert F(s)\Vert_{L^2(\Omega)}^2 ds = C\Vert F\Vert^2_{L^2(0,T;L^2(\Omega))}. \end{align*} Thus, the proof of the estimate (ii) is completed.
The estimate (iii) from Lemma \ref{l5.1} follows from the standard estimates of the operator $S(t)$. It can be derived by the same arguments as those that were employed in Section 6 of Chapter 4 in \cite{KRY} for the case of the homogeneous Dirichlet boundary condition and we omit here the technical details. \end{proof}
Now we proceed to the proof of Theorem \ref{t2.1}.
\begin{proof}
In the first line of the problem \eqref{(2.3)}, we regard the expressions $\sum_{j=1}^d b_j(x,t)\partial_ju$ and $c(x,t)u$ as some non-homogeneous terms. Then this problem can be rewritten in terms of the operator $A_0$ as follows \begin{equation} \label{(5.6)} \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u-a) + A_0u(x,t) = F(x,t)\\ +& \sum_{j=1}^d b_j(x,t)\partial_ju + (c_0+c(x,t))u, \quad x\in \Omega,\, 0<t<T,\\ & \ppp_{\nu_A} u + \sigma(x) u = 0 \quad \mbox{on }\partial\Omega \times (0,T),\\ & u(x,\cdot) - a(x) \in H_{\alpha}(0,T) \quad \mbox{for almost all }x\in \Omega. \end{array}\right. \end{equation} In its turn, the first line of \eqref{(5.6)} can be represented in the form (\cite{GLY}, \cite{KRY}) $$ u(t) = S(t)a + \int^t_0 K(t-s)F(s) ds $$ \begin{equation} \label{(5.7)} + \int^t_0 K(t-s) \left(\sum_{j=1}^d b_j(s)\partial_ju(s) + (c_0+c(s))u(s) \right) ds, \quad 0<t<T. \end{equation} Moreover, it is known that if $u\in L^2(0,T;H^2(\Omega))$ satisfies the initial condition $u-a \in H_{\alpha}(0,T;L^2(\Omega))$ and the equation \eqref{(5.7)}, then $u$ is a solution to the problem \eqref{(5.6)}. With the notations \begin{equation} \label{(5.8)} \left\{ \begin{array}{rl} & G(t):= \int^t_0 K(t-s)F(s) ds + S(t)a, \\ & Qv(t) = Q(t)v(t) := \sum_{j=1}^d b_j(\cdot,t)\partial_jv(t) + (c_0+c(\cdot,t))v(t), \\ & Rv(t):= \int^t_0 K(t-s) \left(\sum_{j=1}^d b_j(\cdot,s)\partial_jv(s) + (c_0+c(\cdot,s))v(s) \right) ds,\\ &\qquad \qquad \qquad \mbox{for $0<t<T$}, \end{array}\right. \end{equation} the equation \eqref{(5.7)} can be represented in form of a fixed point equation $u = Ru + G$ on the space $L^2(0,T;H^2(\Omega))$.
Lemma \ref{l5.1} yields the inclusion $G \in L^2(0,T;H^2(\Omega))$. Moreover, since $\Vert A_0^{\hhalf}a\Vert \le C\Vert a\Vert_{H^1(\Omega)}$ and $\mathcal{D}(A_0^{\hhalf}) = H^1(\Omega)$ (see, e.g., \cite{Fu}), the estimate \eqref{(5.3)} implies $$ \Vert S(t)a\Vert_{H^2(\Omega)} \le C\Vert A_0S(t)a\Vert = C\Vert A_0^{\hhalf}S(t)A_0^{\hhalf}a\Vert \le Ct^{-\hhalf\alpha}\Vert a\Vert_{H^1(\Omega)} $$ and thus $$ \Vert S(t)a\Vert^2_{L^2(0,T;H^2(\Omega))} \le C\left(\int^T_0 t^{-\alpha}dt \right)\Vert a\Vert^2_{H^1(\Omega)} \le \frac{CT^{1-\alpha}}{1-\alpha}\Vert a\Vert^2_{H^1(\Omega)}. $$ Consequently, the inclusion $S(t)a \in L^2(0,T;H^2(\Omega))$ holds valid.
For $0<t<T$, we next estimate $\Vert Rv(\cdot,t)\Vert_{H^2(\Omega)}$ for $v(\cdot,t) \in \mathcal{D}(A_0)$ as follows: \begin{align*} & \Vert Rv(\cdot,t)\Vert_{H^2(\Omega)} \le C\Vert A_0Rv(\cdot,t)\Vert_{L^2(\Omega)}\\ \le & \int^t_0 \left\Vert A_0^{\hhalf}K(t-s)A_0^{\hhalf} \left(\sum_{j=1}^d b_j(s)\partial_jv(s) + (c_0+c(s))v(s) \right) \right\Vert ds\\ \le & C\int^t_0 \Vert A_0^{\hhalf}K(t-s)\Vert \left\Vert A_0^{\hhalf} \left(\sum_{j=1}^d b_j(s)\partial_jv(s) + (c_0+c(s))v(s) \right) \right\Vert ds\\ \le& C\int^t_0 (t-s)^{\hhalf\alpha -1}\Vert v(s)\Vert_{H^2(\Omega)}ds = C\left( \Gamma\left(\hhalf\alpha\right)J^{\hhalf\alpha}\Vert v\Vert _{H^2(\Omega)}\right)(t). \end{align*} For derivation of this estimate, we employed the inequalities $$ \Vert A_0^{\hhalf}b_j(s)\partial_jv(t)\Vert \le C\Vert b_j(s)\partial_jv(s)\Vert_{H^1(\Omega)} \le C\Vert v(s)\Vert_{H^2(\Omega)} $$ and $\Vert (c(s)+c_0)v(s)\Vert_{H^1(\Omega)} \le C\Vert v(s)\Vert_{H^2(\Omega)}$ that are valid because of the inclusions $b_j \in C^1(\overline{\Omega}\times [0,T])$) and $c+c_0\in C([0,T];C^1(\overline{\Omega}))$.
Since $(J^{\hhalf\alpha}w_1)(t) \ge (J^{\hhalf\alpha}w_2)(t)$ if $w_1(t) \ge w_2(t)$ for $0\le t\le T$, and $J^{\hhalf\alpha}J^{\hhalf\alpha}w = J^{\alpha}w$ for $w_1, w_2, w \in L^2(0,T)$, we have \begin{align*} &\Vert R^2v(t)\Vert_{H^2(\Omega)} = \Vert R(Rv)(t)\Vert_{H^2(\Omega)}\\ \le & C\left( \Gamma\left(\hhalf\alpha\right)J^{\hhalf\alpha} \left(C\Gamma\left(\hhalf\alpha\right)J^{\hhalf\alpha} \Vert v\Vert_{H^2(\Omega)}\right) \right)(t)\\ = & \left( C\Gamma\left(\hhalf\alpha\right)\right)^2 (J^{\alpha}\Vert v\Vert_{H^2(\Omega)})(t). \end{align*} Repeating this argumentation $m$-times, we obtain \begin{align*} & \Vert R^mv(t)\Vert_{H^2(\Omega)} \le \left( C\Gamma\left(\hhalf\alpha\right)\right)^m \left( J^{\hhalf\alpha m}\Vert v\Vert_{H^2(\Omega)}\right)(t)\\ \le & \frac{\left( C\Gamma\left(\hhalf\alpha\right)\right)^m} {\Gamma\left( \hhalf\alpha m\right)} \int^t_0 (t-s)^{\frac{m}{2}\alpha -1} \Vert v(\xi)\Vert_{H^2(\Omega)}ds, \quad 0<t<T. \end{align*} Applying the Young inequality to the integral at the right-hand side of the last estimate, we arrive to the inequality \begin{align*} & \Vert R^mv(t)\Vert_{L^2(0,T; H^2(\Omega))}^2 \le \left( \frac{\left( C\Gamma\left(\hhalf\alpha\right) \right)^m} {\Gamma\left( \hhalf\alpha m\right)}\right)^2 \Vert t^{\frac{\alpha m}{2}-1}\Vert_{L^1(0,T)}^2 \Vert v\Vert_{L^2(0,T;H^2(\Omega))}^2\\ =& \frac{\left( CT^{\frac{\alpha}{2}} \Gamma\left(\hhalf\alpha \right)\right)^{2m}} {\Gamma\left( \hhalf\alpha m +1\right)^2} \Vert v\Vert_{L^2(0,T;H^2(\Omega))}^2. \end{align*} Employing the known asymptotic behavior of the gamma function, we obtain the relation $$ \lim_{m\to\infty} \frac{\left( CT^{\frac{\alpha}{2}} \Gamma\left(\hhalf\alpha \right)\right)^m} {\Gamma\left( \hhalf\alpha m +1\right)} = 0 $$ that means that for sufficiently large $m\in \mathbb{N}$, the mapping $$ R^m: L^2(0,T;H^2(\Omega))\, \longrightarrow \, L^2(0,T;H^2(\Omega)) $$ is a contraction. Hence, by the Banach fixed point theorem, the equation \eqref{(5.7)} possesses a unique fixed point. Therefore, by the first equation in \eqref{(2.3)}, we obtain the inclusion $\partial_t^{\alpha} (u-a) \in L^2(0,T;L^2(\Omega))$. Since $\Vert \eta\Vert_{H_{\alpha}(0,T)} \sim \Vert \partial_t^{\alpha} \eta\Vert_{L^2(0,T)}$ for $\eta \in H_{\alpha}(0,T)$ (\cite{KRY}), we finally obtain the estimate $$ \Vert u-a\Vert_{H_{\alpha}(0,T;L^2(\Omega))} + \Vert u\Vert_{L^2(0,T;H^2(\Omega))} \le C(\Vert a\Vert_{H^1(\Omega)} + \Vert F\Vert_{L^2(0,T;L^2(\Omega))}). $$ The proof of Theorem \ref{t2.1} is completed.
\end{proof}
\section{Key lemma} \label{sec3}
\setcounter{section}{3} \setcounter{equation}{0}
For derivation of the comparison principles for solutions to the initial-boundary value problems for the linear and semilinear time-fractional diffusion equations, we need some auxiliary results that are formulated and proved in this section.
In addition to the operator $-A_0$ defined by \eqref{(3.2)}, we define an elliptic operator $-A_1$ with a positive zeroth-order coefficient: \begin{equation}\label{(3.1a)} (-A_1(t)v)(x):= (-A_1v)(x) \end{equation} $$ := \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_jv(x)) + \sum_{j=1}^d b_j(x,t)\partial_jv - b_0(x,t)v, $$ where $b_0 \in C^1([0,T];C^1(\overline{\Omega})) \cap C([0,T];C^2(\overline{\Omega}))$, $b_0(x,t) > 0,\ (x,t)\in \overline{\Omega}\times [0,T]$, and $\min_{(x,t)\in \overline{\Omega}\times [0,T]} b_0(x,t)$ is sufficiently large.
We also recall that for $y\in W^{1,1}(0,T)$, the pointwise Caputo derivative $d_t^{\alpha}$ is defined by \begin{equation} \label{(4.1)} d_t^{\alpha} y(t) = \frac{1}{\Gamma(1-\alpha)} \int^t_0 (t-s)^{-\alpha}\frac{dy}{ds}(s) ds. \end{equation}
In what follows, we employ an extremum principle for the Caputo fractional derivative formulated below.
\begin{lemma}[\cite{Lu1}] \label{l4.1} Let the inclusions $y\in C[0,T]$ and $t^{1-\alpha}y' \in C[0,T]$ hold true.
If the function $y=y(t)$ attains its minimum over the interval $[0,T]$ at the point $t_0 \in (0, \,T]$, then $$ d_t^{\alpha} y(t_0) \le 0. $$ \end{lemma}
In Lemma \ref{l4.1}, the assumption $t_0>0$ is essential. This lemma was formulated and proved in \cite{Lu1} under a weaker regularity condition posed on the function $y$, but for our arguments we can assume that $y\in C[0,T]$ and $t^{1-\alpha}y' \in C[0,T]$.
Employing Lemma \ref{l4.1}, we now formulate and prove our key lemma that is a basis for further derivations in this paper.
\begin{lemma}[Positivity of a smooth solution] \label{l4.2} For $F\in L^2(0,T;L^2(\Omega))$ and $a\in H^1(\Omega)$, let $F(x,t) \ge 0,\ (x,t)\in \Omega\times (0,T)$, $a(x)\ge 0,\ x\in \Omega$, and $\min_{(x,t)\in \overline{\Omega}\times [0,T]} b_0(x,t)$ be a sufficiently large positive constant. Furthermore, we assume that there exists a solution $u\in C([0,T];C^2(\overline{\Omega}))$ to the initial-boundary value problem \begin{equation} \label{(4.2)} \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u-a) + A_1u = F(x,t), \quad x\in \Omega,\, 0<t<T, \\ & \ppp_{\nu_A} u + \sigma(x)u = 0 \quad \mbox{on } \partial\Omega\times (0,T),\\ & u(x,\cdot) - a(x) \in H_{\alpha}(0,T) \quad \mbox{for almost all } x\in \Omega \end{array} \right. \end{equation} and $u$ satisfies the condition $t^{1-\alpha}\partial_tu \in C([0,T];C(\overline{\Omega}))$.
Then the solution $u$ is non-negative: $$ u (x,t)\ge 0,\ (x,t)\in \Omega \times (0,T). $$ \end{lemma}
For the partial differential equations of parabolic type with the Robin boundary condition ($\alpha=1$ in \eqref{(4.2)}), a similar positivity property is well-known. However, it is worth mentioning that the regularity of the solution to the problem \eqref{(4.2)} at the point $t=0$ is a more delicate question compared to the one in the case $\alpha=1$. In particular, we cannot expect the inclusion $u(x,\cdot) \in C^1[0,T]$. This can be illustrated by a simple example of the equation $\partial_t^{\alpha} y(t) = y(t)$ with $y(t)-1 \in H_{\alpha}(0,T)$ whose unique solution $y(t) = E_{\alpha,1}(t^{\alpha})$ does not belong to the space $C^1[0,T]$.
\begin{proof} First we introduce an auxiliary function $\psi \in C^1([0,T];C^2(\overline{\Omega}))$ that satisfies the conditions \begin{equation} \label{(4.3)} \left\{ \begin{array}{rl} & A_1\psi(x,t) = 1, \quad (x,t) \in \Omega\times [0,T], \\ & \ppp_{\nu_A} \psi + \sigma \psi = 1 \quad \mbox{on } \partial\Omega\times [0,T]. \end{array}\right. \end{equation} Proving existence of such function $\psi$ is non-trivial. In this section, we focus on the proof of the lemma and then come back to the problem \eqref{(4.3)} in Appendix.
Now, choosing $M>0$ sufficiently large and $\varepsilon>0$ sufficiently small, we set \begin{equation} \label{(w_u)} w(x,t) := u(x,t) + \varepsilon(M + \psi(x,t) + t^{\alpha}), \quad x\in \Omega,\, 0<t<T. \end{equation}
For a fixed $x\in \Omega$, by the assumption on the regularity of $u$, we have the inclusion \begin{equation}\label{(3.6)} t^{1-\alpha}\partial_tu(x,\cdot) \in C[0,T]. \end{equation} Then, $\partial_tu(x,\cdot) \in L^1(0,T)$, that is, $u(x,\cdot) \in W^{1,1}(0,T)$. Moreover, \begin{equation}\label{(3.7)} u(x,0) - a(x) = 0, \quad x\in \Omega. \end{equation}
On the other hand, for $w\in H_{\alpha}(0,T) \cap W^{1,1}(0,T)$ and $w(0) = 0$, the equality $$ \partial_t^{\alpha} w = d_t^{\alpha} w = d_t^{\alpha} (w+c) $$ holds true with any constant $c$ (see, e.g., Theorem 2.4 of Chapter 2 in \cite{KRY}).
Since $u(x,\cdot) - a \in H_{\alpha}(0,T)$ and $u(x,\cdot) \in W^{1,1}(0,T)$, by \eqref{(3.7)}, the relations $\partial_t^{\alpha} (u-a) = d_t^{\alpha} (u-a) = d_t^{\alpha} u$ hold true for almost all $x\in \Omega$.
Furthermore, since $\varepsilon(M+\psi(\cdot,t)+t^{\alpha}) \in W^{1,1}(0,T)$, we obtain \begin{align*} & d_t^{\alpha} w = d_t^{\alpha} (u + \varepsilon(M+\psi(x,t)+t^{\alpha}))) = d_t^{\alpha} u + \varepsilond_t^{\alpha} (M+\psi(x,t)+t^{\alpha})\\ =& \partial_t^{\alpha} (u-a) + \varepsilon(d_t^{\alpha} (\psi + t^{\alpha})) = \partial_t^{\alpha} (u-a) + \varepsilon(d_t^{\alpha}\psi + \Gamma(\alpha+1)) \end{align*} and \begin{align*} & A_1w = A_1u + \varepsilon A_1\psi + \varepsilon A_1t^{\alpha} + \varepsilon A_1M\\ =& A_1u + \varepsilon + \varepsilon b_0(x,t)t^{\alpha} + b_0(x,t)\varepsilon M. \end{align*}
Now we choose a constant $M>0$ such that $M + \psi(x,t) \ge 0$ and $d_t^{\alpha} \psi(x,t) + b_0(x,t)M > 0$ for $(x,t) \in \overline{\Omega} \times [0,T]$, so that \begin{equation} \label{(4.4)} d_t^{\alpha} w + A_1w \end{equation} $$ = F + \varepsilon(\Gamma(\alpha+1) + d_t^{\alpha} \psi + 1 + b_0(x,t)t^{\alpha} + b_0(x,t) M) > 0 \quad \mbox{in } \Omega\times (0,T). $$ Moreover, because of the relation $\ppp_{\nu_A} w = \ppp_{\nu_A} u + \varepsilon \ppp_{\nu_A} \psi$, we obtain the following estimate: $$ \ppp_{\nu_A} w + \sigma w = \ppp_{\nu_A} u + \sigma u + \varepsilon + \sigma \varepsilon t^{\alpha} + \varepsilon\sigma M $$ \begin{equation} \label{(4.5)} \ge \varepsilon + \sigma\varepsilon t^{\alpha} + \varepsilon\sigma M \ge \varepsilon \quad \mbox{on } \partial\Omega\times (0,T). \end{equation} Evaluation of the representation \eqref{(w_u)} at the point $t=0$ immediately leads to the formula $$
w(x,0) = u(x,0) + \varepsilon(\psi(x,0) + M), \quad x\in \Omega. $$
Let us assume that the inequality $$ \min_{(x,t)\in \overline{\Omega}\times [0,T]} w(x,t) \ge 0 $$ does not hold valid, that is, there exists a point $(x_0,t_0) \in \overline{\Omega}\times [0,T]$ such that \begin{equation} \label{(4.7)} w(x_0,t_0):= \min_{(x,t)\in \overline{\Omega}\times [0,T]} w(x,t) < 0. \end{equation} Since $M>0$ is sufficiently large and $u(x,0)$ is non-negative, we obtain the inequality $$ w(x,0) = u(x,0) + \varepsilon(\psi(x,0) + M) \ge u(x,0) \ge 0, \quad x\in \overline{\Omega}, $$ and thus $t_0$ cannot be zero.
Next, we show that $x_0 \not\in \partial\Omega$. Indeed, let us assume that $x_0 \in \partial\Omega$. Then the estimate \eqref{(4.5)} yields that $\ppp_{\nu_A} w(x_0,t_0) + \sigma(x_0)w(x_0,t_0) \ge \varepsilon$. By \eqref{(4.7)} and $\sigma(x_0)\ge 0$, we obtain $$ \ppp_{\nu_A} w(x_0,t_0) \ge -\sigma(x_0)w(x_0,t_0) + \varepsilon \ge \varepsilon > 0, $$ which implies $$ \partial_{\nu_A}w(x_0,t_0) = \sum_{i,j=1}^d a_{ij}(x_0)\nu_j(x_0)\partial_iw(x_0,t_0) = \nabla w(x_0,t_0) \cdot \mathcal{A}(x_0)\nu(x_0) $$ \begin{equation} \label{(4.8)} = \sum_{i=1}^d (\partial_iw)(x_0,t_0)[\mathcal{A}(x_0)\nu(x_0)]_i > 0. \end{equation} Here $\mathcal{A}(x) = (a_{ij}(x))_{1\le i,j\le d}$ and $[b]_i$ means the $i$-th element of a vector $b$.
For sufficiently small $\varepsilon_0>0$ and $x_0\in \partial\Omega$, we now verify the inclusion \begin{equation} \label{(4.9)} x_0 - \varepsilon_0\mathcal{A}(x_0)\nu(x_0) \in \Omega. \end{equation} Indeed, since the matrix $\mathcal{A}(x_0)$ is positive-definite, the inequality $$ (\nu(x_0)\, \cdot \, -\varepsilon_0\mathcal{A}(x_0)\nu(x_0)) = -\varepsilon_0(\mathcal{A}(x_0)\nu(x_0)\, \cdot \, \nu(x_0)) < 0 $$ holds true. In other words, the inequality $$ \angle (\nu(x_0), \, (x_0 - \varepsilon_0\mathcal{A}(x_0)\nu(x_0)) - x_0) > \frac{\pi}{2} $$ is satisfied. Because the boundary $\partial\Omega$ is smooth, the domain $\Omega$ is locally located on one side of $\partial\Omega$. In a small neighborhood of the point $x_0\in \partial\Omega$, the boundary $\partial\Omega$ can be described in the local coordinates composed of its tangential component in $\mathbb{R}^{d-1}$ and the normal component along $\nu(x_0)$. Consequently, if $y \in \mathbb{R}^d$ satisfies the inequality $\angle (\nu(x_0), y-x_0) > \frac{\pi}{2}$, then $y\in \Omega$. Therefore, for a sufficiently small $\varepsilon_0>0$, the point $x_0-\varepsilon_0\mathcal{A}(x_0)\nu(x_0)$ is located in $\Omega$ and we have proved the inclusion \eqref{(4.9)}.
Moreover, for sufficiently small $\varepsilon_0>0$, we can prove that \begin{equation}\label{(3.11a)} w(x_0 - \varepsilon_0\mathcal{A}(x_0)\nu(x_0),\,t_0) < w(x_0,t_0). \end{equation}
Indeed, the inequality \eqref{(4.7)} yields $$ \sum_{i=1}^d (\partial_iw)(x_0-\eta\mathcal{A}(x_0)\nu(x_0),\,t_0) [\mathcal{A}(x_0)\nu(x_0)]_i > 0 \quad\mbox{if }\vert \eta\vert < \varepsilon_0. $$ Then, by the mean value theorem, we obtain the inequality \begin{align*} &w(x_0 - \xi\mathcal{A}(x_0)\nu(x_0),\,t_0) - w(x_0,t_0)\\ = & \xi\sum_{i=1}^d \partial_iw(x_0 - \theta\mathcal{A}(x_0)\nu(x_0),\,t_0) (-[\mathcal{A}(x_0)\nu(x_0)]_i) < 0, \end{align*} where $\theta$ is a number between $0$ and $\xi\in (0,\varepsilon_0)$. Thus, the inequality \eqref{(3.11a)} is verified.
By combining \eqref{(3.11a)} with \eqref{(4.9)}, we conclude that there exists a point $\widetilde{x_0} \in \Omega$ such that the inequality $w(\widetilde{x_0},t_0) < w(x_0,t_0)$ holds true, which contradicts the assumption \eqref{(4.7)}. Thus, we have proved that $x_0 \not\in \partial\Omega$.
According to \eqref{(4.7)}, the function $w$ attains its minimum at the point $(x_0,t_0)$. Because $0 < t_0 \le T$, Lemma \ref{l4.1} yields the inequality \begin{equation} \label{(4.10)} d_t^{\alpha} w(x_0,t_0) \le 0. \end{equation} Since $x_0 \in \Omega$, the necessary condition for an extremum point leads to the equality \begin{equation} \label{(4.11)} \nabla w(x_0,t_0) = 0. \end{equation} Moreover, because the function $w$ attains its minimum at the point $x_0 \in \Omega$, in view of the sign of the Hessian, the inequality \begin{equation} \label{(4.12)} \sum_{i,j=1}^d a_{ij}(x_0)\partial_i\partial_j w(x_0,t_0) \ge 0 \end{equation} holds true (see, e.g., the proof of Lemma 1 in Section 1 of Chapter 2 in \cite{Fr}).
The inequalities $b(x_0,t_0)>0$, $w(x_0,t_0) < 0$, and \eqref{(4.10)}-\eqref{(4.12)} lead to the estimate \begin{align*} & d_t^{\alpha} w(x_0,t_0) + A_1w(x_0,t_0)\\ = & d_t^{\alpha} w (x_0,t_0) - \sum_{i,j=1}^d a_{ij}(x_0)\partial_i\partial_jw(x_0,t_0) - \sum_{i=1}^d (\partial_ia_{ij})(x_0)\partial_jw(x_0,t_0)\\ -& \sum_{i=1}^d b_i(x_0,t_0)\partial_iw(x_0,t_0) + b(x_0,t_0)w(x_0,t_0) < 0, \end{align*} which contradicts the inequality \eqref{(4.4)}.
Thus, we have proved that $$ u(x,t) + \varepsilon(M+\psi(x,t)+t^{\alpha}) = w(x,t) \ge 0, \quad (x,t) \in \Omega\times (0,T). $$ Since $\varepsilon>0$ is arbitrary, we let $\varepsilon \downarrow 0$ to obtain the inequality $u(x,t) \ge 0$ for $(x,t) \in \Omega\times (0,T)$ and
the proof of Lemma \ref{l4.2} is completed. \end{proof}
Let us finally mention that the positivity of the function $b_0$ from the definition of the operator $-A_1$ is an essential condition for validity of our proof of Lemma \ref{l4.2}. However, in the next section, we remove this condition while deriving the comparison principles for the solutions to the initial-boundary value problem \eqref{(2.3)}.
\section{Comparison principles} \label{sec4}
\setcounter{section}{4} \setcounter{equation}{0}
According to the results formulated in Theorem \ref{t2.1}, in this section, we consider the solutions to the initial-boundary value problem \eqref{(2.3)} that belong to the following space of functions: \begin{equation}\label{(4.1a)} \mathcal{Y}_\alpha := \{ u; \, u-a\in H_{\alpha}(0,T;L^2(\Omega)), \, u\in L^2(0,T;H^2(\Omega))\}. \end{equation} In what follows, by $u(F,a)$ we denote the solution to the problem \eqref{(2.3)} with the initial data $a$ and the source function $F$.
Our first result concerning the comparison principles for the solutions to the initial-boundary value problems for the linear time-fractional diffusion equation is presented in the next theorem.
\begin{theorem} \label{t2.2} Let the functions $a \in H^1(\Omega)$ and $F \in L^2(\Omega\times (0,T))$ satisfy the inequalities $F(x,t) \ge 0,\ (x,t)\in \Omega\times (0,T)$ and $a(x) \ge 0,\ x\in \Omega$, respectively.
Then the solution $u(F,a) \in \mathcal{Y}_\alpha$ to the initial-boundary value problem \eqref{(2.3)} is non-negative, e.g., the inequality $$ u(F,a)(x,t) \ge 0,\ (x,t)\in \Omega\times (0,T) $$ holds true. \end{theorem}
Let us emphasize that the non-negativity of the solution $u$ to the problem \eqref{(2.3)} holds true for the space $ \mathcal{Y}_\alpha$ and thus $u$ does not necessarily satisfy the inclusions $u \in C([0,T];C^2(\overline{\Omega}))$ and $t^{1-\alpha}\partial_tu \in C([0,T]; C(\overline{\Omega}))$. Therefore, Theorem \ref{t2.2} is widely applicable. Before presenting its proof, let us discuss one of its corollaries in form of a comparison property:
\begin{corollary} \label{c2.1} Let $a_1, a_2 \in H^1(\Omega)$ and $F_1, F_2 \in L^2(\Omega\times (0,T))$ satisfy the inequalities $a_1(x) \ge a_2(x),\ x\in \Omega$ and $F_1(x,t) \ge F_2(x,t), \ (x,t)\in \Omega\times (0,T)$, respectively.
Then the inequality $$ u(F_1, a_1)(x,t) \ge u(F_2,a_2)(x,t), \ (x,t)\in \Omega\times (0,T) $$ holds true. \end{corollary}
\begin{proof} Setting $a:= a_1-a_2$, $F:= F_1 - F_2$ and $u:= u(F_1,a_1) - u(F_2,a_2)$, we immediately obtain the inequalities $a(x)\ge 0,\ x\in \Omega$ and $F(x,t)\ge 0, \ (x,t)\in \Omega \times (0,T)$ and $$ \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u-a) + Au = F \ge 0 \quad \mbox{in } \Omega \times (0,T), \\ & \ppp_{\nu_A} u + \sigma u = 0 \quad \mbox{on } \partial\Omega. \end{array}\right. $$ Therefore, Theorem \ref{t2.2} implies that $u(x,t)\ge 0, \ (x,t)\in \Omega\times (0,T)$, that is, $u(F_1, a_1)(x,t) \ge u(F_2,a_2)(x,t), \ (x,t)\in \Omega\times (0,T)$. \end{proof}
In its turn, Corollary \ref{c2.1} can be applied for derivation of the lower and upper bounds for the solutions to the initial-boundary value problem \eqref{(2.3)} by suitably choosing the initial values and the source functions. Let us demonstrate this technique on an example.
\begin{example} \label{ex1} Let the coefficients $a_{ij}, b_j$, $1\le i,j\le d$ of the operator $$ -Av(x) = \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_jv(x)) + \sum_{j=1}^d b_j(x,t)\partial_jv(x) $$ from the initial-boundary value problem \eqref{(2.3)} satisfy the conditions \eqref{(1.2)}. Now we consider the homogeneous initial condition $a(x)=0,\ x\in \Omega$ and assume that the source function $F \in L^2(0,T;L^2(\Omega))$ satisfies the inequality $$ F(x,t) \ge \delta t^{\beta}, \quad x\in \Omega,\, 0<t<T $$ with certain constants $\beta \ge 0$ and $\delta>0$.
Then the solution $u(F,0)$ can be estimated from below as follows: \begin{equation}\label{(4.2a)} u(F,0)(x,t) \ge \frac{\delta\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)} t^{\alpha+\beta}, \quad x\in \Omega, \, 0\le t \le T. \end{equation} Indeed, it is easy to verify that the function $$ \underline{u}(x,t):= \frac{\delta\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)} t^{\alpha+\beta}, \quad x\in \Omega, \, t>0 $$ is a solution to the following problem: $$ \left\{\begin{array}{rl} & \partial_t^{\alpha} \underline{u} + A\underline{u} = \delta t^{\beta} \quad \mbox{in $\Omega \times (0,T)$}, \\ & \partial_{\nu_A}\underline{u} = 0 \quad \mbox{on $\partial\Omega \times (0,T)$}, \\ & \underline{u}(x,\cdot) \in H_{\alpha}(0,T). \end{array}\right. $$ Due to the inequality $F(x,t) \ge \delta t^{\beta},\ (x,t) \in \Omega \times (0,T)$, we can apply Corollary \ref{c2.1} to the solutions $u$ and $\underline{u}$ and the inequality \eqref{(4.2a)} immediately follows.
In particular, for the spatial dimensions $d \le 3$, the Sobolev embedding theorem leads to the inclusion $u \in L^2(0,T;H^2(\Omega)) \subset L^2(0,T;C(\overline{\Omega}))$ and thus the strict inequality $u(F,0)(x,t) > 0$ holds true for almost all $t>0$ and all $x\in \overline{\Omega}$. \end{example}
Now we proceed to the proof of Theorem \ref{t2.2}.
\begin{proof} In the proof, we employ the operators $Qv(t)$ and $G(t)$ defined by \eqref{(5.8)}. In terms of these operators, the solution $u(t):= u(F,a)(t)$ to the initial-boundary problem \eqref{(2.3)} satisfies the integral equation \begin{equation} \label{(6.1)} u(F,a)(t) = G(t) + \int^t_0 K(t-s)Qu(s) ds, \quad 0<t<T. \end{equation}
For readers' convenience, we split the proof into three parts.
\noindent I. First part of the proof: existence of a smoother solution.
In the formulation of Lemma \ref{l4.2}, we assumed existence of a solution $u\in C([0,T];C^2(\overline{\Omega}))$ to the initial-boundary value problem \eqref{(4.2)} satisfying the inclusion $t^{1-\alpha}\partial_tu \in C([0,T];C(\overline{\Omega}))$. On the other hand, Theorem \ref{t2.1} asserts the unique existence of solution $u$ to the initial-boundary value problem \eqref{(2.3)} from the space $\mathcal{Y}_\alpha$, i.e., of the solution $u$ that satisfies the inclusions $u\in L^2(0,T;H^2(\Omega))$ and $u - a \in H_{\alpha}(0,T;L^2(\Omega))$.
In this part of the proof, we show that for $a \in C^{\infty}_0(\Omega)$ and $F \in C^{\infty}_0(\Omega\times (0,T))$, the solution to the problem \eqref{(2.3)} satisfies the regularity assumptions formulated in Lemma 3.
More precisely, we first prove the following lemma:
\begin{lemma} \label{l6.1} Let $a_{ij}$, $b_j$, $c$ satisfy the conditions \eqref{(1.2)} and the inclusions $a\in C^{\infty}_0(\Omega)$, $F \in C^{\infty}_0(\Omega\times (0,T))$ hold true.
Then the solution $u=u(F,a)$ to the problem \eqref{(2.3)} satisfies the inclusions $$ u \in C([0,T];C^2(\overline{\Omega})), \quad t^{1-\alpha}\partial_tu \in C([0,T];C(\overline{\Omega})) $$ and $\lim_{t\to 0} \Vert u(t) - a\Vert_{L^2(\Omega)} = 0$. \end{lemma}
\begin{proof} We recall that $c_0>0$ is a positive fixed constant and $$ -A_0v = \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_jv) - c_0v,\ \mathcal{D}(A_0) = \{ v \in H^2(\Omega);\, \ppp_{\nu_A} v + \sigma v = 0 \,\, \mbox{on } \partial\Omega\}. $$ Then $\mathcal{D}(A_0^{\hhalf}) = H^1(\Omega)$ and $\Vert A_0^{\hhalf}v\Vert \sim \Vert v\Vert_{H^1(\Omega)}$ (\cite{Fu}). Moreover, for the operators $S(t)$ and $K(t)$ defined by \eqref{(5.1)} and \eqref{(5.2)}, the estimates \eqref{(5.3)} hold true.
In what follows, we denote $ \frac{\partial u}{\partial t}(\cdot,t)$ by $u'(t) = \frac{du}{dt}(t)$ if there is no fear of confusion.
The solution $u$ to the integral equation \eqref{(6.1)} can be constructed as a fixed point of the equation \begin{equation} \label{(6.2)} A_0u(t) = A_0G(t) + \int^t_0 A_0^{\hhalf}K(t-s)A_0^{\hhalf}Qu(s) ds, \quad 0<t<T. \end{equation}
As already proved, this fixed point satisfies the inclusion $u\in L^2(0,T;H^2(\Omega))$ $ \cap $ $ (H_{\alpha}(0,T;L^2(\Omega)) + \{ a\}).$
Now we derive some estimates for the norms $\Vert A_0^{\kappa}u(t)\Vert$, $\kappa=1,2$ and $\Vert A_0u'(t)\Vert$ for $0<t<T$. First we set $$
D:= \sup_{0<t<T} (\Vert A_0F(t)\Vert + \Vert A_0F'(t)\Vert + \Vert A_0^2F(t)\Vert) + \Vert a\Vert_{H^4(\Omega)}. $$
Since $F\in C^{\infty}_0(\Omega\times (0,T))$, we obtain the inclusion $F\in L^{\infty}(0,T;\mathcal{D}(A_0^2))$ and the inequality $D < +\infty$. Moreover, in view of \eqref{(5.3)}, for $\kappa=1,2$, we get the estimates \begin{align*} & \left\Vert A_0^{\kappa}\int^t_0 K(t-s)F(s) ds \right\Vert \le C\int^t_0 \Vert K(t-s)\Vert \Vert A_0^{\kappa}F(s)\Vert ds\\ \le& C\left( \int^t_0 (t-s)^{\alpha-1}ds \right) \sup_{0<s<T} \Vert A_0^{\kappa}F(s)\Vert \le CD, \end{align*} \begin{align*} & \left\Vert A_0\frac{d}{dt}\int^t_0 K(t-s)F(s) ds \right\Vert = \left\Vert A_0\frac{d}{dt}\int^t_0 K(s)F(t-s) ds \right\Vert\\ =& \left\Vert A_0K(t)F(0) + A_0\int^t_0 K(s)F'(t-s) ds \right\Vert\\ \le& C\left\Vert A_0\int^t_0 K(s)F'(t-s) ds \right\Vert \le C \int^t_0 s^{\alpha-1} \Vert A_0F'(t-s) \Vert ds < CD. \end{align*} The regularity conditions \eqref{(1.2)} lead to the estimates $$ \Vert A_0^{\hhalf}Q(s)u(s)\Vert \le C\Vert Q(s)u(s)\Vert_{H^1(\Omega)} = C\left\Vert \sum_{j=1}^d b_j(s)\partial_ju(s) + (c_0+c(s))u(s) \right\Vert_{H^1(\Omega)} $$ \begin{equation} \label{(6.4)} \le C\Vert u(s)\Vert_{H^2(\Omega)} \le C\Vert A_0u(s)\Vert,\quad 0<s<T. \end{equation} Moreover, $$ \Vert A_0S(t)a\Vert = \Vert S(t)A_0a\Vert \le C\Vert a\Vert_{H^2(\Omega)} \le CD $$ by using the inequalities \eqref{(5.3)}. Then \begin{align*} & \Vert A_0u(t)\Vert \le CD + \int^t_0 \Vert A_0^{\hhalf}K(t-s)\Vert \Vert A_0^{\hhalf}Q(s)u(s)\Vert ds\\ \le& CD + C\int^t_0 (t-s)^{\hhalf\alpha -1}\Vert A_0u(s)\Vert ds, \quad 0<s<T. \end{align*} The generalized Gronwall inequality yields the estimate $$ \Vert A_0u(t)\Vert \le CD + C\int^t_0 (t-s)^{\hhalf\alpha -1}D ds \le CD, \quad 0<t<T, $$ which implies the inequality $$ \Vert A_0u\Vert_{L^{\infty}(0,T;H^2(\Omega))} \le CD. $$ Next, for the space $C([0,T]; L^2(\Omega))$, we can repeat the same arguments as the ones employed for the iterations $R^n$ of the operator $R$ in the proof of Theorem \ref{t2.1} and apply the fixed point theorem to the equation \eqref{(6.1)} that leads to the inclusion $A_0u \in C([0,T];L^2(\Omega))$. The obtained results implicate \begin{equation} \label{(6.5)} u\in C([0,T];H^2(\Omega)), \quad \Vert u\Vert_{C([0,T];H^2(\Omega))} \le CD. \end{equation} Choosing $\varepsilon_0 > 0$ sufficiently small, we have the equation \begin{equation} \label{(6.6)} A_0^{\frac{3}{2}}u(t) = A_0^{\frac{3}{2}}G(t) + \int^t_0 A_0^{\frac{3}{4}+\varepsilon_0} K(t-s)A_0^{\frac{3}{4}-\varepsilon_0}Q(s)u(s)ds, \quad 0<t<T. \end{equation}
Next, according to \cite{Fu}, the inclusion $$ \mathcal{D}(A_0^{\frac{3}{4}-\varepsilon_0}) \subset H^{\frac{3}{2}-2\varepsilon_0}(\Omega) $$ holds true. Now we proceed to the proof of the inclusion $Q(s)u(s) \in \mathcal{D}(A_0^{\frac{3}{4}-\varepsilon_0})$. By \eqref{(5.3)}, we obtain the inequality $$ \Vert A_0^{\frac{3}{2}}u(t)\Vert \le CD + \int^t_0 (t-s)^{(\frac{1}{4}-\varepsilon_0)\alpha-1} \Vert A_0^{\frac{3}{4}-\varepsilon_0}Q(s)u(s)\Vert ds, $$ which leads to the estimate \begin{equation} \label{(6.7a)} \Vert u(t)\Vert_{H^3(\Omega)} \le CD + \int^t_0 (t-s)^{(\frac{1}{4}-\varepsilon_0)\alpha-1} \Vert u(s)\Vert_{H^3(\Omega)} ds, \quad 0<t<T \end{equation} because of the inequality $$ \Vert A_0^{\frac{3}{4}-\varepsilon_0}Q(s)u(s)\Vert \le C\Vert Q(s)u(s)\Vert_{H^{\frac{3}{2}}(\Omega)} \le C\Vert Q(s)u(s)\Vert_{H^2(\Omega)} \le C\Vert u(s)\Vert_{H^3(\Omega)}, $$ which follows from the regularity conditions \eqref{(1.2)} posed on the coefficients $b_j, c$.
For $0<t<T$, the generalized Gronwall inequality applied to the integral inequality \eqref{(6.7a)} yields the estimate $$ \Vert u(t)\Vert_{H^3(\Omega)} \le C\left(1 + t^{\alpha\left(\frac{1}{4}-\varepsilon_0\right)} \right)D. $$
For the relation \eqref{(6.6)}, we repeat the same arguments as the ones employed in the proof of Theorem \ref{t2.1} to estimate $A_0^{\frac{3}{2}}u(t)$ in the norm $C([0,T];L^2(\Omega))$ by the fixed point theorem arguments and thus we obtain the inclusion
$A_0^{\frac{3}{2}}u \in C([0,T];L^2(\Omega))$.
Summarising the estimates derived above, we have shown that \begin{equation} \label{(6.7)} \left\{ \begin{array}{rl} & u \in C([0,T];\mathcal{D}(A_0^{\frac{3}{2}})) \subset C([0,T];H^3(\Omega)), \\ & \Vert u(t)\Vert_{H^3(\Omega)} \le C\left( 1 + t^{\alpha\left(\frac{1}{4}-\varepsilon_0\right)} \right)D, \quad 0<t<T. \end{array}\right. \end{equation}
Next we estimate the norm $\Vert Au'(t)\Vert$. First, $u'(t)$ is represented in the form \begin{align*} & u'(t) = G'(t) + \frac{d}{dt}\int^t_0 K(t-s)Q(s)u(s) ds\\ = & G'(t) + \frac{d}{dt}\int^t_0 K(s)Q(t-s)u(t-s) ds \, = \, G'(t) + K(t)Q(0)u(0) \\ + & \int^t_0 K(s) (Q(t-s)u'(t-s) + Q'(t-s)u(t-s)) ds, \quad 0<t<T, \end{align*} so that \begin{equation} \label{(6.8)} A_0u'(t) = A_0G'(t) + A_0K(t)Q(0)u(0) \end{equation} $$ + \int^t_0 A_0^{\hhalf}K(s) A_0^{\hhalf}(Q(t-s)u'(t-s) + Q'(t-s)u(t-s)) ds, \quad 0<t<T. $$ Similarly to the arguments applied for derivation of \eqref{(6.4)}, we obtain the inequality $$ \Vert A_0^{\hhalf}(Q(t-s)u'(t-s) + Q'(t-s)u(t-s))\Vert \le C\Vert A_0u'(t-s)\Vert, \quad 0<t<T. $$ The inclusion $Q(0)u(0) = Q(0)a \in C^2_0(\Omega) \subset \mathcal{D}(A_0)$ follows from the regularity conditions \eqref{(1.2)} and the inclusion $a \in C^{\infty}_0(\Omega)$. Furthermore, by \eqref{(5.2)} and \eqref{(5.3)}, we obtain $$ \Vert A_0S'(t)a\Vert = \Vert A_0^2K(t)a\Vert = \Vert K(t)A_0^2a\Vert \le Ct^{\alpha-1}\Vert A_0^2a\Vert \le Ct^{\alpha-1}\Vert a\Vert_{H^4(\Omega)} $$ and $$ \Vert K(t)A_0(Q(0)a)\Vert \le Ct^{1-\alpha}\Vert A_0(Q(0)a)\Vert \le Ct^{\alpha-1}\Vert a\Vert_{H^3(\Omega)}. $$ Hence, the representation \eqref{(6.8)} leads to the estimate $$ \Vert A_0u'(t)\Vert \le Ct^{\alpha-1}D + C\int^t_0 s^{\hhalf\alpha -1}\Vert A_0u'(t-s)\Vert ds, \quad 0<t<T. $$ Now we consider a vector space $$ \widetilde{X}:= \{v\in C([0,T];L^2(\Omega)) \cap C^1((0,T];L^2(\Omega));\, t^{1-\alpha}\partial_tv \in C([0,T];L^2(\Omega))\} $$ with the norm $$ \Vert v\Vert_{\widetilde{X}}:= \max_{0\le t\le T} \Vert t^{1-\alpha}\partial_tv (\cdot,t)\Vert_{L^2(\Omega)} + \max_{0\le t\le T} \Vert v(\cdot,t)\Vert_{L^2(\Omega)}. $$ It is easy to verify that $\widetilde{X}$ with the norm $\Vert v\Vert_{\widetilde{X}}$ defined above is a Banach space.
Arguing similarly to the proof of Theorem \ref{t2.1} and applying the fixed point theorem in the Banach space $\widetilde{X}$, we conclude that $A_0u \in \widetilde{X}$, that is, $t^{1-\alpha}A_0u' \in C([0,T];L^2(\Omega))$. Using the inclusion $\mathcal{D}(A_0) \subset C(\overline{\Omega})$ in the spatial dimensions $d=1,2,3$, the Sobolev embedding theorem yields \begin{equation} \label{(6.9)} u' \in C(\overline{\Omega} \times (0,T]), \quad \Vert A_0u'(t)\Vert \le CDt^{\alpha-1}, \quad 0\le t\le T. \end{equation}
Now we proceed to the estimation of $A_0^2u(t)$. Since $\frac{d}{ds}(-A_0^{-1}S(s)) = K(s)$ for $0<s<T$ by \eqref{(5.2)}, the integration by parts yields \begin{align*} & \int^t_0 K(t-s)Q(s)u(s) ds = \int^t_0 K(s)Q(t-s)u(t-s) ds \\ = & \left[ -A_0^{-1}S(s)Q(t-s)u(t-s)\right]^{s=t}_{s=0}\\ - & \int^t_0 A_0^{-1}S(s)(Q'(t-s)u(t-s)+Q(t-s)u'(t-s)) ds\\ = & A_0^{-1}Q(t)u(t) - A_0^{-1}S(t)Q(0)u(0) \end{align*} \begin{equation} \label{(6.10)} - \int^t_0 A_0^{-1}S(s)(Q'(t-s)u(t-s)+Q(t-s)u'(t-s)) ds, \quad 0<t<T. \end{equation} Applying the Lebesgue convergence theorem and the estimate $\vert E_{\alpha,1}(\eta)\vert \le \frac{C}{1+\eta},\ \eta>0$ (Theorem 1.6 in \cite{Po}), we readily reach $$ \Vert S(t)a - a\Vert^2 = \sum_{n=1}^{\infty} \vert (a,\varphi_n)\vert^2 (E_{\alpha,1}(-\lambda_nt^{\alpha}) - 1)^2 \, \longrightarrow\, 0 $$ as $t \to \infty$ for $a \in L^2(\Omega)$. \\ Hence, $u \in C([0,T];L^2(\Omega))$ and $\lim_{t\downarrow 0} \Vert (S(t)-1)a\Vert = 0$ and thus $$ \lim_{s\downarrow 0} S(s)Q(t-s)u(t-s) = S(0)Q(t)u(t) \quad \mbox{in } L^2(\Omega) $$ and $$ \lim_{s\uparrow t} S(s)Q(t-s)u(t-s) = S(t)Q(0)u(0) \quad \mbox{in }L^2(\Omega), $$ which justify the last equality in the formula \eqref{(6.10)}.
Thus, in terms of \eqref{(6.10)}, the representation \eqref{(5.7)} can be rewritten in the form $$ A_0^2(u(t) - A_0^{-1}Q(t)u(t)) = A_0^2G(t) -A_0S(t)Q(0)u(0) $$ \begin{equation} \label{(6.11)} - \int^t_0 A_0^{\hhalf}S(s)A_0^{\hhalf} (Q'(t-s)u(t-s) + Q(t-s)u'(t-s)) ds, \quad 0<t<T. \end{equation} Since $u(0) = a \in C^{\infty}_0(\Omega)$ and $F \in C^{\infty}_0(\Omega \times (0,T))$, in view of \eqref{(1.2)} we have the inclusions $$ A_0^2G(\cdot) \in C([0,T];L^2(\Omega)), \ A_0S(t)Q(0)u(0) = S(t)(A_0Q(0)a) \in C([0,T];L^2(\Omega)). $$ Now we use the conditions \eqref{(1.2)} and \eqref{(5.3)} and repeat the arguments employed for derivation of \eqref{(6.4)} by means of \eqref{(6.5)} and \eqref{(6.9)} to obtain the estimates \begin{align*} & \left\Vert \int^t_0 A_0^{\hhalf}S(s)A_0^{\hhalf} (Q'(t-s)u(t-s) + Q(t-s)u'(t-s)) ds \right\Vert\\ \le &C\int^t_0 s^{-\hhalf\alpha}\Vert Q'(t-s)u(t-s) + Q(t-s)u'(t-s) \Vert_{H^1(\Omega)} ds\\ \le& C\int^t_0 s^{-\hhalf\alpha}(\Vert A_0u'(t-s)\Vert + \Vert A_0u(t-s)\Vert) ds \le Ct^{\hhalf\alpha}D \end{align*} and the inclusion $$ -\int^t_0 A_0^{\hhalf}S(s)A_0^{\hhalf}(Q'(t-s)u(t-s) + Q(t-s)u'(t-s)) ds \in C([0,T];L^2(\Omega)). $$ Therefore, $$ A_0^2(u(t) - A_0^{-1}Q(t)u(t)) = A_0(A_0u(t) - Q(t)u(t)) \in C([0,T];L^2(\Omega)), $$ that is, $$ A_0u(t) - Q(t)u(t) \in C([0,T]; \mathcal{D}(A_0)) \subset C([0,T];H^2(\Omega)). $$ On the other hand, the estimate \eqref{(6.7)} implies $Q(t)u(t) \in C([0,T];H^2(\Omega))$ and we obtain \begin{equation} \label{(6.12a)} A_0u(t) \in C([0,T];H^2(\Omega)). \end{equation}
For further arguments, we define the Schauder spaces $C^{\theta}(\overline{\Omega})$ and $C^{2+\theta}(\overline{\Omega})$ with $0<\theta<1$ (see, e.g., \cite{GT}, \cite{LU}) as follows: A function $w$ is said to belong to the space $C^{\theta}(\overline{\Omega})$ if $$ \sup_{x, x'\in \Omega, \, x \ne x'} \frac{\vert w(x) - w(x')\vert}{\vert x-x'\vert^{\theta}} < \infty. $$ For $w \in C^{\theta}(\overline{\Omega})$, we define the norm $$ \Vert w\Vert_{C^{\theta}(\overline{\Omega})} := \Vert w\Vert_{C(\overline{\Omega})} + \sup_{x, x'\in \Omega, \, x \ne x'} \frac{\vert w(x) - w(x')\vert}{\vert x-x'\vert^{\theta}} $$ and for $w\in C^{2+\theta}(\overline{\Omega})$, the norm is given by $$ \Vert w\Vert_{C^{2+\theta}(\overline{\Omega})} := \Vert w\Vert_{C^2(\overline{\Omega})} + \sum_{\vert \tau\vert=2} \sup_{x, x'\in \Omega, \, x \ne x'} \frac{\vert \partial_x^{\tau}w(x) - \partial_x^{\tau}w(x')\vert} {\vert x-x'\vert^{\theta}}. $$
In the last formula, the notations
$\tau := (\tau_1, ..., \tau_d) \in (\mathbb{N} \cup \{0\})^d$, $\partial_x^{\tau}:= \partial_1^{\tau_1}\cdots \partial_d^{\tau_d}$, and $\vert \tau\vert:= \tau_1 + \cdots + \tau_d$ are employed.
For $d=1,2,3$, the Sobolev embedding theorem says that $H^2(\Omega) \subset C^{\theta}(\overline{\Omega})$ with some $\theta \in (0,1)$ (\cite{Ad}).
Therefore, in view of \eqref{(6.12a)}, we obtain the inclusion $h:= A_0u(\cdot,t) \in C^{\theta}(\overline{\Omega})$ for each $t \in [0,T]$. Now we apply the Schauder estimate (see, e.g., \cite{GT} or \cite{LU}) for solutions to the elliptic boundary value problem $$ A_0u(\cdot,t) = h\in C^{\theta}(\overline{\Omega}) \quad \mbox{in } \Omega $$ with the boundary condition $\ppp_{\nu_A} u(\cdot,t) + \sigma(\cdot)u(\cdot,t) = 0$ on $\partial\Omega$ to reach the inclusion $$
u \in C([0,T]; C^{2+\theta}(\overline{\Omega})). $$ This inclusion and \eqref{(6.9)} yield the conclusion $u \in C([0,T];C^2(\overline{\Omega}))$ and $t^{1-\alpha}\partial_tu \in C([0,T];C(\overline{\Omega}))$ of the lemma.
Finally we prove that $\lim_{t\to 0} \Vert u(t) - a \Vert = 0$. By \eqref{(5.3)}, we have \begin{align*} & \left\Vert \int^t_0 K(t-s)h(s) ds\right\Vert \le \int^t_0 \Vert K(t-s)h(s) \Vert ds \le C\int^t_0 (t-s)^{\alpha-1} \Vert h(s)\Vert ds\\ \le & \frac{Ct^{\alpha}}{\alpha}\Vert h\Vert_{L^{\infty}(0,T;L^2(\Omega))}, \end{align*} and so \begin{equation}\label{(6.13)} \lim_{t\to 0} \int^t_0 K(t-s)h(s) ds = 0 \quad \mbox{in $L^2(\Omega)$} \end{equation} for each $h \in L^{\infty}(0,T;L^2(\Omega))$. Therefore by the regularity $u \in C([0,T];C^2(\overline{\Omega}))$, we see that $$ \lim_{t\to 0} \left( \int^t_0 K(t-s)F(s) ds + Ru(t) \right) = 0 \quad \mbox{in $L^2(\Omega)$}, $$ where $R$ is defined in \eqref{(5.8)}. Moreover, for justifying \eqref{(6.10)}, we have already proved $\lim_{t\to 0} \Vert S(t)a - a\Vert = 0$ for $a \in L^2(\Omega)$. Thus the proof of Lemma \ref{l6.1} is complete. \end{proof}
\noindent II. Second part of the proof.
In this part, we weaken the regularity conditions posed on the solution $u$ to \eqref{(4.2)} in Lemma \ref{l4.2} and prove the same results provided that $u\in L^2(0,T;H^2(\Omega))$ and $u-a \in H_{\alpha}(0,T;L^2(\Omega))$, under the assumption that $\min\limits_{(x,t)\in \overline{\Omega}\times [0,T]} b_0(x,t) > 0$ is sufficiently large.
Let $F \in L^2(0,T;L^2(\Omega))$ and $a\in H^1(\Omega)$ satisfy the inequalities
$F(x,t)\ge 0,\ (x,t)\in \Omega\times (0,T)$ and $a(x)\ge 0,\ x\in \Omega$.
Now we apply the standard mollification procedure (see, e.g., \cite{Ad}) and construct the sequences $F_n \in C^{\infty}_0(\Omega\times (0,T))$ and $a_n \in C^{\infty}_0(\Omega)$, $n\in \mathbb{N}$ such that $F_n(x,t)\ge 0,\ (x,t)\in \Omega\times (0,T)$ and $a_n(x)\ge 0,\ x\in \Omega$, $n\in \mathbb{N}$ and $\lim_{n\to\infty} \Vert F_n-F\Vert_{L^2(0,T;L^2(\Omega))} = 0$ and $\lim_{n\to\infty}\Vert a_n-a\Vert_{H^1(\Omega)} = 0$. Then Lemma \ref{l6.1} yields the inclusion $$ u(F_n,a_n) \in C([0,T];C^2(\overline{\Omega})), \quad t^{1-\alpha}\partial_tu(F_n,a_n) \in C([0,T];C(\overline{\Omega})), \quad n\in \mathbb{N} $$ and thus Lemma \ref{l4.2} ensures the inequalities \begin{equation} \label{(6.14)} u(F_n,a_n)(x,t) \ge 0 ,\ \ (x,t)\in \Omega\times (0,T), \, n\in \mathbb{N}. \end{equation} Since Theorem \ref{t2.1} holds true for the initial-boundary value problem \eqref{(4.2)} with $F$ and $a$ replaced by $F-F_n$ and $a-a_n$, respectively, we have $$ \Vert u(F,a) - u(F_n,a_n)\Vert_{L^2(0,T; H^2(\Omega))} $$ $$ \le C(\Vert a-a_n\Vert_{H^1(\Omega)} + \Vert F-F_n\Vert_{L^2(0,T;L^2(\Omega))}) \, \to \, 0 $$ as $n\to \infty$. Therefore, we can choose a subsequence $m(n)\in \mathbb{N}$ such that $u(F,a)(x,t) = \lim_{m(n)\to \infty} u(F_{m(n)},a_{m(n)})(x,t)$ for almost all $(x,t) \in \Omega\times (0,T)$. Then the inequality \eqref{(6.14)} leads to the desired result, namely, to the inequality $u(F,a)(x,t) \ge 0$ for almost all $(x,t) \in \Omega\times (0,T)$.
\noindent III. Third part of the proof.
Let the inequalities $a(x)\ge 0,\ x\in \Omega$ and $F(x,t)\ge 0,\ (x,t)\in \Omega\times (0,T)$ hold true for $a\in H^1(\Omega)$ and $F \in L^2(0,T;L^2(\Omega))$ and let $u=u(F,a) \in L^2(0,T;H^2(\Omega))$ is a solution to the problem \eqref{(2.3)}. In order to complete the proof of Theorem 2, we have to demonstrate the non-negativity of the solution without any assumptions on the sign of the zeroth-order coefficient.
First, the zeroth-order coefficient $b_0(x,t)$ in the definition \eqref{(3.1a)} of the operator $-A_1$ is set to a constant $b_0>0$ that is assumed to be sufficiently large. In this case, the initial-boundary value problem \eqref{(2.3)} can be rewritten as follows: \begin{equation} \label{(6.15)} \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u-a) + A_1u = (b_0+c(x,t))u + F(x,t), \quad (x,t) \in \Omega\times (0,T), \\ & \ppp_{\nu_A} u + \sigma u = 0 \quad \mbox{on $\partial\Omega\times (0,T)$}. \end{array}\right. \end{equation} In what follows, we choose sufficiently large $b_0>0$ such that $b_0 \ge \Vert c\Vert_{C(\overline{\Omega} \times [0,T])}$.
In the previous parts of the proof, we already interpreted the solution $u$ as a unique fixed point for the equation \eqref{(6.1)}. Now let us construct an appropriate approximating sequence $u_n$, $n\in \mathbb{N}$ for the fixed point $u$. First we set $u_0(x,t) := 0$ for $(x,t) \in \Omega\times (0,T)$ and $u_1(x,t) = a(x) \ge 0, \ (x,t) \in \Omega\times (0,T)$. Then we define a sequence $u_{n+1},\ n\in \mathbb{N}$ of solutions to the following initial-boundary value problems with the given $u_n$: \begin{equation} \label{(6.16)} \left\{ \begin{array}{rl} &\partial_t^{\alpha} (u_{n+1}-a) + A_1u_{n+1} = (b_0+c(x,t))u_n + F(x,t) \quad \mbox{in } \Omega\times (0,T),\\ & \ppp_{\nu_A} u_{n+1} + \sigma u_{n+1} = 0 \quad \mbox{on } \partial\Omega\times (0,T),\\ & u_{n+1} - a \in H_{\alpha}(0,T;L^2(\Omega)), \quad n\in \mathbb{N}. \end{array}\right. \end{equation}
First we show that \begin{equation} \label{(6.17)} u_n(x,t) \ge 0, \quad (x,t) \in \Omega\times (0,T), \quad n\in \mathbb{N}. \end{equation} Indeed, the inequality \eqref{(6.17)} holds for $n=1$. Now we assume that $u_n(x,t) \ge 0,\ (x,t)\in \Omega\times (0,T)$. Then $(b_0+c(x,t))u_n(x,t) + F(x,t) \ge 0,\ (x,t)\in \Omega \times (0,T)$, and thus by the results established in the second part of the proof of Theorem \ref{t2.2}, we obtain the inequality $u_{n+1}(x,t) \ge 0,\ (x,t)\in \Omega \times (0,T)$. By the principle of mathematical induction, the inequality \eqref{(6.17)} holds true for all $n\in \mathbb{N}$.
Now we rewrite the problem \eqref{(6.16)} as $$ \partial_t^{\alpha} (u_{n+1}(t) - a) + A_0u_{n+1}(t) = (Q(t)u_{n+1}-(c(t)+b_0)u_{n+1}) + (b_0+c(t))u_n + F, $$ where $A_0$ and $Q(t)$ are defined by \eqref{(3.2)} and \eqref{(5.8)}, respectively. Next we estimate $w_{n+1}:= u_{n+1} - u_n$. By the relation \eqref{(6.16)}, $w_{n+1}$ is a solution to the problem $$ \left\{ \begin{array}{rl} &\partial_t^{\alpha} w_{n+1} + A_0w_{n+1} = (Q(t)w_{n+1} - (c(t)+b_0)w_{n+1}) + (b_0+c(x,t))w_n \\ & \qquad \qquad \quad \mbox{in } \Omega\times (0,T),\\ & \ppp_{\nu_A} w_{n+1} + \sigma w_{n+1} = 0 \quad \mbox{on } \partial\Omega\times (0,T),\\ & w_{n+1} \in H_{\alpha}(0,T;L^2(\Omega)), \quad n\in \mathbb{N}. \end{array}\right. $$ In terms of the operator $K(t)$ defined by \eqref{(5.2)}, acting similarly to our analysis of the fixed point equation \eqref{(6.1)}, we obtain the integral equation \begin{align*} & w_{n+1}(t) = \int^t_0 K(t-s)(Qw_{n+1})(s) ds - \int^t_0 K(t-s)(c(s)+b_0)w_{n+1}(s) ds\\ + & \int^t_0 K(t-s)(b_0+c(s))w_n(s) ds, \quad 0<t<T, \end{align*} which leads to the inequalities \begin{align*} & \Vert A_0^{\hhalf}w_{n+1}(t)\Vert \le \int^t_0 \Vert A_0^{\hhalf}K(t-s)\Vert \Vert Q(s)w_{n+1}(s)\Vert ds\\ +& \int^t_0 \Vert A_0^{\frac{1}{2}}K(t-s)\Vert \Vert (c(s)+b_0)w_{n+1}(s)\Vert ds + \int^t_0 \Vert A_0^{\hhalf}K(t-s)\Vert \Vert (b_0+c(s))w_n(s)\Vert ds\\ \le& C\int^t_0 (t-s)^{\hhalf\alpha-1} \Vert A_0^{\hhalf}w_{n+1}(s)\Vert ds + C\int^t_0 (t-s)^{\hhalf\alpha-1}\Vert A_0^{\hhalf}w_n(s)\Vert ds \quad \mbox{for $0<t<T$.} \end{align*} For their derivation, we used the norm estimates $$ \Vert Q(s)w_{n+1}(s)\Vert \le C\Vert w_{n+1}(s)\Vert_{H^1(\Omega)} \le C\Vert A_0^{\hhalf}w_{n+1}(s)\Vert $$ and $$ \Vert (c(s)+b_0)w_\ell(s)\Vert \le C\Vert w_\ell(s)\Vert_{H^1(\Omega)} \le C\Vert A_0^{\hhalf}w_\ell(s)\Vert, \quad \ell=n, n+1 $$ that hold true under the conditions \eqref{(1.2)}. Thus we arrive at the integral inequality \begin{align*} & \Vert A_0^{\hhalf}w_{n+1}(t)\Vert \le C\int^t_0 (t-s)^{\frac{1}{2}\alpha-1} \Vert A_0^{\hhalf}w_{n+1}(s)\Vert ds\\ + & C\int^t_0 (t-s)^{\frac{1}{2}\alpha-1} \Vert A_0^{\hhalf}w_n(s)\Vert ds, \quad 0<t<T. \end{align*} The generalized Gronwall inequality yields now the estimate \begin{align*} &\Vert \Ahalf w_{n+1}(t)\Vert \le C\int^t_0 (t-s)^{\hhalf\alpha-1}\Vert \Ahalf w_n(s)\Vert ds\\ +& C\int^t_0 (t-s)^{\hhalf\alpha-1} \left( \int^s_0 (s-\xi)^{\hhalf\alpha-1} \Vert \Ahalf w_n(\xi)\Vert d\xi\right)ds. \end{align*} The second term at the right-hand side of the last inequality can be represented as follows: \begin{align*} & \int^t_0 (t-s)^{\hhalf\alpha-1} \left( \int^s_0 (s-\xi)^{\hhalf\alpha-1}\Vert \Ahalf w_n(\xi)\Vert d\xi\right) ds\\ =& \int^t_0 \Vert \Ahalf w_n(\xi)\Vert \left( \int^t_{\xi} (t-s)^{\hhalf\alpha-1} (s-\xi)^{\hhalf\alpha-1} ds \right) d\xi\\ =& \frac{\Gamma\left( \hhalf\alpha\right)\Gamma\left( \hhalf\alpha\right)} {\Gamma(\alpha)}\int^t_0 (t-\xi)^{\alpha-1}\Vert \Ahalf w_n(\xi)\Vert d\xi\\ =& \frac{\Gamma\left( \hhalf\alpha\right)^2}{\Gamma(\alpha)}T^{\hhalf\alpha} \int^t_0 (t-s)^{\hhalf\alpha-1}\Vert \Ahalf w_n(s)\Vert ds. \end{align*} Thus, we can choose a constant $C>0$ depending on $\alpha$ and $T$, such that \begin{equation} \label{(ineq1)} \Vert \Ahalf w_{n+1}(t)\Vert \le C\int^t_0 (t-\xi)^{\hhalf\alpha-1}\Vert \Ahalf w_n(s)\Vert ds, \quad 0<t<T,\, n\in \mathbb{N}. \end{equation} Recalling that $$ \int^t_0 (t-s)^{\hhalf\alpha -1}\eta(s)ds = \Gamma\left( \hhalf\alpha\right) (J^{\hhalf\alpha}\eta)(t), \quad t>0, $$ and setting $\eta_n(t):= \Vert \Ahalf w_n(t)\Vert$, we can rewrite \eqref{(ineq1)} in the form \begin{equation} \label{(6.18)} \eta_{n+1}(t) \le C\Gamma\left( \hhalf\alpha\right)(J^{\hhalf\alpha}\eta_n)(t), \quad 0<t<T, \, n\in \mathbb{N}. \end{equation}
Since the Riemann-Liouville integral $J^{\hhalf\alpha}$ preserves the sign and the semi-group property $J^{\beta_1}(J^{\beta_2}\eta)(t) = J^{\beta_1+\beta_2}\eta(t)$ is valid for any $\beta_1, \beta_2 > 0$, applying the inequality \eqref{(6.18)} repeatedly, we obtain the estimates \begin{align*} & \eta_n(t) \le \left(C\Gamma\left( \hhalf\alpha\right)\right)^{n-1} (J^{(n-1)\frac{\alpha}{2}}\eta_1)(t) \\ = & \frac{\left(C\Gamma\left( \hhalf\alpha\right)\right)^{n-1}} {\Gamma\left( \frac{\alpha}{2}(n-1)\right)} \left( \int^t_0 (t-s)^{(n-1)\hhalf\alpha-1} ds\right) \Vert \Ahalf a\Vert\\ = & \frac{\left(C\Gamma\left( \hhalf\alpha\right)\right)^{n-1}} {\Gamma\left( \frac{\alpha}{2}(n-1)\right)} \frac{t^{(n-1)\hhalf\alpha}}{(n-1)\hhalf\alpha} \Vert \Ahalf a\Vert \le C_1\frac{\left(C\Gamma\left( \hhalf\alpha\right) T^{\frac{\alpha}{2}} \right)^{n-1}} {\Gamma\left( \frac{\alpha}{2}(n-1)\right)}. \end{align*} The known asymptotic behavior of the gamma function justifies the relation $$ \lim_{n\to \infty} \frac{\left(C\Gamma\left( \hhalf\alpha\right) T^{\frac{\alpha}{2}} \right)^{n-1}} {\Gamma\left( \frac{\alpha}{2}(n-1)\right)} = 0. $$ Thus we have proved that the sequence $u_N = w_0 + \cdots + w_N$ converges to the solution $u$ in $L^{\infty}(0,T;H^1(\Omega))$ as $N \to \infty$. Therefore, we can choose a subsequence $m(n)\in \mathbb{N}$ such that $\lim_{m(n)\to\infty} u_{m(n)}(x,t) = u(x,t)$ for almost all $(x,t) \in \Omega \times (0,T)$. This statement in combination with the inequality \eqref{(6.17)} means that $u(x,t) \ge 0$ for almost all $(x,t) \in \Omega \times (0,T)$. The proof of Theorem \ref{t2.2} is completed. \end{proof}
Now let us fix a source function $F = F(x,t) \ge 0,\ (x,t)\in \Omega \times (0,T)$ and an initial value $a \in H^1(\Omega)$ in the initial-boundary value problem \eqref{(2.3)} and denote
by $u(c,\sigma) = u(c,\sigma)(x,t)$ the solution to the problem \eqref{(2.3)} with the functions $c=c(x,t)$ and $\sigma = \sigma(x)$. Then the following comparison property regarding the coefficients $c$ and $\sigma$ is valid:
\begin{theorem} \label{t2.3} Let $a\in H^1(\Omega)$ and $F \in L^2(\Omega\times (0,T))$ and the inequalities $a(x)\ge 0,\ x\in \Omega$ and $F(x,t)\ge 0,\ (x,t)\in \Omega \times (0,T)$ hold true. \\ (i) Let $c_1, c_2 \in C^1([0,T]; C^1(\overline{\Omega})) \cap C([0,T];C^2(\overline{\Omega}))$ and $c_1(x,t) \ge c_2(x,t)$ for $(x,t)\in \Omega$. Then $u(c_1,\sigma)(x,t) \ge u(c_2,\sigma)(x,t)$ in $\Omega \times (0,T)$. \\ (ii) Let $c(x,t) < 0,\ (x,t) \in \Omega \times (0,T)$ and a constant $\sigma_0>0$ be arbitrary and fixed. If the smooth functions $\sigma_1, \sigma_2$ on $\partial\Omega$ satisfy the conditions $$ \sigma_2(x) \ge \sigma_1(x) \ge \sigma_0,\ x\in \partial\Omega, $$ then the inequality $u(c,\sigma_1) \ge u(c, \sigma_2),\ x\in \Omega \times (0,T)$ holds true. \end{theorem}
\begin{proof} We start with a proof of the statement (i). Because $a(x)\ge 0,\ x\in\Omega$ and $F(x,t)\ge 0,\ (x,t)\in \Omega \times (0,T)$, Theorem \ref{t2.2} yields the inequality $u(c_2,\sigma)(x,t)\ge 0,\ (x,t)\in \Omega \times (0,T)$. Setting $u(x,t):= u(c_1,\sigma)(x,t) - u(c_2,\sigma)(x,t)$ for $(x,t) \in \Omega \times (0,T)$, we obtain $$ \left\{ \begin{array}{rl} & \partial_t^{\alpha} u - \sum_{i,j=1}^d \partial_i(a_{ij}\partial_ju) - \sum_{j=1}^d b_j \partial_ju\\ -& c_1(x,t)u = (c_1-c_2)u(c_2,\sigma)(x,t) \quad \mbox{in }\Omega \times (0,T), \\ & \ppp_{\nu_A} u + \sigma u = 0 \quad \mbox{on } \partial\Omega,\\ & u \in H_{\alpha}(0,T;L^2(\Omega)). \end{array}\right. $$ Since $u(c_2,\sigma)(x,t) \ge 0$ and $(c_1-c_2)(x,t) \ge 0$ for $(x,t) \in \Omega \times (0,T)$,
Theorem \ref{t2.2} leads to the estimate $u(x,t)\ge 0$ for $(x,t) \in \Omega \times (0,T)$, which is equivalent to the inequality $u(c_1,\sigma)(x,t) \ge u(c_2,\sigma)(x,t)$ for $(x,t) \in \Omega \times (0,T)$ and the statement (i) is proved.
Now we proceed to the proof of the statement (ii). Similarly to the procedure applied for the second part of the proof of Theorem \ref{t2.2}, we choose the sequences $F_n \ge 0$, $F_n \in C^{\infty}_0(\Omega \times (0,T))$ and $a_n \ge 0$, $a_n \in C^{\infty}_0(\Omega)$, $n\in \mathbb{N}$ such that $F_n \to F$ in $L^2(\Omega \times (0,T))$ and $a_n \to a$ in $H^1(\Omega)$. Let $u_n$, $v_n$ be the solutions to the initial-boundary value problem \eqref{(2.3)} with $F=F_n$, $a=a_n$ and with the coefficients $\sigma_1$ and $\sigma_2$ in the boundary condition, respectively. According to Lemma \ref{l6.1}, the inclusions $v_n, u_n \in C(\overline{\Omega} \times [0,T])$ and $t^{1-\alpha}\partial_tv_n, \, t^{1-\alpha}\partial_tu_n \in C([0,T];C(\overline{\Omega}))$, $n\in \mathbb{N}$ hold true and thus Theorem \ref{t2.2} yields \begin{equation}\label{(4.22a)} v_n(x,t) \ge 0, \quad (x,t) \in \partial\Omega\times (0,T). \end{equation}
Moreover, the relation \begin{equation} \label{(6.19)} \lim_{n\to\infty}\Vert u_n - u(c,\sigma_1)\Vert_{L^2(0,T;L^2(\Omega))} = \lim_{n\to\infty}\Vert v_n - u(c,\sigma_2)\Vert_{L^2(0,T;L^2(\Omega))} = 0 \end{equation} follows from Theorem \ref{t2.1}. Let us now define an auxiliary function $w_n:= u_n - v_n$. For this function, the inclusions \begin{equation}\label{(4.23)} t^{1-\alpha}\partial_tw_n \in C([0,T];C(\overline{\Omega})), \quad w_n \in C([0,T];C^2(\overline{\Omega})), \quad n\in \mathbb{N} \end{equation} hold true. Furthermore, it is a solution to the initial-boundary value problem \begin{equation}\label{(4.24)} \left\{ \begin{array}{rl} & \partial_t^{\alpha} w_n + Aw_n = 0 \quad \mbox{in } \Omega \times (0,T),\\ & \ppp_{\nu_A} w_n + \sigma_1w_n = (\sigma_2-\sigma_1)v_n
\quad \mbox{on } \partial\Omega\times (0,T),\\ & w_n(x,\cdot) \in H_{\alpha}(0,T) \quad \mbox{for almost all } x\in \Omega. \end{array}\right. \end{equation} The inequalities \eqref{(4.22a)} and $\sigma_2(x) \ge \sigma_1(x), \ x\in \partial\Omega$ lead to the estimate \begin{equation}\label{(4.23a)} \ppp_{\nu_A} w_n + \sigma_1w_n \ge 0 \quad \mbox{on $\partial\Omega\times (0,T)$}. \end{equation}
To finalize the proof of the theorem, a variant of Lemma \ref{l4.2} formulated below will be employed.
\begin{lemma} \label{l4.2a} Let the elliptic operator $-A$ be defined by \eqref{(2.1)} and the conditions
\eqref{(1.2)} be satisfied. Moreover, let the inequality $c(x,t) < 0$ for $x \in \overline{\Omega}$ and $0\le t \le T$ hold true and there exist a constant $\sigma_0>0$ such that $$ \sigma(x) \ge \sigma_0 \quad \mbox{for all $x\in \partial\Omega$}. $$ For $a \in H^1(\Omega)$ and $F\in L^2(\Omega\times (0,T))$, we further assume that there exists a solution $u\in C([0,T];C^2(\overline{\Omega}))$
to the initial-boundary value problem
$$ \left\{ \begin{array}{rl} & \partial_t^{\alpha} (u-a) + Au = F \quad \mbox{in $\Omega\times (0,T)$}, \\ & \partial_{\nu_A}u + \sigma(x)u \ge 0 \quad \mbox{on $\partial\Omega \times (0,T)$}, \\ & u(x,\cdot) - a\in H_{\alpha}(0,T) \quad \mbox{for almost all $x\in \Omega$} \end{array}\right. $$ that satisfies the inclusion $t^{1-\alpha}\partial_tu \in C([0,T];C(\overline{\Omega}))$.
Then the inequalities $F(x,t) \ge 0,\ (x,t)\in \Omega \times (0,T)$ and $a(x)\ge 0, \ \Omega$ implicate the inequality $u(x,t) \ge 0,\ (x,t)\in \Omega\times (0,T)$. \end{lemma}
In the formulation of this lemma, at the expense of the extra condition $\sigma(x) > 0$ on $\partial\Omega$, we do not assume that $\min\limits_{(x,t)\in \overline{\Omega}\times [0,T]} (-c(x,t))$ is sufficiently large. This is the main difference between the conditions supposed in Lemma \ref{l4.2a} and in Lemma \ref{l4.2}. The proof of Lemma \ref{l4.2a} is much simpler compared to the one of Lemma \ref{l4.2}; it will be presented at the end of this section.
Now we complete the proof of Theorem \ref{t2.3}. Since $c(x,t) < 0$ for $(x,t)\in \Omega \times (0,T)$ and $\sigma_1(x) \ge \sigma_0 > 0$ on $\partial\Omega$ and taking into account the conditions \eqref{(4.23)} and \eqref{(4.23a)}, we can apply Lemma \ref{l4.2a} to the initial-boundary value problem \eqref{(4.24)} and deduce the inequality $w_n(x,t) \ge 0,\ (x,t)\in \Omega \times (0,T)$, that is, $u_n(x,t) \ge v_n(x,t),\ (x,t)\in \Omega \times (0,T)$ for $n\in \mathbb{N}$. Due to the relation \eqref{(6.19)}, we can choose a suitable subsequence of $w_n,\ n\in \mathbb{N}$ and pass to the limit as $n$ tends to infinity thus arriving at the inequality $u(c,\sigma_1)(x,t) \ge u(c,\sigma_2)(x,t)$ in $\Omega \times (0,T)$. The proof of Theorem \ref{t2.3} is completed. \end{proof}
At this point, let us mention a direction for further research in connection with the results formulated and proved in this sections. In order to remove the negativity condition posed on the coefficient $c=c(x,t)$ in Theorem \ref{t2.3} (ii), one needs a unique existence result for solutions to the initial-boundary value problems of type \eqref{(2.3)} with non-zero Robin boundary condition similar to the one formulated in Theorem \ref{t2.1}. There are several works that treat the case of the initial-boundary value problems with non-homogeneous Dirichlet boundary conditions (see, e.g., \cite{Ya18} and the references therein). However, to the best of the authors' knowledge, analogous results are not available for the initial-boundary value problems with the non-homogeneous Neumann or Robin boundary conditions. Thus, in Theorem \ref{t2.3} (ii), we assumed the condition $c(x,t)<0,\ (x,t)\in \Omega \times (0,T)$, although our conjecture is that this result holds true for an arbitrary coefficient $c=c(x,t)$.
We conclude this section with a proof of Lemma \ref{l4.2a} that is simple because in this case we do not need the function $\psi$ defined as in \eqref{(4.3)}.
\begin{proof} First we introduce an auxiliary function as follows: $$ \widetilde{w}(x,t):= u(x,t) + \varepsilon(1+t^{\alpha}), \quad x\in \Omega,\, 0<t<T. $$
The inequalities $c(x,t)<0,\ (x,t)\in \overline{\Omega} \times [0,T]$ and $\sigma(x) \ge \sigma_0>0,\ x\in \partial\Omega$ and the calculations similar to the ones done in the proof of Lemma \ref{l4.2} implicate the inequalities $$ d_t^{\alpha} \widetilde{w} + A\widetilde{w} = F + \varepsilon\Gamma(\alpha+1) - c(x,t)\varepsilon(1+t^{\alpha}) > 0 \quad \mbox{in $\Omega\times (0,T)$}, $$ $$ \partial_{\nu_A}\widetilde{w} + \sigma \widetilde{w} = \partial_{\nu_A}u + \sigma u + \sigma\varepsilon(1+t^{\alpha}) \ge \sigma_0\varepsilon \quad \mbox{on $\partial\Omega \times (0,T)$} $$ and $$ \widetilde{w}(x,0) = a(x) + \varepsilon \ge \varepsilon \quad \mbox{in $\Omega$}. $$ Based on these inequalities, the same arguments that were employed after the formula \eqref{(4.7)} in the proof of Lemma \ref{l4.2} readily complete the proof of Lemma \ref{l4.2a}. \end{proof}
\section{Appendix} \label{sec8} \setcounter{section}{5} \setcounter{equation}{0}
In the proof of Lemma \ref{l4.2} that is a basis for all other derivations presented in this paper, we essentially used an auxiliary function that satisfies the conditions \eqref{(4.3)}. Thus, ensuring existence of such function is an important problem worth for detailed considerations. In this Appendix, we present a solution to this problem.
For the readers' convenience, we split our existence proof into three parts.
\noindent I. First part of the proof.
In this part, we prove the following lemma:
\begin{lemma} \label{lem1} Let the conditions \eqref{(1.2)} be satisfied and the constant $$ M:= \min_{(x,t)\in \overline{\Omega}\times [0,T]} b_0(x,t)>0 $$ is sufficiently large.
Then there exists a constant $\kappa_1>0$ such that \begin{equation} \label{1} (A_1(t)v,\, v) \ge \kappa_1\Vert v\Vert_{H^1(\Omega)}^2 \end{equation} for all $v \in H^2(\Omega)$ satisfying $\partial_{\nu_A}v + \sigma v = 0$ on $\partial\Omega$ for each $t \in [0,T]$. \end{lemma}
In particular, Lemma \ref{lem1} implies that all of the eigenvalues of the operator $A_0$ defined by \eqref{(3.2)} are positive if the constant $c_0>0$ is sufficiently large. Henceforth we employ the notation $b=(b_1,..., b_d)$.
\begin{proof} By using the conditions \eqref{(1.2)} and the boundary condition $\ppp_{\nu_A} v + \sigma v = 0$ on $\partial\Omega$, integration by parts yields \begin{align*} & (A_1(t)v,v)\\ = &-\int_{\Omega} \sum_{i,j=1}^d \partial_i(a_{ij}(x)\partial_jv)v dx
- \frac{1}{2}\int_{\Omega} \sum_{j=1}^d b_j(x,t)\partial_j(\vert v\vert^2) dx + \int_{\Omega} b_0(x,t) \vert v\vert^2 dx\\ =& \int_{\Omega} \sum_{i,j=1}^d a_{ij}(x)(\partial_iv)(\partial_jv) dx - \int_{\partial\Omega} (\partial_{\nu_A}v)v dS\\ + & \frac{1}{2}\int_{\Omega} (\mbox{div}\, b)\vert v\vert^2 dx - \frac{1}{2}\int_{\partial\Omega} (b\cdot \nu)\vert v\vert^2 dS + \int_{\Omega} b_0(x,t) \vert v\vert^2 dx\\ \ge & \kappa \int_{\Omega} \vert \nabla v\vert^2 dx + \int_{\Omega} \left(\min_{(x,t)\in\overline{\Omega}\times [0,T]} b_0(x,t) - \frac{1}{2}\vert \mbox{div}\, b\vert \right)\vert v\vert^2 dx\\ +& \int_{\partial\Omega} \left( \sigma - \frac{1}{2}(b\cdot \nu)\right) \vert v\vert^2 dS \end{align*} \begin{equation}\label{(2)} \ge \kappa \int_{\Omega} \vert \nabla v\vert^2 dx + \left( M - \frac{1}{2}\Vert \mbox{div}\, b\Vert_{C(\overline{\Omega}\times [0,T])} \right)\int_{\Omega} \vert v\vert^2 dx - C\int_{\Omega} \vert v\vert^2 dS. \end{equation} Here and henceforth $C>0$, $C_{\varepsilon}, C_{\delta} > 0$, etc. denote generic constants which are independent of the function $v$.
By the trace theorem (Theorem 9.4 (pp. 41-42) in \cite{LM}), for $\delta \in (0, \frac{1}{2}]$, there exists a constant $C_{\delta}>0$ such that $$ \Vert v\Vert_{L^2(\partial\Omega)} \le C_{\delta}\Vert v\Vert _{H^{\delta+\frac{1}{2}}(\Omega)} \quad \mbox{for all $v \in H^1(\Omega)$.} $$ Now we fix $\delta \in \left(0, \, \frac{1}{2}\right)$. The interpolation inequality for the Sobolev spaces implicates that for any $\varepsilon>0$ there exists a constant $C_{\varepsilon,\delta} > 0$ such that the following inequality holds true (see, e.g., Chapter IV in \cite{Ad} or Sections 2.5 and 11 of Chapter 1 in \cite{LM}): $$ \Vert v\Vert_{H^{\delta+\frac{1}{2}}(\Omega)} \le \varepsilon\Vert \nabla v\Vert_{L^2(\Omega)} + C_{\varepsilon,\delta}\Vert v\Vert_{L^2(\Omega)} \quad \mbox{for all $v \in H^1(\Omega)$}. $$ Therefore, we obtain the estimate \begin{align*} & \Vert v\Vert^2_{L^2(\partial\Omega)} \le (\varepsilon C_{\delta}\Vert \nabla v\Vert_{L^2(\Omega)} + C_{\delta}C_{\varepsilon,\delta}\Vert v\Vert_{L^2(\Omega)})^2\\ \le& 2(\varepsilon C_{\delta})^2\Vert \nabla v\Vert_{L^2(\Omega)}^2 + 2(C_{\delta}C_{\varepsilon,\delta})^2\Vert v\Vert_{L^2(\Omega)}^2 \end{align*} for all $v \in H^1(\Omega)$. Substituting this inequality into \eqref{(2)}, we obtain \begin{align*} &(\kappa - 2C(\varepsilon C_{\delta})^2) \Vert \nabla v\Vert_{L^2(\Omega)}^2\\ +& ( M - \frac{1}{2}\Vert \mbox{div}\, b\Vert_{C(\overline{\Omega}\times [0,T])} - 2(CC_{\delta}C_{\varepsilon,\delta})^2)\Vert v\Vert_{L^2(\Omega)}^2 \le (A_1(t)v,v). \end{align*} Choosing a sufficiently small $\varepsilon > 0$ such that $\kappa - 2C(\varepsilon C_{\delta})^2> 0$ and a sufficiently large $M>0$ such that $$ M > \frac{1}{2}\Vert \mbox{div}\, b\Vert_{C(\overline{\Omega}\times [0,T])} + 2(CC_{\delta}C_{\varepsilon,\delta})^2 $$ completes the proof of Lemma \ref{lem1}. \end{proof}
\noindent II. Second part of the proof.
Due to the estimate \eqref{1}, we can apply Theorem 3.2 (p. 137) in \cite{LU} that implicates existence of a constant $\theta \in (0,1)$ such that for each $t \in [0,T]$, a solution $\psi(\cdot,t) \in C^{2+\theta}(\overline{\Omega})$ to the problem \eqref{(4.3)} exists uniquely, where $C^{2+\theta}(\overline{\Omega})$ is the Schauder space defined in the proof of Lemma \ref{l6.1} in Section \ref{sec4}.
Now we introduce an auxiliary function \begin{equation}\label{(8.3a)} \eta(t):= \Vert \psi(\cdot,t)\Vert_{C^{2+\theta}(\overline{\Omega})}, \quad 0\le t \le T. \end{equation} Because of the inclusion $\psi(\cdot,t) \in C^{2+\theta}(\overline{\Omega})$, the value of the function $\eta(t)$ is finite for each $t \in [0,T]$.
Furthermore, for an arbitrary $G \in C^{\theta}(\overline{\Omega})$, there exists a unique solution $w=w(\cdot,t)$ to the problem \begin{equation}\label{(3)} \left\{ \begin{array}{rl} & A_1(t)w = G \quad \mbox{in $\Omega$}, \\ & \partial_{\nu_A}w + \sigma w = 0 \quad \mbox{on $\partial\Omega$} \end{array}\right. \end{equation} for each $t \in [0,T]$.
Now we prove that for each $t\in [0,T]$ there exists a constant $C_t > 0$ such that \begin{equation}\label{(4)} \Vert w(\cdot,t)\Vert_{C^{2+\theta}(\overline{\Omega})} \le C_t\Vert G\Vert_{C^{\theta}(\overline{\Omega})} \end{equation} for all solutions $w$ of the problem \eqref{(3)}. In the inequality \eqref{(4)}, the constant $C_t>0$ depends on the norms $\Vert a_{ij}\Vert_{C^1(\overline{\Omega})}$, $1\le i,j\le d$, $\Vert b_k\Vert_{C([0,T];C^2(\overline{\Omega}))}$, $0\le k\le d$ of the coefficients, but not on the coefficients by themselves.
Indeed, for each $t \in [0,T]$, the inequality \begin{equation}\label{(5)}
\Vert w(\cdot,t)\Vert_{C^{2+\theta}(\overline{\Omega})} \le C_t(\Vert G\Vert_{C^{\theta}(\overline{\Omega})} + \Vert w(\cdot,t)\Vert_{C(\overline{\Omega})}) \end{equation} holds true (see, e.g., the formula (3.7) on p. 137 in \cite{LU}). To obtain the desired estimate we have to eliminate the term $\Vert w(\cdot,t)\Vert_{C(\overline{\Omega})}$ on the right-hand side of the last inequality. This can be done by the standard compactness-uniqueness arguments. More precisely, let us assume that \eqref{(4)} does not hold. Then there exist the sequences $w_n\in C^{2+\theta}(\overline{\Omega}),\ n\in \mathbb{N}$ and $G_n\in C^{\theta}(\overline{\Omega}),\ n\in \mathbb{N}$ such that $\Vert w_n\Vert_{C^{2+\theta}(\overline{\Omega})} = 1$ and $\lim_{n\to\infty}\Vert G_n\Vert_{C^{\theta}(\overline{\Omega})} = 0$. By the Ascoli-Arzel\`a theorem, we can extract a subsequence $w_{k(n)}$ from the sequence $w_n$ such that $w_{k(n)} \longrightarrow \widetilde{w}$ in $C(\overline{\Omega})$ as $n\to \infty$. Applying the estimation \eqref{(5)} to the equation $$ A_1(t)(w_{k(n)} - w_{k(m)}) = G_{k(n)} - G_{k(m)} \quad \mbox{in $\Omega$} $$ equipped with the homogeneous boundary condition $\partial_{\nu_A}(w_{k(n)} - w_{k(m)}) + \sigma(w_{k(n)} - w_{k(m)}) = 0$ on $\partial\Omega$, we we arrive at the relation \begin{align*} & \Vert w_{k(n)} - w_{k(m)}\Vert_{C^{2+\theta}(\overline{\Omega})}\\ \le & C_t(\Vert G_{k(n)} - G_{k(m)}\Vert_{C^{\theta}(\overline{\Omega})} + \Vert w_{k(n)} - w_{k(m)}\Vert_{C(\overline{\Omega})}) \,\to 0 \end{align*} as $n,m \to \infty$. Hence, there exists a function $w_0 \in C^{2+\theta}(\overline{\Omega})$ such that $w_{k(n)} \to w_0$ in $C^{2+\theta}(\overline{\Omega})$. Moreover, we obtain the relations $$ \Vert w_0\Vert_{C^{2+\theta}(\overline{\Omega})} = \lim_{n\to\infty} \Vert w_{k(n)}\Vert_{C^{2+\theta}(\overline{\Omega})} = 1 $$ and $G_{k(n)} = A_1(t)w_{k(n)} \to A_1(t)w_0$ in $C^{\theta}(\overline{\Omega})$.
Since $\lim_{n\to\infty} \Vert G_{k(n)}\Vert_{C^{\theta}(\overline{\Omega})} = 0$, we obtain $A_1(t)w_0 = 0$ in $\Omega$ with $\partial_{\nu_A}w_0 + \sigma w_0 = 0$ on $\partial\Omega$. Then Lemma \ref{lem1} yields $w_0(x,t)=0$ in $\Omega$ that contradicts the relation $\Vert w_0\Vert_{C^{2+\theta}(\overline{\Omega})} = 1$ that we established above. The obtained contradiction implicates the desired norm estimate \eqref{(4)}.
\noindent III. Third part of the proof.
The last missing detail of the proof is the inclusion $\psi \in C^1([0,T];C^2(\overline{\Omega}))$ for the function $\psi$ constructed in the previous part of the proof.
To show this inclusion, we first verify that for an arbitrarily but fixed $t\in [0,T]$ the function $d(x,s):= \psi(x,t) - \psi(x,s)$ satisfies the equations \begin{equation}\label{(6)} \left\{ \begin{array}{rl} & -A_1(t)d(\cdot,s) = (b_0(t) - b_0(s))\psi(\cdot,s)\\ - & \sum_{j=1}^d (b_j(t) - b_j(s))\partial_j\psi(\cdot,s) \quad \mbox{in $\Omega$ for $0\le s, t \le T$},\\ & \partial_{\nu_A}d + \sigma d = 0 \quad \mbox{on $\partial\Omega$, $\,\,$ $0 \le s,t \le T$}. \end{array}\right. \end{equation}
For an arbitrarily but fixed $\delta>0$ we set $I_{\delta,t}:= [0,T] \cap \{s;\, \vert t-s\vert \le \delta\}$.
Applying the relation \eqref{(8.3a)} and the estimate \eqref{(4)} to the solution $d$ of \eqref{(6)} yields \begin{align*} & \Vert d(\cdot,s)\Vert_{C^{2+\theta}(\overline{\Omega})}\\ \le & C\left(\left\Vert \sum_{j=1}^d (b_j(t)-b_j(s))\partial_j\psi(\cdot,s) \right\Vert_{C^{\theta}(\overline{\Omega})} + \Vert (b_0(t)-b_0(s))\psi(\cdot,s)\Vert_{C^{\theta}(\overline{\Omega})}\right)\\ \end{align*} \begin{equation}\label{(8.8a)} \le C\sum_{j=0}^d \Vert b_j(s) - b_j(t)\Vert_{C^1(\overline{\Omega})}\eta(s) \le C\max_{0\le j \le d} \Vert b_j(s) - b_j(t)\Vert_{C^1(\overline{\Omega})}\, \sup_{s \in I_{\delta,t}}\eta(s). \end{equation}
For the function $$ h(\delta):= \max_{0\le j\le d} \sup_{\vert s-t\vert \le \delta} \Vert b_j(s) - b_j(t)\Vert_{C^1(\overline{\Omega})}, $$ the inclusions $b_j \in C([0,T];C^1(\overline{\Omega}))$, $0\le j \le d$ imply
$\lim_{\delta\downarrow 0} h(\delta) = 0$.
Now we rewrite the estimate \eqref{(8.8a)} in terms of the function $\eta$ defined by \eqref{(8.3a)} as $$ \vert \eta(s) - \eta(t)\vert \le Ch(\delta)\sup_{s\in I_{\delta,t}} \eta(s) \quad \mbox{for $s \in I_{\delta,t}$}, $$ and thus obtain the inequality $$ \eta(s) \le \eta(t) + Ch(\delta)\sup_{s\in I_{\delta,t}} \eta(s) \quad \mbox{for $s \in I_{\delta,t}$}. $$ Choosing $\delta: =\delta(t)>0$ sufficiently small, for a given $t \in [0,T]$, the estimate $\sup_{s\in I_{\delta(t),t}} \eta(s) \le C_1\eta(t)$ holds true. Varying $t \in [0,T]$, we now choose a finite number of the intervals $I_{\delta(t),t}$ that cover the whole interval $[0,T]$ and thus obtain the norm estimate \begin{equation}\label{(8.8)} \Vert \psi\Vert_{L^{\infty}(0,T;C^{2+\theta}(\overline{\Omega}))} \le C_2 \end{equation} with some constant $C_2>0$.
For $s \in I_{\delta(t),t}$, substitution of \eqref{(8.8)} into \eqref{(8.8a)} yields the estimate $$ \Vert d(\cdot,s)\Vert_{C^{2+\theta}(\overline{\Omega}))} = \Vert \psi(\cdot,t) - \psi(\cdot,s)\Vert_{C^{2+\theta}(\overline{\Omega}))} \le Ch(\delta)C_2. $$ Consequently, $\lim_{s\to t} \Vert \psi(\cdot,s) - \psi(\cdot,t)\Vert _{C^{2+\theta}(\overline{\Omega})} = 0$ and we have shown the inclusion \begin{equation}\label{(8)} \psi \in C([0,T];C^{2+\theta}(\overline{\Omega})). \end{equation}
To complete the proof, the now verify the inclusion $\psi \in C^1([0,T];C^{2+\theta}(\overline{\Omega}))$. Since $-A_1(\xi)\psi(x,\xi) = 1$ in $\Omega$,
differentiating this formula with respect to $\xi$ leads to the representation \begin{equation}\label{(9)} \sum_{j=1}^d \partial_i(a_{ij}(x)\partial_j\partial_{\xi}\psi(x,\xi)) + \sum_{j=1}^d b_j(\xi)\partial_j\partial_{\xi}\psi(x,\xi) - b_0(\xi)\partial_{\xi}\psi(x,\xi) \end{equation} $$ = -\sum_{j=1}^d \partial_{\xi}b_j(\xi)\partial_j\psi(x,\xi) + (\partial_{\xi}b_0)(\xi)\psi(x,\xi) \quad \mbox{in $\Omega$}. $$ By subtracting the equation \eqref{(9)} with $\xi=s$ from the one with $\xi=t$, we deduce that the function $d_1(x,s):= (\partial_t\psi)(x,t) - (\partial_t\psi)(x,s)$ satisfies the relation \begin{equation}\label{(10)} -A_1(t)d_1(x,s) \end{equation} \begin{align*} =& \biggl[ -\sum_{i,j=1}^d (b_j(t)-b_j(s))\partial_j\partial_s\psi(x,s) + (b_0(t)-b_0(s))\partial_s\psi(x,s)\\ - & \sum_{j=1}^d (\partial_tb_j(t)-\partial_sb_j(s))\partial_j\psi(x,s) + (\partial_tb_0(t)-\partial_sb_0(s))\psi(x,s)\biggr] \\
+ &\left[ -\sum_{j=1}^d \partial_tb_j(t)(\partial_j\psi(x,t) - \partial_j\psi(x,s))
+ \partial_tb_0(t)(\psi(x,t) - \psi(x,s))\right]\\ =:& H_1(x,t,s) + H_2(x,t,s) \quad \mbox{in $\Omega$} \end{align*} and the boundary condition $\partial_{\nu_A}d_1 + \sigma d_1 = 0$ on $\partial\Omega$ for all $s,t \in [0,T]$. Thus, the inclusion $\psi\in C^1([0,T];C^{2+\theta}(\overline{\Omega}))$ will follow from the relation $\lim_{s\to t} \Vert d_1(\cdot,s)\Vert _{C^{2+\theta}(\overline{\Omega})} = 0$ that we now prove. To this end, by applying Theorem 3.2 (p. 137) in \cite{LU} to the equation \eqref{(10)}, it suffices to prove that \begin{equation}\label{(11)} \lim_{s\to t} \Vert H_{\ell}(\cdot,t,s)\Vert_{C^{\theta}(\overline{\Omega})} = 0, \quad \ell=1,2. \end{equation}
Applying Theorem 3.2 in \cite{LU} to the equation \eqref{(9)}, in view of the regularity conditions \eqref{(1.2)} and the equation \eqref{(9)}, we obtain the estimates \begin{equation}\label{(12)} \Vert \partial_t\psi(\cdot,t)\Vert_{C^{2+\theta}(\overline{\Omega})} \end{equation} \begin{align*} \le& C\left( \left\Vert \sum_{j=1}^d (\partial_tb_j)(\cdot,t) \partial_j\psi(\cdot,t)\right\Vert_{C^{\theta}(\overline{\Omega})} + \Vert (\partial_tb_0)(\cdot,t)\psi(\cdot,t)\Vert_{C^{\theta}(\overline{\Omega})}\right)\\ \le& C\sum_{j=0}^d \Vert b_j(\cdot,t)\Vert_{C^1([0,T];C^1(\overline{\Omega}))} \Vert \psi(\cdot,t)\Vert_{C^{2+\theta}(\overline{\Omega})} \le C_3 \quad \mbox{for $0\le t\le T$.} \end{align*} The inequalities \eqref{(8.8)} and \eqref{(12)} lead to the norm estimate \begin{equation}\label{(13)} \Vert H_1(\cdot,t,s)\Vert_{C^{\theta}(\overline{\Omega})} \le C_4\sum_{k=0}^1\sum_{j=0}^d \Vert (\partial_t^kb_j)(\cdot,t) - (\partial_t^kb_j)(\cdot,s)\Vert _{C^1(\overline{\Omega})}. \end{equation} By employing the arguments similar to those used above, the estimate \begin{equation}\label{(13a)} \Vert H_2(\cdot,t,s)\Vert_{C^{\theta}(\overline{\Omega})} \le C_4\sum_{j=0}^d \Vert \partial_tb_j\Vert_{C([0,T];C^1(\overline{\Omega}))} \sum_{k=0}^1 \Vert \nabla^k\psi(\cdot,t) - \nabla^k\psi(\cdot,s) \Vert_{C^{\theta}(\overline{\Omega})} \end{equation} can be derived. Since $\partial_t^kb_j \in C([0,T];C^1(\overline{\Omega}))$ for $k=0,1$ and $0\le j \le d$ by the conditions \eqref{(1.2)} and $\nabla^k\psi \in C([0,T];C^{1+\theta} (\overline{\Omega}))$ with $k=0,1$ by the inclusion \eqref{(8)}, the relation $\lim_{s\to t} \Vert H_{\ell}(\cdot,t,s)\Vert_{C^{\theta}(\overline{\Omega})} = 0$ with $\ell=1,2$ immediately follows from the norm estimates \eqref{(13)} and \eqref{(13a)}. As mentioned above, this completes the proof
of existence of a function satisfying the conditions \eqref{(4.3)}.
\section*{\small
Conflict of interest}
{\small
The authors declare that they have no conflict of interest.}
\small \noindent {\bf Publisher's Note} Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
\end{document} | arXiv |
Choi's theorem on completely positive maps
In mathematics, Choi's theorem on completely positive maps is a result that classifies completely positive maps between finite-dimensional (matrix) C*-algebras. An infinite-dimensional algebraic generalization of Choi's theorem is known as Belavkin's "Radon–Nikodym" theorem for completely positive maps.
Statement
Choi's theorem. Let $\Phi :\mathbb {C} ^{n\times n}\to \mathbb {C} ^{m\times m}$ :\mathbb {C} ^{n\times n}\to \mathbb {C} ^{m\times m}} be a linear map. The following are equivalent:
(i) Φ is n-positive (i.e. $\left(\operatorname {id} _{n}\otimes \Phi \right)(A)\in \mathbb {C} ^{n\times n}\otimes \mathbb {C} ^{m\times m}$ is positive whenever $A\in \mathbb {C} ^{n\times n}\otimes \mathbb {C} ^{n\times n}$ is positive).
(ii) The matrix with operator entries
$C_{\Phi }=\left(\operatorname {id} _{n}\otimes \Phi \right)\left(\sum _{ij}E_{ij}\otimes E_{ij}\right)=\sum _{ij}E_{ij}\otimes \Phi (E_{ij})\in \mathbb {C} ^{nm\times nm}$
is positive, where $E_{ij}\in \mathbb {C} ^{n\times n}$ is the matrix with 1 in the ij-th entry and 0s elsewhere. (The matrix CΦ is sometimes called the Choi matrix of Φ.)
(iii) Φ is completely positive.
Proof
(i) implies (ii)
We observe that if
$E=\sum _{ij}E_{ij}\otimes E_{ij},$
then E=E* and E2=nE, so E=n−1EE* which is positive. Therefore CΦ =(In ⊗ Φ)(E) is positive by the n-positivity of Φ.
(iii) implies (i)
This holds trivially.
(ii) implies (iii)
This mainly involves chasing the different ways of looking at Cnm×nm:
$\mathbb {C} ^{nm\times nm}\cong \mathbb {C} ^{nm}\otimes (\mathbb {C} ^{nm})^{*}\cong \mathbb {C} ^{n}\otimes \mathbb {C} ^{m}\otimes (\mathbb {C} ^{n}\otimes \mathbb {C} ^{m})^{*}\cong \mathbb {C} ^{n}\otimes (\mathbb {C} ^{n})^{*}\otimes \mathbb {C} ^{m}\otimes (\mathbb {C} ^{m})^{*}\cong \mathbb {C} ^{n\times n}\otimes \mathbb {C} ^{m\times m}.$
Let the eigenvector decomposition of CΦ be
$C_{\Phi }=\sum _{i=1}^{nm}\lambda _{i}v_{i}v_{i}^{*},$
where the vectors $v_{i}$ lie in Cnm . By assumption, each eigenvalue $\lambda _{i}$ is non-negative so we can absorb the eigenvalues in the eigenvectors and redefine $v_{i}$ so that
$\;C_{\Phi }=\sum _{i=1}^{nm}v_{i}v_{i}^{*}.$
The vector space Cnm can be viewed as the direct sum $\textstyle \oplus _{i=1}^{n}\mathbb {C} ^{m}$ compatibly with the above identification $\textstyle \mathbb {C} ^{nm}\cong \mathbb {C} ^{n}\otimes \mathbb {C} ^{m}$ and the standard basis of Cn.
If Pk ∈ Cm × nm is projection onto the k-th copy of Cm, then Pk* ∈ Cnm×m is the inclusion of Cm as the k-th summand of the direct sum and
$\;\Phi (E_{kl})=P_{k}\cdot C_{\Phi }\cdot P_{l}^{*}=\sum _{i=1}^{nm}P_{k}v_{i}(P_{l}v_{i})^{*}.$
Now if the operators Vi ∈ Cm×n are defined on the k-th standard basis vector ek of Cn by
$\;V_{i}e_{k}=P_{k}v_{i},$
then
$\Phi (E_{kl})=\sum _{i=1}^{nm}P_{k}v_{i}(P_{l}v_{i})^{*}=\sum _{i=1}^{nm}V_{i}e_{k}e_{l}^{*}V_{i}^{*}=\sum _{i=1}^{nm}V_{i}E_{kl}V_{i}^{*}.$
Extending by linearity gives us
$\Phi (A)=\sum _{i=1}^{nm}V_{i}AV_{i}^{*}$
for any A ∈ Cn×n. Any map of this form is manifestly completely positive: the map $A\to V_{i}AV_{i}^{*}$ is completely positive, and the sum (across $i$) of completely positive operators is again completely positive. Thus $\Phi $ is completely positive, the desired result.
The above is essentially Choi's original proof. Alternative proofs have also been known.
Consequences
Kraus operators
In the context of quantum information theory, the operators {Vi} are called the Kraus operators (after Karl Kraus) of Φ. Notice, given a completely positive Φ, its Kraus operators need not be unique. For example, any "square root" factorization of the Choi matrix CΦ = B∗B gives a set of Kraus operators.
Let
$B^{*}=[b_{1},\ldots ,b_{nm}],$
where bi*'s are the row vectors of B, then
$C_{\Phi }=\sum _{i=1}^{nm}b_{i}b_{i}^{*}.$
The corresponding Kraus operators can be obtained by exactly the same argument from the proof.
When the Kraus operators are obtained from the eigenvector decomposition of the Choi matrix, because the eigenvectors form an orthogonal set, the corresponding Kraus operators are also orthogonal in the Hilbert–Schmidt inner product. This is not true in general for Kraus operators obtained from square root factorizations. (Positive semidefinite matrices do not generally have a unique square-root factorizations.)
If two sets of Kraus operators {Ai}1nm and {Bi}1nm represent the same completely positive map Φ, then there exists a unitary operator matrix
$\{U_{ij}\}_{ij}\in \mathbb {C} ^{nm^{2}\times nm^{2}}\quad {\text{such that}}\quad A_{i}=\sum _{j=1}U_{ij}B_{j}.$
This can be viewed as a special case of the result relating two minimal Stinespring representations.
Alternatively, there is an isometry scalar matrix {uij}ij ∈ Cnm × nm such that
$A_{i}=\sum _{j=1}u_{ij}B_{j}.$
This follows from the fact that for two square matrices M and N, M M* = N N* if and only if M = N U for some unitary U.
Completely copositive maps
It follows immediately from Choi's theorem that Φ is completely copositive if and only if it is of the form
$\Phi (A)=\sum _{i}V_{i}A^{T}V_{i}^{*}.$
Hermitian-preserving maps
Choi's technique can be used to obtain a similar result for a more general class of maps. Φ is said to be Hermitian-preserving if A is Hermitian implies Φ(A) is also Hermitian. One can show Φ is Hermitian-preserving if and only if it is of the form
$\Phi (A)=\sum _{i=1}^{nm}\lambda _{i}V_{i}AV_{i}^{*}$
where λi are real numbers, the eigenvalues of CΦ, and each Vi corresponds to an eigenvector of CΦ. Unlike the completely positive case, CΦ may fail to be positive. Since Hermitian matrices do not admit factorizations of the form B*B in general, the Kraus representation is no longer possible for a given Φ.
See also
• Stinespring factorization theorem
• Quantum operation
• Holevo's theorem
References
• M.-D. Choi, Completely Positive Linear Maps on Complex Matrices, Linear Algebra and its Applications, 10, 285–290 (1975).
• V. P. Belavkin, P. Staszewski, Radon-Nikodym Theorem for Completely Positive Maps, Reports on Mathematical Physics, v.24, No 1, 49–55 (1986).
• J. de Pillis, Linear Transformations Which Preserve Hermitian and Positive Semidefinite Operators, Pacific Journal of Mathematics, 23, 129–137 (1967).
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Reparable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
| Wikipedia |
Purpose of outer key in HMAC
From what I know, the HMAC constructions has two strength:
It's resistant to length extensions
Since the key is consumed before the message, the attacker does not know the initial state, preventing simple collision attacks.
But the simple construction $ \mathrm{Hash}(\mathrm{Hash}(\mathrm{key} ∥ \mathrm{message})) $ would offer those properties too.
HMAC on the other hand uses the more complicated construction $ \mathrm{Hash}((\mathrm{key} ⊕ \mathrm{opad}) ∥ \mathrm{Hash}((\mathrm{key} ⊕ \mathrm{ipad}) ∥ \mathrm{message})) $. I assume the more complicated construction of HMAC is required for some security proof, but I don't immediately see why.
For SHA-3 candidates, Hash(key||message) is claimed to be secure, since they're resistant to length extensions, without consuming the key twice. I believe Skein even has some security proof for a very similar mode.
So why does HMAC need to inject the key twice?
mac hmac length-extension
CodesInChaosCodesInChaos
You're missing the most important strength of HMAC: it comes with a proof of security (under some plausible assumptions). The outer key plays an important role in the proofs. The best place to learn more is to read the HMAC papers:
Message authentication using hash functions: The HMAC construction, Mihir Bellare, Ran Canetti, Hugo Kawczyk, CryptoBytes Spring 1996.
Keying hash functions for message authentication, Mihir Bellare, Ran Canetti, Hugo Kawczyk, CRYPTO '96.
In particular, it is very important for their proof that the outer function be keyed. In your scheme, the outer function is not keyed, so their proof of security will not apply.
As they explain in the paper, the role of the inner function is to provide collision-resistance (to compress a long message down to a short fingerprint, in a way so that someone who does not know the key cannot find a pair of messages with the same fingerprint), and the role of the outer function is to act as a message authentication code on this fingerprint. Their security proofs show that if the inner and outer functions each correctly implement their roles, then the combination (HMAC) will be a secure message authentication code. Because the outer function needs to be a secure message authentication code, that means the outer function needs to be keyed for their proof methodology to apply.
Read the papers. They are surprisingly readable, for a theoretical piece of work.
D.W.D.W.
$\begingroup$ The strange part for me is that HMAC requires this key, but SHA-3 candidates used as MAC don't. I believe Skein has a security proof for a mode that's similar to H(k||m). "We prove that if Threefish is a tweakable PRP (pseudorandom permutation) then Skein is a PRF." $\endgroup$ – CodesInChaos Jul 30 '12 at 6:44
$\begingroup$ The explanation is that SHA-3 candidates are designed with "random-oracle-ness" in mind. $\hspace{0.75 in}$ $\endgroup$ – user991 Jul 30 '12 at 7:50
$\begingroup$ This newer reference New Proofs for NMAC and HMAC: Security without Collision-Resistance (M. Bellare, Crypto 2006) gives an argument that HMAC is secure even if the compression function in the underlying hash has properties insufficient to make the hash collision-resistant. Independently: I like this intuitive argument that the outer hash kind of re-enciphers the result of the previous one; much like adding rounds in a block cipher, that makes recovering the key or otherwise distinguishing results from random much harder. $\endgroup$ – fgrieu♦ Mar 18 '14 at 13:02
As a Skein co-author, one of the properties of the UBI chaining mode is to give you HMAC-like properties in one pass. Skein itself consists of the Threefish tweakable block cipher, the UBI chaining mode, and some proofs that extend tweakable block cipher theory into a tweakable hash function theory that reduces the security of the hash function to the security of the block cipher. (I will note in passing that one of the strengths of the Skein team is that we all have certain strengths that we brought to parts of the whole; Mihir took the lead on the proofs, and he's a co-author of HMAC.)
One of the features of that is that you get a one-pass MAC for free with Skein.
HMAC, in contrast is a wrapper around any hash functions that (often) amplifies its security. In the Introduction of the CryptoBytes article on HMAC, they say,
"One of the many difficulties [of constructing a MAC from a hash function, as opposed to a cipher] is that they are not even keyed primitives, i.e. do not accommodate naturally the notion of a secret key." Several constructions were proposed prior to HMAC, but they lacked a convincing security analysis.
HMAC creates a wrapper that goes around an arbitrary hash function and gives you some security properties that the hash function did not have. Skein designs a tweakable hash function, and it is that tweak that gives you a mechanism to put in a key securely.
Jon CallasJon Callas
I'm putting another answer in because as good as D.W.'s answer is (I up-voted it), it doesn't really answer your question.
But the simple construction Hash(Hash(key|message)) would offer those properties too.
But the construction you gave -- Hash(Hash(key|message)) -- has a weakness that HMAC does not.
One of those properties was resistance to simple collisions. What is a "simple" collision is undefined, and you might not consider what I'm going to say to be a simple collision, but it is nonetheless a weakness.
If I can construct a collision to key|message then that collision propagates out to the MAC as a whole. But the same attack does not work against HMAC.
HMAC, as you know, uses two keys. I'm going to call them K1 and K2, and they're constructed by XORing with the two different pads. Think of it as a simple key derivation function. HMAC gets added strength not only because it uses an inner key and an outer key, but that they are different keys. Consequently, an attacker who constructs an attack based upon a hash collision must collide against each of these keys, which is hard. That is also the essence of the proof, as well (summarized).
So the answer to your exact question is that it's not the key. It's using two different keys, each simply derived from the base key and you're forced to break them both.
$\begingroup$ I don't think this is correct. With HMAC, any collision in the inner function is immediately and automatically a collision for the full HMAC (same as for the construction that CodesInChaos asked about). So what you describe is not actually a benefit of HMAC. $\endgroup$ – D.W. Aug 1 '12 at 5:37
Not the answer you're looking for? Browse other questions tagged mac hmac length-extension or ask your own question.
Why concatenate the key a second time in HMAC?
Why is the salt used only once in PBKDF2, while the password is used often?
How does the secret key in an HMAC prevent modification of the HMAC?
HMAC construction based on the combination of two hash functions
How is HMAC(message,key) more secure than Hash(key1+message+key2)
How secure would HMAC-SHA3 be?
Counter based cipher using HMAC-SHA-256
Keys in HMAC and NMAC
Understanding WPA2 authentication in details
HMAC-SHA1-128 parameters | CommonCrawl |
\begin{document}
\title{Be Causal: De-biasing Social Network Confounding in Recommendation}
\author{Qian Li} \authornotemark[1] \authornote{Both authors contributed equally to this research.} \email{[email protected]} \affiliation{
\institution{University of Technology, Sydney}
\streetaddress{P.O. Box 123}
\city{Broadway}
\state{NSW 2007}
\country{Australia} }
\author{Xiangmeng Wang} \authornotemark[1] \email{[email protected]} \affiliation{
\institution{University of Technology, Sydney}
\streetaddress{P.O. Box 123}
\city{Broadway}
\state{NSW 2007}
\country{Australia} } \author{Guandong Xu} \authornotemark[1] \email{[email protected]} \affiliation{
\institution{University of Technology, Sydney}
\streetaddress{P.O. Box 123}
\city{Broadway}
\state{NSW 2007}
\country{Australia} }
\begin{abstract} In recommendation systems, the existence of the missing-not-at-random (MNAR) problem results in the selection bias issue, degrading the recommendation performance ultimately. A common practice to address MNAR is to treat missing entries from the so-called ``exposure'' perspective, i.e., modeling how an item is exposed (provided) to a user. Most of the existing approaches use heuristic models or re-weighting strategy on observed ratings to mimic the missing-at-random setting. However, little research has been done to reveal how the ratings are missing from a causal perspective. To bridge the gap, we propose an unbiased and robust method called DENC (\emph{De-bias Network Confounding in Recommendation}) inspired by confounder analysis in causal inference. In general, DENC provides a causal analysis on MNAR from both the inherent factors (e.g., latent user or item factors) and auxiliary network's perspective. Particularly, the proposed exposure model in DENC can control the social network confounder meanwhile preserves the observed exposure information. We also develop a deconfounding model through the balanced representation learning to retain the primary user and item features, which enables DENC generalize well on the rating prediction. Extensive experiments on three datasets validate that our proposed model outperforms the state-of-the-art baselines.
\end{abstract}
\begin{CCSXML} <ccs2012>
<concept>
<concept_id>10010520.10010553.10010562</concept_id>
<concept_desc>Computer systems organization~Embedded systems</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010520.10010575.10010755</concept_id>
<concept_desc>Computer systems organization~Redundancy</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10010520.10010553.10010554</concept_id>
<concept_desc>Computer systems organization~Robotics</concept_desc>
<concept_significance>100</concept_significance>
</concept>
<concept>
<concept_id>10003033.10003083.10003095</concept_id>
<concept_desc>Networks~Network reliability</concept_desc>
<concept_significance>100</concept_significance>
</concept> </ccs2012> \end{CCSXML}
\ccsdesc[500]{Information systems} \ccsdesc[300]{Collaborative filtering} \ccsdesc{Computer systems organization~Robotics} \ccsdesc[100]{Networks~Network}
\keywords{Recommendation; Missing-Not-At-Random; Causal Inference; Bias; Propensity}
\maketitle
\section{Introduction}
Recommender systems aim to handle information explosion meanwhile to meet users' personalized interests, which have received extensive attention from both research communities and industries. The power of a Recommender system highly relies on whether the observed user feedback on items ``correctly'' reflects the users’ preference or not. However, such feedback data often contains only a small portion of observed feedback (e.g., explicit ratings), leaving a large number of missing ratings to be predicted. To handle the partially observed feedback, a common assumption for model building is that the feedback is missing at random (MAR), i.e., the probability of a rating to be missing is independent of the value. When the observed data follows the MAR, using only the observed data via statistical analysis methods can yield ``correct'' prediction without introducing bias~\cite{marlin2009collaborative,lim2015top}. However, this MAR assumption usually does not hold in reality and the missing pattern exhibits \emph{missing not at random} (MNAR) phenomenon. Generally, MNAR is related to selection bias. For instance, in movie recommendation, instead of randomly choosing movies to watch, users are prone to those that are highly recommended, while in advertisement recommendation, whether an advertisement is presented to a user is purely subject to the advertiser's provision, rather than at random. In these scenarios, the missing pattern of data mainly depends on whether the users are exposed to the items, and consequently, the ratings in fact are \emph{missing not at random} (MNAR)~\cite{he2016fast}. These findings shed light on the origination of selection bias from MNAR ~\cite{sportisse2020imputation}. Therefore the selection bias cannot be ignored in practice and it has to be modeled properly in order for reliable recommendation prediction. How to model the missing data mechanism and debias the rating performance forms up the main motivation of this research.
\noindent\textbf{Existing MNAR-aware Methods}
There are abundant methods for addressing the MNAR problem on the implicit or explicit feedback. For implicit feedback, traditional methods~\cite{hu2008collaborative} take the uniformity assumption that assigns a uniform weight to down-weight the missing data, assuming that each missing entry is equally likely to be negative feedback. This is a strong assumption and limits models’ flexibility for real applications. Recently, researchers tackle MNAR data directly through simulating the generation of the missing pattern under different heuristics~\cite{hernandez2014probabilistic}. Of these works, probabilistic models are presented as a proxy to relate missing feedback to various factors, e.g., item features. For explicit feedback, a widely adopted mechanism is to exploit the dependencies between rating missingness and the potential ratings (e.g., 1-5 star ratings)~\cite{koren2015advances}. That is, high ratings are less likely to be missing compared to items with low ratings. However, these paradigm methods involve heuristic alterations to the data, which are neither empirically verified nor theoretically proven~\cite{saito2020asymmetric}.
A couple of methods have recently been studied for addressing MNAR~\cite{hernandez2014probabilistic,liang2016causal,schnabel2016recommendations} by treating missing entries from the so-called ``exposure'' perspective, i.e., indicating whether or not an item is exposed (provided) to a user. For example, ExpoMF resorts modeling the probability of \emph{exposure}~\cite{hernandez2014probabilistic}, and up-weighting the loss of rating prediction with high \emph{exposure} probability. However, ExpoMF can lead to a poor prediction accuracy for rare items when compared with popular items. Likewise, recent works~\cite{liang2016causal,schnabel2016recommendations} resort to \emph{propensity score} to model \emph{exposure}. The \emph{propensity score} introduced in causal inference indicates \emph{the probability that a subject receiving the treatment or action}. Exposing a user to an item in a recommendation system is analogous to exposing a subject to a treatment. Accordingly, they adopt \emph{propensity score} to model the \emph{exposure} probability and re-weight the prediction error for each observed rating with the inverse \emph{propensity score}. The ultimate goal is to calibrate the MNAR feedbacks into missing-at-random ones that can be used to guide unbiased rating prediction.
Whilst the state-of-the-art propensity-based methods are validated to alleviate the MNAR problem for recommendation somehow, they still suffer from several major drawbacks: 1) they merely exploit the user/item latent vectors from the ratings for mitigating MNAR, but fail to disentangle different causes for MNAR from a causal perspective; 2) technically, they largely rely on propensity score estimation to mitigate MNAR problem; the performance is sensitive to the choice of propensity estimator~\cite{wang2019doubly}, which is notoriously difficult to tune.
\begin{figure}
\caption{The causal view for MNAR problem: \emph{treatment} and \emph{outcome} are terms in the theory of causal inference, which denote an action taken (e.g.,\emph{exposure}) and its result (e.g., \emph{rating}), respectively. The \emph{confounder} (e.g., \emph{social network}) is the common cause of treatment and outcome.}
\label{fig:example}
\end{figure}
\noindent\textbf{The proposed approach}
To overcome these obstacles, in contrast, we aim to address the fundamental MNAR issue in recommendation from a novel causal inference perspective, to attain a robust and unbiased rating prediction model. From a causal perspective, we argue that the selection bias (i.e., MNAR) in the recommendation system is attributed to the presence of \emph{confounders}. As explained in Figure~\ref{fig:example}, \emph{confounders} are factors (or variables) that affect both the treatment assignments (exposure) and the outcomes (rating). For example, friendships (or social network) can influence both users’ choice of movie watching and their further ratings. Users who choose to watch the movie are more likely to rate than those who do not. So, \emph{the social network is indeed a confounding factor that affects which movie the user is exposed to and how the user rates the movie}. The confounding factor results in a \emph{distribution discrepancy between the partially observed ratings and the complete ratings} as shown in Figure~\ref{fig:seb}. Without considering the distribution discrepancy, the rating model trained on the observed ratings fails to generalize well on the unobserved ratings. With this fact in mind, our idea is to analyze the confounder effect of social networks on rating and exposure, and in turn, fundamentally alleviate the MNAR problem to predict valid ratings.
\begin{figure}
\caption{The training space of conventional recommendation models is the observed rating space $\mathcal{O}$, whereas the inference space is the entire exposure space $\mathcal{D}$. The discrepancy of data distribution between $\mathcal{O}$ and $\mathcal{D}$ leads to selection bias in conventional recommendation models.}
\label{fig:seb}
\end{figure} In particular, we attempt to study the MNAR problem in recommendation from a causal view and propose an unbiased and robust method called DENC (\emph{De-bias Network Confounding in Recommendation}). To sufficiently consider the selection bias in MNAR, we model the underlying factors (i.e., inherent user-item information and social network) that can generate observed ratings. In light of this, as shown in Figure~\ref{fig:frame}, we construct a causal graph based recommendation framework by disentangling three determinants for the ratings, i.e., \emph{inherent factors}, \emph{confounder} and \emph{exposure}. Each determinant accordingly corresponds to one of three specific components in DENC: \emph{deconfonder model}, \emph{social network confounder} and \emph{exposure model}, all of which jointly determine the rating outcome.
In summary, the key contributions of this research are as follows:
\begin{itemize}
\item Fundamentally different from previous works, DENC is the first method for the unbiased rating prediction through disentangling determinants of selection bias from a causal view.
\item The proposed \emph{exposure model} is capable of revealing the exposure assignment and accounting for the confounder factors derived from the \emph{social network confounder}, which thus remedies selection bias in a principled manner.
\item We develop a \emph{deconfonder model} via the balanced representation learning that embeds inherent factors independent of the exposure, therefore mitigating the distribution discrepancy between the observed rating and inference space.
\item We conduct extensive experiments to show that our DENC method outperforms state-of-the-art methods.
The generalization ability of our DENC is also validated by verifying different degrees of confounders. \end{itemize}
\section{Related Work\label{related work}}
\subsection{MNAR-aware Methods} \subsubsection{Traditional Heuristic Models} Early works on explicit feedback formulate a recommendation a rating prediction problem in which the large volume of unobserved ratings (i.e., missing data) is assumed to be extraneous for user preference~\cite{hu2008collaborative}. Following this unreliable assumption, numerous recommenders have been developed including basic matrix factorization based-recommenders~\cite{rendle2008online} and sophisticated ones such as SVD++~\cite{koren2015advances}. As statistical analysis with missing data techniques, especially MNAR proposition, find widespread applications, there is much interest in understanding its impacts on the recommendation system. Previous research has shown that for explicit-feedback recommenders, users’ ratings are MNAR~\cite{marlin2009collaborative}. Marlin and Zemel~\cite{marlin2009collaborative} first study the effect of violating MNAR assumption in recommendation methods; they propose statistical models to address MNAR problem of missing ratings based on the heuristic that users are more likely to supply ratings for items that they do like. Another work of ~\cite{hernandez2014probabilistic} also has focused on addressing MNAR problem; they propose a probabilistic matrix factorization model (ExpoMF) for collaborative filtering that learns from observation data. However, these heuristic paradigm methods are neither empirically verified nor theoretically proven~\cite{schnabel2016recommendations,liang2016causal}. \subsubsection{Propensity-based Model} The basic idea of propensity scoring methods is to turn the outcomes of an observational study into a pseudo-randomized trial by re-weighting samples, similarly to importance sampling. Typically, using Inverse Propensity Weighting (IPW) estimator, Liang~\cite{liang2016causal} proposes a framework consisted of one exposure model and one preference model. The exposure model is estimated by a Poisson factorization, then preference model is fit with weighted click data, where each click is weighted by the inverse of exposure and be used to alleviate popularity bias. Based on Self Normalized Inverse Propensity Scoring (SNIPS) estimator, the model in \cite{schnabel2016recommendations} are developed either directly through observed ratings of a missing-completely-at-random sample estimated by SNIPS or indirectly through user and item covariates. These works re-weight the observational click data as though it came from an ``experiment'' where users are randomly shown items. Thus, the measurement is still adopting re-weighting strategies to mimic the missing-completely-at-random like most of the heuristic models do~\cite{yang2018unbiased}. Besides, these works are sensitive to the choice of propensity score estimators~\cite{wang2019doubly}. In contrast, our work relies solely on the observed ratings: we do not require ratings from a gold-standard randomized exposure estimation and nor do we use external covariates; moreover, we consider another important bias in the recommendation scenario, namely, social counfounding bias. \subsection{Social Network-based Methods} The effectiveness of social networks has been proved by a vast amount of social recommenders. Purushotham~\cite{purushotham2012collaborative} has explored how traditional factorization methods can exploit network connections; this brings the latent preferences of connected users closer to each other, reflecting that friends have similar tastes. Other research has included social information directly into various collaborative filtering methods. TrustMF~\cite{yang2016social} adopts collaborative filtering to map users into low-dimensional latent feature spaces in terms of their trust relationship; the remarkable performance of the proposed model reflects individuals among a social network will affect each other while reviewing items. SocialMF~\cite{jamali2010matrix} incorporates trust propagation into the matrix factorization model, which assumes the factors of every user are dependent on the factor vectors of his/her direct neighbors in the social network. However, despite the remarkable contribution of social network information in various recommendation methods, it has not been utilized in controlling for confounding bias of causal inference-based recommenders yet. \section{DENC Method}
\subsection{Notations} We first give some preliminaries of our method and used notation. Suppose we have $m \times n$ rating matrix $Y\in\mathbb{R}^{m \times n} =[\dot{y}_{ui}]$ describing the numerical rating of $m$ users on $n$ items. Let \(U = \{u_{1},u_{2},...,u_{n}\}\) and \(I = \{i_{1},i_{2},...,i_{m}\}\) be the set of users and items respectively. For each user-item pair, we use \(a_{ui}\) to indicate whether user \(u\) has been exposed to item \(i\) and $a_{ui}\in \{0,1\}$. We use \(y_{ui}\) to represent the rating given by \(u\) to item \(i\).
\iffalse
Recall that $do$-calculus on the treatment $t$
removes the dependence of $t$ on other variables,
which results in the bias $P(y|x,do(t=1))\neq P(y|x,t=1)$.
An unbiased estimation of $P(y|x,do(t=1))$ can only be obtained
by ``adjusting'' all confounders, i.e., conditioning on their various values and averaging the result.
Following most causal inference literature~\cite{imbens2000role},
two assumptions are sufficient to identify the $P(y|x,do(t=1))$.
\fi
\subsection{A Causal Inference Perspective} Viewing recommendation from a causal inference perspective, we argue that exposing a user to an item in recommendation is an intervention analogous to exposing a patient to a treatment in a medical study. Following the potential outcome framework in causal inference~\cite{rubin1974estimating}, we reformulate the rating prediction as follows. \begin{problem}[Causal View for Recommendation] For every user-item pair with a binary exposure $a_{ui}\in\{0,1\}$, there are two potential rating outcomes $y_{ui}(0)$ and $y_{ui}(1)$. We aim to \emph{estimating the ratings had all movies been seen by all users}, i.e., estimate $y_{ui}(1)$ (i.e., $y_{ui}$) for all $u$ and $i$. \label{th:pof} \end{problem}
\begin{figure}
\caption{The causal graph for recommendation.}
\label{fig:underlying}
\end{figure} As we can only observe the outcome $y_{ui}(1)$ when the user $u$ is exposed by the item $i$, i.e., $a_{ui}=1$, we target at the problem that \emph{what would happen to the unobserved rating $y_{ui}$ if we set exposure variable by setting $a_{ui}=1$}. In our settings, the confounder derived from the social network among users are denoted as a common cause that affects the exposure $a_{ui}$ and outcome $y_{ui}$. We aim to disentangle the underlying factors in observation ratings and social networks as shown in Figure~\ref{fig:underlying}. The intuition behind Figure~\ref{fig:underlying} is that the observed rating outcomes are generated as a result of both inherent and confounding factors. The inherent factors refer to the user preferences and inherent item properties, and auxiliary factors are the confounding factors from the social network. By disentangling determinants that cause the observed ratings, we can account for effects separately from the selection bias of confounders and the exposure, which ensures to attain an unbiased rating estimator with superior generalization ability.
Followed the causal graph in Figure~\ref{fig:underlying}, we now design our DENC method incorporates three determinants in Figure~\ref{fig:frame}. Each component accordingly corresponds to one of three specific determinants: \emph{social network confounder}, \emph{exposure model} and \emph{deconfonder model}, which jointly determine the rating outcome.
\begin{figure}
\caption{Our DENC method consists of four components: \emph{Social network confounder}, \emph{exposure model}, \emph{deconfonder model} and \emph{rating model}.}
\label{fig:frame}
\end{figure}
\subsection{Exposure Model} To cope with the selection bias caused by users or the external social relations, we build on the causal inference theory and propose an effective exposure model. Guided by the treatment assignment mechanism in causal inference, we propose a novel exposure model that computes the probability of exposure variable specific to the user-item pair. This model is beneficial to understand the generation of the \emph{Missing Not At Random} (MNAR) patterns in ratings, which thus remedies selection biases in a principled manner. For example, user A goes to watch the movie because of his friend's strong recommendation. Thus, we propose to mitigate the selection bias by exploiting the network connectivity information that indicating \emph{to which extent the exposure for a user will be affected by its neighbors}.
\subsubsection{Social Network Confounder} To control the selection bias arisen from the external social network, we propose a confounder representation model that quantifies the common biased factors affecting both the exposure and rating outcome.
We now discuss the method of choosing and learning exposure. Let $G$ present the social relationships among users $U$, where an edge denotes there is a friend relationship between users. We resort to node2vec~\cite{grover2016node2vec} method and learn network embedding from diverse connectivity provided by the social network. More details about node2vec method can be found in Section~\ref{sec:embedding} in the appendix. To mine the deep social structure from $G$, for every source user $u$, node2vec generates the network neighborhoods $N_s(u) \subset G$ of node $u$ through a sampling strategy to explore its neighborhoods in a breadth-first sampling as well as a depth-first sampling manner. The representation $Z_u$ for user $u$ can be learned by minimizing the negative likelihood of preserving network neighborhoods $N_s(u)$: \begin{equation} \begin{split}
\mathcal{L}_{z}&=-{\sum_{u \in G}{\log P(N_{s}(u)|Z_u)}}\\
&=\sum_{u \in G}\left[\log \sum_{v \in G} \exp (Z_v\cdot Z_u)-\sum_{u_{i} \in N_{s}(u)} Z_{u_{i}}\cdot Z_u\right]
\label{eq:l_z} \end{split} \end{equation} The final output $Z_u\in \mathbb{R}^d$ sufficiently explores diverse neighborhoods of each user, which thus represents to what extent the exposure for a user is influenced by his friends in graph $G$.
\subsubsection{Exposure Assignment Learning} The exposure under the recommendation scenario is not randomly assigned. Users in social networks often express their own preferences over the social network, which therefore will affect their friends' exposure policies. In this section, to characterize the \emph{Missing Not At Random} (MNAR) pattern in ratings, we resort to causal inference~\cite{pearl2009causality} to build the exposure mechanism influenced by social networks.
To begin with, we are interested in the binary exposure $a_{ui}$ that defines whether the item $i$ is exposed ($a_{ui}=1$) or unexposed ($a_{ui}=1$) to user $u$, i.e., $a_{ui}=1$. Based on the informative confounder learned from social network, we propose the notation of \emph{propensity} to capture the exposure from the causal inference language. \begin{definition}[Propensity] \label{df:pro} Given an observed rating $y_{ui}\in\text{rating}$ and confounder $Z_u$ in~\eqref{eq:l_z}, the propensity of the corresponding exposure for user–item pair $(u,i)$ is defined as \begin{equation}
\pi(a_{ui};Z_u)=P(a_{ui}=1|y_{ui}\in\text{rating};Z_u)
\label{eq:prp} \end{equation} \end{definition} In view of the foregoing, we model the exposure mechanism by the probability of $a_{ui}$ being assigned to 0 or 1. \begin{equation}
\begin{aligned} P(a_{ui}) &=\prod_{u,i} P\left(a_{ui} \right) =\prod_{(u,i)\in\mathcal{O}} P\left(a_{ui}=1 \right) \prod_{(u,i)\notin\mathcal{O}} P\left(a_{ui}=? \right) \end{aligned}
\label{eq:pa_all} \end{equation} where $\mathcal{O}$ is an index set for the observed ratings. The case of $a_{ui}=1$ can result in an observed rating or unobserved rating: 1) for the observed rating represented by $y_{ui}\in \text{rating}$, we definitely know the item $i$ is exposed, i.e., $a_{ui}=1$; 2) an unobserved rating $y_{ui}\notin \text{rating}$ may represent a negative feedback (i.e., the user is not reluctant to rating the item) on the exposed item $a_{ui}=1$. In light of this, based on~\eqref{eq:prp}, we have \begin{equation} \begin{split}
P(a_{ui}=1)&=P(a_{ui}=1, y_{ui}\in\text{rating})+P(a_{ui}=1,y_{ui}\notin\text{rating})\\
&=\pi(a_{ui};Z_u)P(y_{ui}\in\text{rating})+W_{ui}P(y_{ui}\notin\text{rating}) \end{split} \label{eq:pa_1} \end{equation}
where $W_{ui}=P(a_{ui}=1|y_{ui}\notin \text{rating})$. The exposure $a_{ui}$ that is unknown follows the distributions as \begin{equation} \begin{split}
P(a_{ui}=?)=1- P(a_{ui}=1) \end{split} \label{eq:pa_u} \end{equation}
By substituting Eq.~\eqref{eq:pa_1} and Eq.~\eqref{eq:pa_u} for Eq.~\eqref{eq:pa_all}, we attain the exposure assignment for the overall rating data as \begin{equation}
P(a_{ui})=\prod_{(u, i) \in \mathcal{O}} \pi(a_{ui};Z_u)
\prod_{(u, i)\notin \mathcal{O}} \left(1-W_{ui}\right)
\label{eq:obj_p} \end{equation} Inspired by~\cite{pan2008one}, we assume uniform scheme for $W_{ui}$ when no side information is available. According to most causal inference methods~\cite{shalit2017estimating,pearl2009causality}, a widely-adopted parameterization for $\pi(a_{ui};Z_u)$ is a logistic regression network parameterized by $\Theta=\{W_0,b_{0}\}$, i.e., \begin{equation}
\pi(a_{ui};Z_u,\Theta)=\mathbb{I}_{y\in\text{rating}}\cdot\left[1+e^{-(2 a_{ui}-1)\left(Z_u^{\top} \cdot W_0+b_{0}\right)}\right]^{-1}
\label{eq:pa} \end{equation} Based on Eq.~\eqref{eq:pa}, the overall exposure $P(a_{ui})$ in Eq.~\eqref{eq:obj_p} can be written as the function of parameters $\Theta=\{W_0,b_{0}\}$ and $Z_u$, i.e., \begin{equation}
\begin{split}
\mathcal{L}_{a}=\sum_{u,i}-\log P(a_{ui} ;Z_u, \Theta)
\end{split}
\label{eq:l_a} \end{equation} where social network confounder $Z_u$ is learned by the pre-trained node2vec algorithm. Similar to supervised learning, $\Theta$ can be optimized through minimization of the negative log-likelihood.
\subsection{Deconfounder Model} Traditional recommendation learns the latent factor representations for user and item by minimizing errors on the observed ratings, e.g., matrix factorization. Due to the existence of selection bias, such a learned representation may not necessarily minimize the errors on the unobserved rating prediction. Inspired by~\cite{shalit2017estimating}, we propose to learn a balanced representation that is independent of exposure assignment such that it represents inherent or invariant features in terms of users and items. The invariant features must also lie in the inference space shown in Figure~\ref{fig:seb}, which can be used to consistently infer unknown ratings using observed ratings. This makes sense in theory: if the learned representation is hard to distinguish across different exposure settings, it represents invariant features related to users and items.
According to Figure~\ref{fig:underlying}, we can define two latent vectors $U\in\mathbb{R}^{k_d}$ and $I\in\mathbb{R}^{k_d}$ to represent the inherent factor of a user and a item, respectively. Recall that different values for $W_{ui}$ in Eq.~\eqref{eq:obj_p} can generate different exposure assignments for the observed rating data. Following this intuition, we construct two different exposure assignments $a$ and $\hat{a}$ corresponding two settings of $W_{ui}$. Accordingly, $\Phi_{(a)}$ and $\Phi_{(\hat{a})}$ are defined to include inherent factors of users and items, i.e., $\Phi_{(a)}=\left[U_{1}^{(a)},\cdots, U_{M}^{(a)}, I_{1}^{(a)},\cdots,I_{M}^{(a)}\right] \in \mathbb{R}^{k_d\times 2M }$, $\Phi_{(\hat{a})}=\left[U_{1}^{(\hat{a})},\cdots, U_{M}^{(\hat{a})}, I_{1}^{(\hat{a})},\cdots,I_{M}^{(\hat{a})}\right] \in \mathbb{R}^{k_d \times 2M}$. Figure~\ref{fig:underlying} also indicates that the inherent factors of user and item would keep unchanged even if the exposure variable is altered from 0 to 1, and vice versa. That means $U\in\mathbb{R}^{k_d}$ and $I\in\mathbb{R}^{k_d}$ should be independent of the exposure assignment, i.e., $U^{(a)}\perp \!\!\! \perp U^{(\hat{a})}$ or $I^{(a)}\perp \!\!\! \perp I^{(\hat{a})}$. Accordingly, minimizing the discrepancy between $\Phi_{(a)}$ and $\Phi_{(\hat{a})}$ ensures that the learned factors embeds no information about the exposure variable and thus reduce selection bias. The penalty term for such a discrepancy is defined as \begin{equation} \begin{split}
\mathcal{L}_{d}=\text{disc}\left(\Phi_{(\hat{a})},\Phi_{(a)}\right)\\ \end{split} \label{eq:la} \end{equation}
Inspired by~\cite{muller1997integral}, we employ \emph{Integral Probability Metric} (IPM) to estimate the discrepancy between $\Phi_{(\hat{a})}$ and $\Phi_{(a)}$. $\text{IPM}_{\mathcal{F}}(\cdot,\cdot)$ is the (empirical) integral probability metric defined by the function family $\mathcal{F}$. Define two probability distributions $\mathbb{P}=P(\Phi_{(\hat{a})})$ and $\mathbb{Q}=P(\Phi_{(a)})$, the corresponding IPM is denoted as \begin{equation}
\text{IPM}_{\mathcal{F}}(\mathbb{P},\mathbb{Q})=\sup _{f \in \mathcal{F}}\left|\int_{S} f d \mathbb{P}-\int_{S} f d \mathbb{Q}\right| \end{equation} where $\mathcal{F}:S\rightarrow \mathbb{R}$ is a class of real-valued bounded measurable functions. We adopt $\mathcal{F}$ as 1-Lipschitz functions that lead IPM to the Wasserstein-1 distance, i.e., \begin{equation}
Wass(\mathbb{P},\mathbb{Q})=\inf _{f \in \mathcal{F}} \sum_{\mathbf{v}\in\text{col}_{i}(\Phi_{(\hat{a})})}\|f(\mathbf{v})-\mathbf{v}\| \mathbb{P}(\mathbf{v}) d \mathbf{v}
\label{eq:l_ind} \end{equation}
where $\mathbf{v}$ is the $i$-th column of $\Phi_{(\hat{a})}$ and the set of push-forward functions $\mathcal{F}=\left\{f \mid f: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d} \text { s.t. } \mathbb{Q}(f(\mathbf{v}))=\mathbb{P}(\mathbf{v})\right\}$ can transform the representation distribution of the exposed $\Phi_{(\hat{a})}$ to that of the unexposed $\Phi_{(a)}$. Thus, $\|f(\mathbf{v})-\mathbf{v}\|$ is a pairwise distance matrix between the exposed and unexposed user-item pairs. Based on the discrepancy defined in~\eqref{eq:l_ind}, we define $C(\Phi)=\|f(\mathbf{v})-\mathbf{v}\|$ and reformulate penalty term in~\eqref{eq:la} as \begin{equation}
\mathcal{L}_{d}=\inf_{\gamma \in \Pi\left(\mathbb{P}, \mathbb{Q}\right)} \mathbb{E}_{(\mathbf{v}, f(\mathbf{v})) \sim \gamma}
C(\Phi)
\label{eq:l_ind} \end{equation}
We adopt the efficient approximation algorithm proposed by~\cite{shalit2017estimating} to compute the gradient of~\eqref{eq:l_ind} for training the deconfounder model. In particular, a mini-batch with $l$ exposed and $l$ unexposed user-item pairs is sampled from $\Phi_{(\hat{a})}$ and $\Phi_{(a)}$, respectively. The element of distance matrix $C(\Phi)$ is calculated as $C_{ij}=\|\text{col}_{i}(\Phi_{(\hat{a})})-\text{col}_{j}(\Phi_{(a)})\|$. After computing $C(\Phi)$, we can approximate $f$ and the gradient against the model parameters ~\footnote{For a more detailed calculation, refer to Algorithm 2 in the appendix of prior work~\cite{shalit2017estimating}}. In conclusion, the learned latent factors generated by the deconfounder model embed no information about exposure variable. That means all the confounding factors are retained in social network confounder $Z_u$.
\subsection{Learning} \subsubsection{Rating prediction} Having obtained the final representations $U$ and $I$ by the deconfounder model, we use an inner product of $U^{\top}I$ as the inherent factors to estimate the rating. As shown in the causal structure in Figure~\ref{fig:frame}, another component affecting the rating prediction is the social network confounder. A simple way to incorporate these components into recommender systems is through a linear model as follows. \begin{equation}
\hat{y}_{ui}=\sum_{u,i\in \mathcal{O}} U^{\top}I+ {W_u}^{\top} Z_u+\epsilon_{ui},\quad \epsilon_{ui}\sim\mathcal{N}(0,1) \end{equation} where $W_u$ is coefficient that describes how much the confounder $Z_u$ contributes to the rating. To define the unbiased loss function for the biased observations $y_{ui}$, we leverage the IPS strategy~\cite{schnabel2016recommendations} to weight each observation with \emph{Propensity}. By Definition~\ref{df:pro}, the intuition of the inverse propensity is to down-weight the commonly observed ratings while up-weighting the rare ones. \begin{equation}
\mathcal{L}_{y}=\frac{1}{|\mathcal{O}|}\sum_{u,i\in \mathcal{O}}\frac{ \left(y_{ui}-\hat{y}_{ui}\right)^{2}}{\pi(a_{ui};Z_u)}
\label{eq:loss_y} \end{equation}
\subsubsection{Optimization}
To this end, the objective function of our DENC method to predict ratings could be derived as: \begin{equation} \begin{aligned} \mathcal{L}= \mathcal{L}_{y} +\lambda_a\mathcal{L}_{a}+\lambda_z\mathcal{L}_{z}+ \lambda_d\mathcal{L}_{d}+\mathcal{R}(\Omega) \end{aligned} \end{equation} where $\Omega$ represents the trainable parameters and $\mathcal{R}(\cdot)$ is a squared $l_2$ norm regularization term on $\Omega$ to alleviate the overfitting problem. $\lambda_a$, $\lambda_z$ and $\lambda_d$ are trade-off hyper-parameters. To optimize the objective function, we adopt Stochastic Gradient Descent(SGD)~\cite{bottou2010large} as the optimizer due to its efficiency.
\iffalse
\begin{algorithm}[t]
\caption{Causal Inference for Debaising Recommendation}
\label{alg:generating_strategy}
\begin{algorithmic}[1]
\renewcommand{\textbf{Input:}}{\textbf{Input:}}
\renewcommand{\textbf{Output:}}{\textbf{Output:}}
\REQUIRE{a dataset of exposures and ratings $\{(a_{ui}, y_{ui}(a_{ui}))\}_{u,i},\,\, i=1, \ldots, I, u=1,\ldots,U$, social graph $G=(U,E,W)$, Dimensions $d$, Implicit data $M_{u,i}$, Walks per node $r$, Walk length $l$, Context size $k$, Return $p$, In-out $q$}
\ENSURE{the potential outcome given treatment $\hat{y}_{ui}(1)$}
\STATE Initialize the exposure factor $W_{ui}$
\STATE Set $\hat{P}(t)=\frac{1}{U\odt I}\sum_{u,i}\mathbf{1}(a_{ui}=t)$ for all $t$;\\
\textbf{// Confounder Representation}
\FOR{\textbf{all} nodes $u\in U$}
\STATE Compute $Z_u$ by maximizing $\mathcal{L}_z$ Eq.~\eqref{eq:l_z}
\ENDFOR
\STATE \\
\textbf{// Exposure Model}
\STATE Estimate propensity score $\pi(a_{ui})$ in Eq.~\eqref{eq:pa}
\STATE Compute the exposure probability $P(a_{ui}=1|\Theta)$ in Eq.~\eqref{eq:obj_p}
\STATE Update $\Theta$ by minimizing $\mathcal{L}_{a}$ in Eq.~\eqref{eq:l_a}\\
\textbf{// Bias Removing Model}
\STATE\\ Randomly sample $m$ exposed and $m$ unexposed entries
\STATE Update $\phi_{ui}$ by minimizing $\mathcal{L}_{d}$ in Eq.~\eqref{eq:l_ind}\\
\textbf{// Minimizing Rating Loss}
\STATE Estimate the rating $\hat{y}_{ui}$ for all entries by Eq.~\eqref{eq:hat_y}
\STATE Update parameters $W_u,U_u,V_i$ by fitting $\hat{y}_{ui}|m_{ui}=1$ to the observed data
$y_{ui}(m_{ui}=1)$, i.e.,
minimizing $\mathcal{L}_{y}$ of~\eqref{eq:loss_y}.
\end{algorithmic}
\end{algorithm} \fi
\section{Experiments} To more thoroughly understand the nature of MNAR issue and the proposed unbiased DENC, experiments are conducted to answer the following research questions: \begin{itemize}[leftmargin=*]
\item (\textbf{RQ1}) How confounder bias caused by the social network is manifested in real-world recommendation datasets?
\item (\textbf{RQ2}) Does our DENC method achieve the state-of-the-art performance in debiasing recommendation task?
\item (\textbf{RQ3}) How does the embedding size of each component (e.g., social network confounder and deconfounder model) in our DENC method impact the debiasing performance?
\item (\textbf{RQ4}) How do the missing social relations impact the debiasing performance of our DENC method? \end{itemize}
\subsection{Setup} \subsubsection{Evaluation Metrics} We adopt two popular metrics including Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) to evaluate the performance. Since improvements in MAE or RMSE terms can have a significant impact on the quality of the Top-$K$ recommendations~\cite{koren2008factorization}, we also evaluate our DENC with Precision@K and Recall@K for the ranking performance\footnote{We consider items with a rating greater than or equal to 3.5 as relevant}. \subsubsection{Datasets}\label{dataset}
We conduct experiments on three datasets including one semi-synthetic dataset and two benchmark datasets \texttt{Epinions} \footnote{http://www.cse.msu.edu/~tangjili/trust.html} and \texttt{Ciao}~\cite{tang-etal12a} \footnote{http://www.cse.msu.edu/~tangjili/trust.html}. We maintain all the user-item interaction records in the original datasets instead of discard items that have sparse interactions with users.\footnote{Models can benefit from the preprocessed datasets in which all items interact with at least a certain amount of users, for such preprocessing will reduce the dataset sparsity. } The semi-synthetic dataset is generated by incorporating the social network into \texttt{MovieLens}\footnote{https://grouplens.org/datasets/movielens} dataset. The details of these datasets are given in Section~\ref{sec:data} in the appendix.
\subsubsection{Baselines} We compare our DENC against three groups of methods for rating prediction: (1) \textbf{Traditional methods}, including NRT~\cite{li2017neural} and PMF~\cite{mnih2008probabilistic}. (2) \textbf{Social network-based methods}, including GraphRec~\cite{fan2019graph}, DeepFM+~\cite{guo1703factorization}, SocialMF~\cite{jamali2010matrix}, SREE~\cite{li2017social} and SoReg~\cite{ma2011recommender}. (3) \textbf{Propensity-based methods}, including CausE~\cite{bonner2018causal} and D-WMF~\cite{wang2018deconfounded}. More implementation details of baselines and parameter settings are included in Section~\ref{sec:baseline} in the appendix.
\begin{table}[!htb]
\caption{Statistics of Datasets. Density for rating (density-R) is $\#ratings/(\#users \cdot\#items)$, Density for social relations (density-SR) is $\#relations/(\#users \cdot\#users)$.}
\label{tab:2}
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{cccc}
\toprule
&\texttt{Epinions} & \texttt{Ciao} & \texttt{MovieLens-1M}\\
\midrule
\# \texttt{users}& 22,164& 7,317& 6,040\\
\# \texttt{items} &296,277 & 104,975 &3,706 \\\hline
\# \texttt{ratings}& 922,267 &283,319 &1000,209\\
\texttt{density-R} (\%) & 0.0140 & 0.0368 &4.4683\\
\hline
\# \texttt{relations} & 355,754&111,781 & 9,606\\
\texttt{density-SR} (\%) & 0.0724 &0.2087 & 0.0263\\
\bottomrule
\end{tabular}
} \end{table} \subsubsection{Parameter Settings} We implement all baseline models on a Linux server with Tesla P100 PCI-E 16GB GPU.~\footnote{Our code is currently shared on Github, however, due to the double-blind submission policy requirement, we leave the link void now but promise to activate it after paper acceptance.} Datasets for all models except CausE~\footnote{ As in CausE, we sample 10\% of the training set to build an additional debiased dataset (mandatory in model training), where items are sampled to be uniformly exposed to users. } are split as training/test sets with a proportion of 80/20, and 20\% of the training set are validation set.
We optimize all models with Stochastic Gradient Descent(SGD)~\cite{bottou2010large}. For fair comparison, a grid search is conducted to choose the optimal parameter settings, e.g., dimension of user/item latent vector $k_{MF}$ for matrix factorization-based models and dimension of embedding vector $d$ for neural network-based models. The embedding size is initialized with the Xavier~\cite{glorot2010understanding} and searched in $[8, 16, 32, 64, 128, 256]$. The batch size and learning rate are searched in $[32, 64, 128, 512, 1024]$ and $[0.0005, 0.001, 0.005, 0.01, 0.05, 0.1]$, respectively. The maximum epoch $N_{epoch}$ is set as 2000, an early stopping strategy is performed. Moreover, we employ three hidden layers for the neural components of NRT, GraphRec and DeepFM+. Like our DENC method, DeepFM+ uses node2vec to train the social network embeddings. Hence, the embedding size of its node2vec is set as the same as in our DENC for a fair comparison.
Without specification, unique hyperparameters of DENC are set as: three coefficients $\lambda_n$, $\lambda_u$ and $\lambda_i$ are tuned in $\{10^{-3},10^{-4},10^{-5}\}$. The dimension of node2vec embedding size $k_a$ and the dimension of inherent factor $k_{d}$ are tuned in $[ 8, 16, 32, 64, 128, 256 ]$, and their influences are reported in Section~\ref{ablation}.
\begin{table*} \centering
\caption{
Performance comparison: bold numbers are the best results. Strongest baselines are highlighted with underlines.
}
\label{tab:4} \resizebox{0.8\textwidth}{!}{
\begin{tabular}{c||c||cc||ccccc||cc||c} \hline
& & \multicolumn{2}{c||}{\textbf{Traditional}} & \multicolumn{5}{c||}{\textbf{Social network-based}} & \multicolumn{2}{c||}{\textbf{Propensity-based}} & \textbf{Ours}
\\ [3pt]\hline \hline
Dataset & Metrics & PMF & NRT & SocialMF & SoReg & SREE & GraphRec & DeepFM+ & CausE & D-WMF& \textbf{DENC}\\\hhline{-||-||--||-----||--||-}
\multicolumn{1}{c||}{\texttt{Epinions}} & MAE & 0.9505 & 0.9294 & 0.8722 & 0.8851 & 0.8193 & 0.7309 & 0.5782 & 0.5321 & \underline{0.3710} &\textbf{0.2684} \\[3pt]
\multicolumn{1}{c||}{} & RMSE & 1.2169 & 1.1934 & 1.1655 & 1.1775 & 1.1247 & 0.9394 & 0.6728 & 0.7352 & \underline{0.6299} & \textbf{0.5826} \\ [3pt]\hline
\multicolumn{1}{c||}{\texttt{Ciao}} & MAE & 0.8868 & 0.8444 & 0.7614 & 0.7784 & 0.7286 &0.6972 & 0.3641 & 0.4209 & \underline{0.2808} &\textbf{0.2487} \\[3pt]
\multicolumn{1}{c||}{} & RMSE &1.1501 & 1.1495 & 1.0151 &1.0167 & 0.9690 &0.9021 & 0.5886 & 0.8850 & \underline{0.5822} &\textbf{0.5592} \\[3pt] \hline
\multicolumn{1}{c||}{\texttt{MovieLens-1M}} & MAE & 0.8551 & 0.8959 & 0.8674 & 0.9255 & 0.8408 & 0.7727 & 0.5786 &0.4683 & \underline{0.3751} & \textbf{0.2972} \\[3pt]
\multicolumn{1}{c||}{$\Delta(Z_u)=-0.35$} & RMSE & 1.0894 &1.1603 & 1.1161 & 1.1916 & 1.0748 & 0.9582 & 0.6730 &0.8920 & \underline{0.6387} &\textbf{0.5263} \\[3pt] \hline
\multicolumn{1}{c||}{\texttt{MovieLens-1M}} & MAE & 0.8086 & 0.8801 & 0.8182 & 0.8599 & 0.7737 & 0.7539 & 0.5281 & 0.4221 & \underline{0.3562} & \textbf{0.2883} \\[3pt]
\multicolumn{1}{c||}{$\Delta(Z_u)=0$} & RMSE & 1.0034 & 1.1518 & 1.0382 & 1.1005 & 0.9772 & 0.9454 & 0.6477 & 0.8333 & \underline{0.6152} &\textbf{0.5560} \\ [3pt]\hline
\multicolumn{1}{c||}{\texttt{MovieLens-1M}} & MAE & 0.7789 &0.7771 & 0.7969 & 0.8428 & 0.7657 & 0.7423 &0.3672 & 0.4042 & \underline{0.3151} & \textbf{0.2836} \\[3pt]
\multicolumn{1}{c||}{$\Delta(Z_u)=0.35$} & RMSE & 0.9854 &0.9779 & 1.0115 & 1.0792 & 0.9746 & 0.9344 &\underline{0.5854} & 0.8173 & 0.5962 &\textbf{0.5342} \\[3pt] \hline \end{tabular} } \end{table*}
\subsection{Understanding Social Confounder (RQ1)} We initially conduct an experiment to understand to what extent the confounding bias caused by social networks is manifested in real-world recommendation datasets. The social network as a confounder will bias the interactions between the user and items. We aim to verify two kinds of scenarios: (1) User in the social network interacts with more items than users outside the social network. (2) The pair of user-neighbor in the social network has more common interacted items than the pair of user-neighbor outside the social network. Intuitively, an unbiased platform should expect users to interact with items broadly, which indicates that interactions are likely to be evenly distributed. Thus, we investigate the social confounder bias by analyzing the statistics of interactions in these two scenarios in \texttt{Epinions} and \texttt{Ciao} dataset.
\begin{figure}
\caption{Scenario (1): the distribution of $x$ (the number of items interacted by a user). The smooth probability curves visualize how the number of items is distributed. }
\label{fig:prove_1_ciao}
\label{fig:prove_1_epinion}
\label{fig:prove_1}
\end{figure}
\begin{figure}
\caption{Scenario (2): the distribution of $x$ (the number of items commonly interacted by a user-pair). }
\label{fig:prove_2_ciao}
\label{fig:prove_2_epinion}
\label{fig:prove_2}
\end{figure}
For the first scenario, we construct two user sets within or outside the social network, i.e., $\mathcal{U}_{G}$ and $\mathcal{U}_{\backslash G}$. Specially, $\mathcal{U}_{G}$ is constructed by randomly sampling a set of users in social network $G$, and $\mathcal{U}_{\backslash G}$ is randomly sampled out of $G$. The size of $\mathcal{U}_{G}$ and $\mathcal{U}_{\backslash G}$ is the same and defined as $n$. Following the above guidelines, we sample $n=70$ users for $\mathcal{U}_{G}$ and $\mathcal{U}_{\backslash G}$. Figure~\ref{fig:prove_1} depicts the distributions of the interacted items by users in $\mathcal{U}_{G}$ and $\mathcal{U}_{\backslash G}$. The smooth curves are continuous distribution estimates produced by the kernel density estimation. Apparently, the distribution for $\mathcal{U}_{\backslash G}$ is significantly skewed: Most of the users interact with few items. For example, on \texttt{Ciao}, more than 90\% of users interact with fewer than 50 items. By contrast, most users in the social network tend to interact with items more frequently, which is also confirmed by the even distribution. In general, the distribution curve of $\mathcal{U}_{G}$ is quite different from $\mathcal{U}_{\backslash G}$, which reflects that the social network influences the interactions between users and items. In addition, the degree of bias varies across datasets: \texttt{Epinions} is less biased than \texttt{Ciao}.
For the second scenario, based on $\mathcal{U}_{G}$ and $\mathcal{U}_{\backslash G}$, we further analyze the number of commonly interacted items by the user-pair. Particularly, we randomly sample four one-hop neighbours for each user in $\mathcal{U}_{G}$ to construct user-pairs. Since users in $\mathcal{U}_{\backslash G}$ have no neighbours, for each of them, we randomly select another four users\footnote{According to the statistics, we discover that 90\% of users have at least four one-hop neighbours in \texttt{Ciao} and \texttt{Epinions}} in $\mathcal{U}_{\backslash G}$ to construct four user-pairs. Recall that $\mathcal{U}_{G}$ and $\mathcal{U}_{\backslash G}$ both have 70 users, then we totally have $4\times 70$ user-pairs for $\mathcal{U}_{\backslash G}$ and $\mathcal{U}_{\backslash G}$, respectively. Figure~\ref{fig:prove_2} represents the distribution of how many items are commonly interacted by the users in each pair.\footnote{For example, $\{user1, user2, user3, user4\}$ are one-hop neighbours of $user5$. If the number of commonly items interacted by $user1$ and $user5$ is 3, then $x=3$ in the $x$-axis of Figure~\ref{fig:prove_2} is nonzero.} Figure~\ref{fig:prove_2} indicates most user-neighbour pairs in the social network have fewer than 10 items in common. However the user-pairs outside the social network nearly have no items in common, i.e., less than 1. We can conclude that social networks can encourage users to share more items with their neighbours, compared with users who are not connected by any social networks.
\subsection{Performance Comparison (RQ2)} \label{sec:comp} We compare the rating prediction of DENC with nine recommendation baselines on three datasets including \texttt{Epinions}, \texttt{Ciao} and \texttt{MovieLens-1M}. Table~\ref{tab:4} demonstrates the performance comparison, where the confounder $\Delta{(Z_u)}$ in \texttt{MovieLens-1M} is assigned with three different settings, i.e., -0.35, 0 and 0.35. Analyzing Table~\ref{tab:4}, we have the following observations.
\begin{itemize}[leftmargin=*] \item Overall, our DENC consistently yields the best performance among all methods on five datasets. For instance, DENC improves over the best baseline model w.r.t. MAE/RMSE by 10.26/4.73\%, 3.21/2.3\%, and 7.79/11.24\% on \texttt{Epinions}, \texttt{Ciao} and \texttt{MovieLens-1M} ($\Delta{(Z_u)}$=-0.35) datasets, respectively. The results indicate the effectiveness of DENC on the task of rating prediction, which has adopted a principled causal inference way to leverage both the inherent factors and auxiliary social network information for improving recommendation performance. \item Among the three kinds of baselines, propensity-based methods serves as the strongest baselines in most cases. This justifies the effectiveness of exploring the missing pattern in rating data by estimating the propensity score, which offers better guidelines to identify the unobserved confounder effect from ratings. However, propensity-based methods perform worse than our DENC, as they ignore the social network information. It is reasonable that exploiting the social network is useful to alleviate the confounder bias to rating outcome. The importance of social networks can be further verified by the fact that most of the social network-based methods consistently outperform PMF on all datasets. \item All baseline methods perform better on \texttt{Ciao} than on \texttt{Epinions}, because \texttt{Epinions} is significantly sparser than \texttt{Ciao} with 0.0140\% and 0.0368\% density of ratings. Besides this, DENC still achieves satisfying performance on \texttt{Epinions} and its performance is competitive with the counterparts on \texttt{Ciao}. This demonstrates that its exposure model of DENC has an outstanding capability of identifying the missing pattern in rating prediction, in which biased user-item pairs in \texttt{Epinions} can be captured and then alleviated. In addition, the performance of DENC on three \texttt{Movielens-1M} datasets is stable w.r.t. different levels of confounder bias, which verifies the robust debiasing capability of DENC. \end{itemize}
\subsection{Ablation Study (RQ3)}\label{ablation}
In this section, we conduct experiments to evaluate the parameter sensitivity of our DENC method. We have two important hyperparameters $k_a$ and $k_{d}$ that correspond to the embedding size in loss function $\mathcal{L}_a$ and $\mathcal{L}_d$, respectively. Based on the hyperparameter setup in Section 4.1.4, we vary the value of one hyperparameter while keeping the others unchanged.
\begin{figure}
\caption{Our DENC: parameter sensitivity of $k_{a}$ and $k_{d}$ against (a) MAE (b) RMSE on \texttt{Ciao} and \texttt{Epinions} dataset.}
\label{fig:parameter_sens}
\end{figure}
Figure~\ref{fig:parameter_sens} lays out the performance of DENC with different parameter settings. For both datasets, the performance of our DENC is stable under different hyperparameters $k_a$ and $k_{d}$. The performance of DENC increases while the embedding size increase from approximately 0-15 for $k_{d}$; afterwards, its performance decreases. It is clear that when the embedding size is set to approximately $k_a$=45 and $k_{d}$=15, our DENC method achieves the optimal performance. Our DENC is less sensitive to the change of $k_a$ than $k_{d}$, since MAE/RMSE values change with a obvious concave curve along $k_{d}$=0 to 50 in Figure~\ref{fig:parameter_sens}, while MAE/RMSE values only change gently with a downwards trend along $k_{a}$=0 to 50. It is reasonable since $k_{d}$ controls the embedding size of disentangled user-item representation attained by the deconfounder model, i.e., the inherent factors, while social network embedding size $k_{a}$ serves as the controller for auxiliary social information, the former can influence the essential user-item interaction while the latter affects the auxiliary information.
\subsection{Case Study (RQ4)} We first investigate how the missing social relations affect the performance of DENC. We randomly mask a percentage of social relations to simulate the missing connections in social networks. For \texttt{Epinions}, \texttt{Ciao} and \texttt{MovieLens} dataset, we fix the social network confounder as $\Delta(Z_u)=0$. Meanwhile, we exploit different percentages of missing social relations including \{20\%, 50\%, 80\%\}. Note that we do not consider the missing percentage of $100\%$, i.e., the social network information is completely unobserved. Considering that the social network is viewed as a proxy variable of the confounder, the social network should provide partially known information. Following this guideline, we firstly investigate how the debias capability of our DENC method varies under the different missing percentages. Secondly, we also report the ranking performance of DENC (percentages of missing social relations is set to $0\%$) under Precision@K and Recall@K with $K=\{10,15,20,25,30,35,40\}$ to evaluate our model thoroughly.
\begin{figure}
\caption{Our DENC: debias performance w.r.t. different missing percentages of social relation.}
\label{fig:masking}
\end{figure}
Figure~\ref{fig:masking} illustrates our debias performance w.r.t. different missing percentages of social relations on three datasets. As shown in Figure~\ref{fig:masking}, the missing social relations can obviously degrade the debias performance of DENC method. The performance evaluated by four metrics in Figure~\ref{fig:masking} consistently degrades when the missing percentage increases from 0\% to 80\%, which is consistent with the common observation. This indicates that the underlying social network can play a significant role in a recommendation, we consider the because it can capture the preference correlations between users and their neighbours.
\begin{figure}
\caption{Performance of DENC in terms of Precision@K and RecallG@K under difference $K$}
\label{fig:pre_rec_k}
\end{figure}
Based on the evaluation on Precision@K and Recall@K, Figure~\ref{fig:pre_rec_k} show that DENC achieves stable performance on Top-$K$ recommendation when $K$ (i.e., the length of ranking list) varies from 10 to 40. Our DENC can recommend more relevant items within top $K$ positions when the ranking list length increases.
\section{Conclusion and Future Work} In this paper, we have researched the missing-not-at-random problem in the recommendation and addressed the confounding bias from a causal perspective. Instead of merely relying on inherent information to account for selection bias, we developed a novel social network embedding based de-bias recommender for unbiased rating, through correcting the confounder effect arising from social networks. We evaluate our DENC method on two real-world and one semi-synthetic recommendation datasets, with extensive experiments demonstrating the superiority of DENC in comparison to state-of-the-arts. In future work, we will explore the effect of different exposure policies on the recommendation system using the intervention analysis in causal inference. In addition, another promising further work is to explore the selection bias arisen from other confounder factors, e.g., user demographic features. This can be explained that a user's nationality affects which restaurant he is more likely to visit (i.e., exposure) and meanwhile affects how he will rate the restaurant (i.e., outcome).
\appendix
\section{APPENDIX}
\subsection{Datasets}\label{sec:data} The statistics of baseline datasets are given in Table~\ref{tab:2}. In \texttt{Epinions} and \texttt{Ciao}, the rating values are integers from 1 (like least) to 5 (like most). Since observed ratings are very sparse (rating density 0.0140\% for \texttt{Epinions} and 0.0368\% for \texttt{Ciao}), thus the rating prediction on these two datasets is challenging.
In addition, we also simulate a semi-synthetic dataset based on \texttt{MovieLens}. It is well-known that \texttt{MovieLens} is a benchmark dataset of user-movie ratings without social network information. For \texttt{MovieLens-1M}, we first need to construct a social network $G$ by placing an edge between each pair of users independently with a probability 0.5 depending on whether the nodes belong to $G$. Recall that the social network is viewed as the confounder (common cause) which affects both exposure variables and ratings. We generate the exposure assignment by the confounder $Z_u$ of three levels $\Delta(Z_u)\in\{-0.35,0,0.35\}$. Then, the exposure $a_{ui}$ and rating outcome $y_{ui}$ are simulated as follows. \begin{flalign*} a_{ui}&\sim\operatorname{Bern}\left(\Delta\left(Z_{u}\right)\right)\\
y_{ui}&=a_{ui}\cdot(y_{ui}^{\text{mov}}+\beta_u\Delta\left(Z_{u}\right)+\varepsilon) &\varepsilon\sim N(0,1), \quad u\in G\\
y_{ui}&=y_{ui}^{\text{mov}} & u\notin G
\label{eq:semi} \end{flalign*} where $y_{ui}^{\text{mov}}$ is the original rating in \texttt{MovieLens} and the parameter $\beta_u$ controls the amount of social network confounder. The exposure $a_{ui}$ indicating whether item $i$ being exposed to user $u$ is given by a Bernoulli distribution parameterized by the confounder $Z_u$. The non-zero $a_{ui}$ is used to simulate the semi-synthetic rating $y_{ui}$ by the second equation. The third equation indicates that the ratings of user will keep unchanged if s/he is not connected by $G$.
\subsection{Baselines}\label{sec:baseline} We compare our DENC against three groups of methods, covering matrix factorization method, social network-based method, and propensity-based method. For each group, we select its representative baselines with details as follows.
\begin{itemize}[leftmargin=*]
\item \textbf{PMF}~\cite{mnih2008probabilistic}:
The method utilizes user-item rating matrix and models latent factors of users and items by Gaussian distributions;
\item \textbf{NRT}~\cite{li2017neural}:
A deep-learning method
that adopts multi-layer perceptron network to model user-item interactions for rating predictions.
\item \textbf{SocialMF}~\cite{jamali2010matrix}:
It considers the social information by adding the propagation of social relation into the matrix factorization model.
\item \textbf{SoReg}~\cite{ma2011recommender}:
It models social information as regularization terms to constrain the Matrix Factorization framework.
\item \textbf{SREE}~\cite{li2017social}:
It models users and items embeddings into a Euclidean space as well as users' social relations.
\item \textbf{GraphRec}~\cite{fan2019graph}:
This is a state-of-the-art social recommender that models social information with Graph Neural Network, it
organizes user behaviors as a user-item interaction graph.
\item \textbf{DeepFM}~\cite{guo1703factorization}\textbf{+}:
DeepFM is a state-of-the-art recommender that integrates Deep Neural Networks and Factorization Machine (FM).
To incorporate the social information into DeepFM,
we change the output of FM in DeepFM+ to the linear combination of the original FM function in ~\cite{guo1703factorization} and
the pre-trained \textit{node2vec} user embeddings.
We also change the task of DeepMF from click-through rate (CTR) to rating prediction.
\item \textbf{CausE}~\cite{bonner2018causal}:
It firstly fits exposure variable embedding with Poisson factorization, then integrates the embedding into PMF for rating prediction.
\item \textbf{D-WMF}~\cite{wang2018deconfounded}:
A propensity-based model which uses Poisson Factorization to infer latent confounders then augments Weighted Matrix Factorization to correct for potential confounding bias.
\end{itemize}
\subsection{Model Variants Configuration} To get a better understanding of our DENC method, we further evaluate the key components of DENC including \textit{Exposure model} and \textit{Social network confounder}. We evaluate the performance of DENC on the condition that if a specific component is removed, and then compare the performance of the intact DENC method. In the following, we define two variants of DENC as (1) DENC-$\alpha$ that removes \textit{Exposure model}; (2) DENC-$\beta$ that removes \textit{Social network confounder}. Note that we do not consider the evaluation of removing \textit{Deconfounder} in DENC, since \textit{Deconfounder} models the inherent factors of user-item information, removing user-item information in a recommender can result in poor performance. We record evaluation results in Table~\ref{tab:component} and have the following findings: \begin{itemize}[leftmargin=*] \item By comparing DENC with DENC-$\alpha$, we find that \textit{Exposure model} is important for capturing missing patterns and thus boosting the recommendation performance. Removing \textit{Exposure model} can lead a drastic degradation of MAE/RMSE by 20.41\%/24.08\% on \texttt{Epinions} and 18.93\%/24.34\% on \texttt{Ciao}, respectively. \item We observe that without \textit{Social network confounder}, the performance of DENC-$\beta$ is deteriorated significantly, with the degradation of MAE/RMSE by 16.10\%/20.50\% on \texttt{Epinions} and 13.83\%/11.31\% on \texttt{Ciao}, respectively. \item \textit{Exposure model} has a greater impact on DENC compared with \textit{Social network confounder}. It is reasonable since \textit{Exposure model} simulates the missing patterns, then \textit{Social network confounder} can consequently debias the potential confounding bias under the guidance of missing patterns. \end{itemize}
\begin{table} \centering
\caption{
Experimental results of DENC-$\alpha$ and DENC-$\beta$.
}
\begin{tabular}{c||c||cc}\hline
Dataset &Models &MAE &RMSE\\\hhline{-||-||--}
\multicolumn{1}{c||}{\texttt{Epinions}} &DENC-$\alpha$ &0.4725 &0.8234 \\
\multicolumn{1}{c||}{} &DENC-$\beta$ &0.4294 &0.7876 \\
\multicolumn{1}{c||}{} &DENC &0.2684 &0.5826\\
\multicolumn{1}{c||}{\texttt{Ciao}} &DENC-$\alpha$ &0.4380 &0.8026 \\
\multicolumn{1}{c||}{} &DENC-$\beta$ &0.3870 &0.6723 \\
\multicolumn{1}{c||}{} &DENC &0.2487 &0.5592\\
\hline \end{tabular} \label{tab:component} \end{table}
\subsection{Investigation on Different Network Embedding Methods}\label{sec:embedding} We construct network embedding with node2vec~\cite{grover2016node2vec} that has the capacity of learning richer representations by adding flexibility in exploring neighborhoods of nodes. Besides, by adjusting the weight of the random walk between breadth-first and depth-first sampling, embeddings generated by node2vec can balance the trade-off between homophily and structural equivalence~\cite{henderson2012rolx}, both of which are essential feature expressions in recommendation systems. The key characteristic of node2vec is its scalability and efficiency as it scales to networks of millions of nodes.
By comparison, we further investigate how different network embedding methods impact the performance of DENC, i.e., LINE~\cite{tang2015line}, SDNE~\cite{wang2016structural}. \begin{itemize}[leftmargin=*] \item \textbf{LINE}~\cite{tang2015line} preserves both first-order and second-order proximities, it suits arbitrary types of information networks and can easily scale to millions of nodes. \item \textbf{SDNE}~\cite{wang2016structural} is a Deep Learning-based network embedding method, like LINE, it exploits the first-order and second-order proximity jointly to preserve the network structure. \end{itemize} We train the three embedding methods with embedding size $d$=10 while the batch size and epochs are set to 1024 and 50, respectively. The experimental results are given in Table~\ref{tab:emb}.
\begin{table}[!htb] \centering
\caption{
Experimental results of DENC under node2vec, LINE, SDNE.
} \resizebox{0.48\textwidth}{!}{
\begin{tabular}{c||c||cc||cc}\hline
Dataset &Embedding &MAE &RMSE &Precision@20 &Recall@20\\\hhline{-||-||--||--}
\multicolumn{1}{c||}{\texttt{Epinions}} &node2vec &0.2684 &0.5826 &0.2832 &0.2501\\
\multicolumn{1}{c||}{} &LINE &0.4241 &0.6307 &0.1736 &0.1534\\
\multicolumn{1}{c||}{} &SDNE &0.4021 &0.6137 &0.1928 &0.1837\\
\multicolumn{1}{c||}{\texttt{Ciao}} &node2vec &0.2487 &0.5592 & 0.2703 & 0.2212\\
\multicolumn{1}{c||}{} &LINE &0.5218 &0.7605 &0.1504 &0.1209 \\
\multicolumn{1}{c||}{} &SDNE &0.4538 &0.6274 &0.2082 &0.1594\\
\hline \end{tabular} } \label{tab:emb} \end{table}
The results show that under the same experimental settings, DENC performs worse with embeddings trained by LINE and SDNE compared with node2vec on both datasets. Although LINE considers the higher-order proximity, unlike node2vec, it still cannot balance the representation between homophily and structural equivalence~\cite{henderson2012rolx}, in which connectivity information and network structure information can be captured jointly. The results show that our DENC benefits more from the balanced representation that can learn both the connectivity information and network structure information. Based on higher-order proximity, SDNE develops a deep-learning representation method. However, compared with node2vec, SDNE suffers from higher time complexity. The deep architecture of SDNE framework mainly causes the high time complexity of SDNE, the input vector dimension can expand to millions for the auto-encoder in SDNE~\cite{cui2018survey}. Thus, we consider it reasonable that our DENC with SDNE embedding cannot outperform the counterpart with node2vec embedding under the same training epochs, since it requires more iterations for SDNE to get finer representation.
\end{document} | arXiv |
Research article | Open | Open Peer Review | Published: 13 August 2018
Hospitalization rates and outcome of invasive bacterial vaccine-preventable diseases in Tuscany: a historical cohort study of the 2000–2016 period
Elena Chiappini1,
Federica Inturrisi2,
Elisa Orlandini3,
Maurizio de Martino1 &
Chiara de Waure ORCID: orcid.org/0000-0002-4346-14944
BMC Infectious Diseasesvolume 18, Article number: 396 (2018) | Download Citation
Invasive bacterial diseases (IBD) are a serious cause of hospitalization, sequelae and mortality. Albeit a low incidence, an increase in cases due to H. influenzae was registered in the past 4 years and, in the Tuscany region, an excess of cases due to N. meningitidis since 2015 is alarming. The purpose of this study is to deepen the knowledge of IBD epidemiology in Tuscany with particular attention to temporal trends.
Tuscan residents hospitalized for IBD from January 1st 2000 to March 18th 2016 were selected from the regional hospital discharge database based on ICD-9-CM codes. Age-specific and standardized hospitalization rates were calculated together with case-fatality rates (CFRs). A time-trend analysis was performed; whereas, prognostic factors of death were investigated through univariable and multivariable analyses.
The average standardized hospitalization rates for invasive meningococcal diseases (IMD), invasive pneumococcal diseases and invasive diseases due to H. influenzae from 2000 to 2016 were 0.6, 1.8, and 0.2 per 100,000, respectively. The average CFRs were 10.5%, 14.5% and 11.5% respectively with higher values in the elderly. Older age was significantly associated with higher risk of death from all IBD. A significant reduction in hospitalization rates for IMD was observed after meningococcal C conjugate vaccine introduction. The Annual Percentage Change (APC) was -13.5 (95% confidence interval (CI) -22.3; -3.5) in 2005–2013 but has risen since that period. Furthermore, a significant increasing trend of invasive diseases due to H. influenzae was observed from 2005 onwards in children 1–4 years old (APC 13.3; 95% CI 0; 28.3).
This study confirms changes in the epidemiology of invasive diseases due to H. influenzae and IMD. Furthermore, attention is called to the prevention of IBD in the elderly because of the age group's significantly higher rate of hospitalizations and deaths for all types of IBD.
Invasive bacterial diseases (IBD) are an important public health issue and cause a serious burden in several countries, particularly among young persons and the elderly. The most common IBD clinical manifestations are septicemia and meningitis, with the first occurring even without the presence of the second, accounting together for 170,000 annual deaths worldwide [1, 2]. Meningitis is a severe infection of the meninges and can rapidly progress from the sudden onset of non-specific symptoms (including fever, nausea, vomiting, and neck stiffness) to death in 24 h. As many as 20–50% of the survivors may have permanent sequelae, such as hearing loss, amputation, or neurological and behavioral impairments [3,4,5]. Septicemia is a life-threatening condition that can cause tissue damage, organ failure, and death [6]. The three most common etiological agents of IBD are Haemophilus influenzae, Streptococcus pneumoniae, and Neisseria meningitidis. They are carried asymptomatically in the human nasopharynx and transmitted by aerosol droplets or secretions during close or lengthy contact. H. influenzae may be non-encapsulated (non-typeable) or encapsulated with a polysaccharide capsule. In the latter case, six serotypes (a-f) are recognized, with H. influenzae serotype b (Hib) being the most pathogenic. Before the availability of Hib conjugate vaccines in the late 1990s, H. influenzae was causing the majority of bacterial meningitis and IBD, while it now accounts for only 2–7% of cases [7,8,9,10]. Invasive diseases due to H. influenzae are most common in children below 5 years of age and rare in adolescents and adults [11]. With the decline of cases due to H. influenzae, S. pneumoniae became the leading cause of IBD, especially among children younger than 5 years of age, elderly and people with chronic diseases or immuno-compromised [12]. Out of 93 known S. pneumoniae serotypes, only 20–30 are responsible for the majority of invasive pneumococcal diseases (IPD) worldwide [13]. Furthermore, S. pneumoniae is known as the leading cause of community-acquired pneumonia, and the lethality associated with bacteriaemic pneumococcal pneumonia is 6–20% [14]. The 7-valent pneumococcal conjugate vaccine (PCV7) for infants and young children was licensed in Europe in 2001, leading to a reduction in hospitalization rates for all S. pneumoniae –related diseases in children [15,16,17]. With the replacement of PCV7 with the 13-valent pneumococcal conjugate vaccine (PCV13) in 2010–2011 and the extension of vaccination to elderly and people at risk [18], it is likely that N. meningitidis will become a major agent of IBD worldwide, in particular if vaccination coverage will not reach high levels. Moreover, N. meningitidis is the only bacterium able to produce epidemics of meningitis. Incidence rates of invasive meningococcal diseases (IMD) are generally highest in children below 5 years of age followed by adolescents and young adults. Thirteen serogroups of N. meningitidis can be identified on the basis of the polysaccharide capsule but only six are responsible for most IMD cases: A, B, C, W135, X, and Y [2]. The distribution of these serogroups varies geographically, likely because of differences in population immunity and environmental factors. In Europe, serogroup B (MenB) is the main cause of IMD, accounting for up to 80% of cases in some countries, followed by serogroups C (MenC) and Y (MenY) [2, 19]. With the introduction of the MenC conjugate vaccine (MCC) in the immunization schedules of several European countries, a significant decline in the incidence of MenC diseases has occurred over the past 10 years [19,20,21].
Nowadays, in Italy, PCV13 is given, together with Hib vaccine, in three doses within the first year of age (3, 5–6, 11–13 months of age) [22]; whereas MCC is delivered in a single dose at 13–15 months of age with a catch-up at 12–14 years of age [23]. In 2014–2015, a multicomponent MenB vaccine (4CMenB) has been introduced and is currently given in two, three or four doses depending on child's age at the time of vaccination. Nevertheless, until the launch of the new National Immunization Plan in January 2017, MenB vaccine has been offered free of charge only in few Italian regions [23,24,25]. Incidence rates for IBD are relatively low in all Italian regions. In 2014, national incidence rates were as follows: 0.17 per 100,000 for invasive H. influenzae diseases, 0.27 per 100,000 for IMD, and 1.57 per 100,000 for IPD. However, an increase in the incidence rates of invasive H. influenzae diseases was registered from 2011 [26]. Moreover, in particular in the Tuscany region, an excess of IMD cases was registered from January 2015 onwards [27]. Health authorities responded to the increasing number of cases with an extraordinary vaccination campaign [28, 29].
The present study aims to study the epidemiology of IBD in Tuscany, with particular attention to temporal trends. For the three types of IBD, over a 16-year period, we estimated: i) age-specific and standardized hospitalization rates and their relationship with vaccination coverage; ii) age-specific case-fatality rates (CFR); and iii) time-trends of age-specific hospitalization rates. Additionally, prognostic factors for death were investigated.
Study design, case definition and data collection
This is a historical cohort study conducted in Tuscany, an Italian region with a population of 3.7 million people. The regional hospital discharge database was accessed in order to identify patients admitted with a diagnosis of IBD due to N. meningitidis, S. pneumoniae or H. influenzae from January 1st 2000 to March 18th 2016. The database relies on the International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) system that is currently used in Italy [30]. The project has been approved by the Ethics Committee of the "Azienda Ospedaliero-Universitaria Meyer" of Florence on October 4th 2010 (authorization number 2010/7880).
Eligible patients were retrospectively searched in the regional hospital discharge database using the following ICD-9-CM codes for primary or secondary diagnosis:
IMD: 036.0 (meningococcal meningitis), 036.1 (meningococcal encephalopathy), 036.2 (meningococcal septicemia), 036.40 (unspecified meningococcal carditis), 036.41 (meningococcal pericarditis), 036.42 (meningococcal endocarditis), 036.43 (meningococcal myocarditis), 036.81 (meningococcal optic neuritis), 036.82 (meningococcal arthropathy), 036.89 (other specified forms of meningococcal infection), and 036.9 (unspecified meningococcal infection);
IPD: 038.2 (pneumococcal septicemia), and 320.1 (pneumococcal meningitis);
invasive H. influenzae diseases: 038.41 (septicemia due to H. influenzae), and 320.0 (Haemophilus meningitis).
Hospital discharge records of eligible patients included the following data: details of the admitting hospital, age, gender, nationality, region of residence, date of admission, one primary and five secondary diagnoses, surgical and other procedures, date and type of discharge. All hospitalized patients living in Tuscany and discharged from a Tuscan hospital, with one of the ICD-9-CM codes described above in primary or secondary diagnosis, were included in the study. One day-hospitalizations were excluded. A cross-check of data included in the hospital discharge record of included patients was conducted to avoid duplicates due to patients being transferred from a hospital to another one. With respect to the population at risk, the number of people residing in Tuscany during the study period (at January 1st of each year), stratified by age and gender, was taken from the Italian National Statistical Institute (ISTAT) database [31]. Data on vaccination coverage of H. influenzae (Hib), S. pneumoniae (PCV) and N. meningitidis (MCC) at 24 months of age were searched on the Italian Ministry of Health database and ICONA studies [32,33,34].
Continuous variables were summarized using mean ± standard deviation (SD), whereas categorical variables were reported as absolute and relative frequencies. Statistical analyses were carried out using STATA software version 13.1 except as otherwise specified.
Age-specific hospitalization rates (HR), stratified for type of invasive disease, were calculated as cases per 100,000 together with 95% confidence intervals (95% CI). Resident population at January 1st of each year was used for the calculation. Standardized hospitalization rates (SHR) and their 95% CI were considered to compare hospitalizations between years. Standardization was performed with respect to age and gender using the Italian population (latest available data from 2016) as external weight. 95% CIs were obtained as standardized rate ± 1.96*standard error (SE), and SE was calculated with the following Armitage and Berry formula [35]:
$$ \sqrt{\frac{\sum \frac{\left({T}_ix{N}_i^2 xK\right)}{n_i}}{{\left(\sum {N}_i\right)}^2}} $$
Ti = crude rate for each age class
Ni = size of the reference population in each age class
ni = size of the study population in each age class
K = multiplication factor (100,000).
Patients were stratified into seven age groups following the classification adopted by the National Surveillance System: <1, 1–4, 5–9, 10–14, 15–24, 25–64, ≥65 years of age. Case-fatality rates (CFR) were calculated dividing the number of deaths by the number of cases and were presented stratified for type of invasive disease.
Time-trend analysis
Changes in overall and age-specific HRs from 2000 to 2015 were assessed by joinpoint (JP) regression according to Kim's method [36]. Data regarding 2016 were excluded from this analysis because they were partial. A joinpoint represents the time point when a significant trend change is detected. Time changes were expressed in terms of Annual Percent Change (APC) with 95% CI. The null hypothesis was tested using a maximum of three changes in the slope with an overall significance level of 0.05 divided by the number of joinpoints in the final model. Joinpoint Regression program version 4.3.1 was used to carry out the analysis.
Health outcome analysis
Chi-square test was used to assess the relationships between the health outcome (dead, alive) and independent variables such as: age (<5, 5–17, 18–64, ≥65 years of age), gender, nationality (Italian, non-Italian) and Charlson Index [37]. The Charlson Index was used as a proxy of comorbidity and calculated according to the algorithms developed by Quan et al. [38] looking at Enhanced ICD-9-CM Coding in primary and secondary diagnoses. The STATA additional software package "charlson" was used for this calculation. Variables with p-values below 0.25 at the univariable analysis were entered in a logistic regression model. The results were shown in terms of Odds Ratio (OR) and 95% CI. All the analyses and models were carried out stratifying by type of invasive disease.
A total of 1691 patients with IBD were hospitalized between January 1st 2000 and March 18th 2016 at 52 hospitals in Tuscany region; 107 were residents outside the region and excluded from further analysis. Among the residents, 288 were children and adolescents (<18 years), and 1296 were adults and elderly (≥18 years). Additional file 1: Table S1 summarizes the study population's characteristics. Most children and adolescents were admitted to the Anna Meyer Children's University Hospital in Florence (n = 127; 44.1%). More than half of the hospitalizations of children and adolescents were due to IMD (n = 153; 53.1%) followed by IPD (n = 112; 38.9%). Children's and adolescents' mean age was 4.8 ± 5.2 years and the mean hospitalization length was 16 ± 15.4 days; 56.9% of them were males and 92% were Italian. Among adults and elderly, IPD was the most common IBD (n = 1017; 78.5%). Their mean age was 61.3 ± 18 years, males and females were equally represented and the majority was Italian. The mean length of hospitalization was 12 ± 8.9 days. Invasive H. influenzae diseases accounted for approximately 8% of cases in both groups. Data on vaccination coverage at 24 months of age in Tuscany were available for Hib from 2003 to 2015. Hib average coverage during the study period was of 94% but two drops to 88% were registered in 2003 and 2010. On the contrary, data for PCV and MCC vaccination coverage were retrieved only for 2003, 2008 and 2013–2015. Vaccination coverage was very low in 2003 and 2008, whereas it was between 92.9 and 94% for PCV and between 87.8 and 90.9% for MCC in 2013–2015.
More than one-quarter (n = 86; 26.1%) of the 330 hospitalizations for IMD occurred in children less than 4 years of age. No child <1 year of age died, while five died in the 1–4 years age group, yielding a CFR of 8.2% in this age group. People from 15 to 24 and from 25 to 64 years of age accounted for almost half (n = 158; 47.9%) of IMD hospitalizations, with CFRs of 8.0% and 9.6% respectively. The highest number of deaths (11 deaths out of 42 cases) was registered in elderly (≥65 years of age) with a CFR of 26.2%. More than half (n = 581; 51.5%) of the 1129 hospitalizations for IPD occurred in people ≥65 years of age with more than three-quarters (n = 122; 75.3%) of all deaths due to this infection, yielding a CFR of 21%. Similarly, almost half (n = 61; 48.8%) of the 125 hospitalizations for invasive H. influenzae diseases occurred in the ≥65 years age group. All deaths for this IBD were registered in this age group (CFR: 21.3%) except one in the 1–4 years age group (CFR: 11.1%) (Table 1). During the whole study period, CFR due to IMD had the lowest mean value (10.5%, min 0.0%, max 33.3%). IPD and invasive H. influenzae diseases presented a mean CFR of 14.5% (min 7.8%, max 23.8%) and 11.5% (min 0.0%, max 50.0%) (Fig. 1). Table 2 reports SHRs for type of IBD and year. The mean SHR for IMD was 0.6 per 100,000, with two peaks around 1 per 100,000 in 2004–2005 and in 2015. IPD had a mean SHR of 1.8 per 100,000, ranging from 1.4 per 100,000 in 2010 to 2.2 per 100,000 in 2012. During the study period, invasive H. influenzae diseases had SHRs always below or equal to 0.3 per 100,000 (mean SHR: 0.2 per 100,000), with higher values in 2000–2005, 2012 and 2014 (Table 2). In the latter, the age-specific HRs for children <1 year of age and from 1 to 4 years of age showed peaks up to 7 per 100,000 children. The relationship between data on vaccination coverage and age-specific HRs for invasive H. influenzae diseases is shown in Fig. 2. On the contrary, the relationship between PCV and MCC vaccination coverage and HRs was not shown because of lack of data.
Table 1 Absolute and relative frequencies of cases and deaths, and CFRs by age group due to IBD in Tuscany from 2000 to 2016a (N = 1584)
Case-fatality rates (CFRs) per type of IBD from 2000 to 2016
Table 2 Summary table of HRs (per 100,000) standardized for age and gender
Age-specific HRs of invasive H. influenzae diseases in children below 4 years of age and Hib vaccination coverage in Tuscan children from 2003 to 2015
The joinpoint analysis by type of IBD and age group is shown in Tables 3, 4 and 5. As far as IMD is concerned, considering all ages, joinpoints were found in 2005 and 2013 showing a significant decreasing trend in 2005–2013 (APC -13.4; 95% CI -22.3; -3.5; p < 0.0001) and a positive but non-significant trend before and after this time. No joinpoints or significant trends were shown for children up to 9 years old. However, while infants <1 year of age had a positive APC (APC 3.1; 95% CI -5.1; 12.1), HRs for the 1–4 years and 5–9 years age groups tended to decrease (APC -5.9 and -7.6 respectively). The greatest reduction was found in the 10–14 years age group, whose APC decreased from 7.6 (95% CI -27.7; 60.2) in 2000–2004 to -8.5 (95% CI -15.9; -0.4; p < 0.0001) afterwards. No joinpoints were found in the other age groups. With respect to IPD, the overall trend was stable over time (APC 0.7; 95% CI -0.6; 2.1). Infants <1 year of age had an overall significant decreasing trend (APC -9.4; 95% CI -16; -2.3; p < 0.0001) (data not shown) with a joinpoint in 2004 and an APC of -16.7 in 2004–2015 (95% CI -28; -3.6; p < 0.0001). Children from 1 to 4 years of age also had a negative but non-significant APC (APC -7; 95% CI -13.7; 0.2). Joinpoints were found in the 10–14 years age group in 2002, 2009 and 2012, with positive and negative fluctuating trends. Young adults in the 15–24 years age group showed a joinpoint in 2003: APC changed from 95.2 (95% CI -50.5; 670.3) in 2000–2003 to -13.4 (95% CI -22.5; -3.3; p < 0.0001) afterwards. No joinpoints were found in adults and elderly (positive non-significant trends). For invasive H. influenzae diseases, there was an overall decreasing but non-significant trend (APC -1.5; 95% CI -5.3; 2.4). Joinpoints were found in all age groups except for adults and elderly. Infants <1 year of age had a significant decreasing trend in 2000–2008 (APC -17.5; 95% CI -30.5; -2.1; p < 0.0001) followed by an increasing but non-significant trend. Joinpoints were found in children from 1 to 4 years of age in 2002 and 2005: a significant increase was seen in 2005–2015 (APC 13.3; 95% CI 0.0; 28.3; p < 0.0001). On the contrary, the trend decreased for children 5–9 years old in 2008–2015 (APC -15.1; 95% CI -24; -5.1; p < 0.0001). In the 10–14 years age group, joinpoints were found in 2003, 2009 and 2012, with positive and negative fluctuating trends. Young adults in the 15–24 years age group had an overall significant decreasing trend (APC -3.4; 95% CI -5.5; -1.4; p < 0.0001) (data not shown) with a joinpoint in 2003 and an APC of -19.3 in 2003–2015 (95% CI -30.4; -6.5; p < 0.0001). An exemplary time-trend change is illustrated in Additional file 2: Figure S1.
Table 3 Findings of the joinpoint regression for IMD by age group
Table 4 Findings of the joinpoint regression for IPD by age group
Table 5 Findings of the joinpoint regression for invasive H. influenzae diseases by age group
The univariable analyses showed that there were more deaths in older patients, in females, in Italian patients, and in patients with comorbidities. Among all deaths due to IMD, only 9 (29%) were registered in the pediatric age as compared to 22 (71%) in people ≥18 years of age (p = 0.004). The same was observed for IPD (p < 0.0001) and invasive H. influenzae diseases (p = 0.003). Age was entered into all IBD logistic regression models and, in light of small absolute frequencies, it was classified as <18 years vs ≥18 years of age for IPD and as <65 years vs ≥65 years of age for invasive H. influenzae diseases. Gender was not shown to be significantly associated to death, but it was kept in the IMD and IPD logistic regression models. Among patients who died for IPD, there were more Italians than non-Italians (p = 0.052). Significant associations were not found for the other two IBD, therefore nationality was entered only into the logistic regression model for IPD. As for the Charlson Index, a smaller percentage of people without comorbidities was observed among patients who died as compared to those who survived. For instance, 90% of IMD patients discharged alive had no comorbidities in comparison to 74.2% among who died (p = 0.014). Significant associations were also seen in the other two IBD and Charlson Index was entered into all models as a dichotomous variable (presence/absence of comorbidities) (Table 6).
Table 6 Findings of univariable analyses performed by chi-square test
The final logistic regression models for type of invasive disease are shown in Table 7. They were overall statistically significant and demonstrated that older age was a risk factor for dying for all IBD. In particular, the IMD model was entirely explained by the variable age, with older age (≥65 years old) associated to a higher risk of dying (OR 3.13; 95% CI 1.14; 8.60) compared to adults (18–64 years old). As for the IPD model, adults and elderly (OR 17.43; 95% CI 2.40; 126.35) and patients with comorbidities (OR 1.45; 95% CI 1.03; 2.04) had a higher risk of death compared to patients in pediatric age and without comorbidities respectively. Similarly, to IMD, the model for invasive H. influenzae diseases was entirely explained by the variable age: elderly (≥65 years of age) showed a higher risk of death (OR 17.94; 95% CI 2.16; 148.71).
Table 7 Findings of multivariable logistic regression models
This historical observational study assessed the trends of IBD hospitalizations in a population of 3.7 million people over the past 16 years. The findings highlighted decreasing hospitalization rates for IPD in infants <1 year of age, likely because of the effects of PCV vaccines. A similar reduction in S. pneumoniae-related hospitalizations in children was shown in two other Italian regions, Friuli Venezia Giulia and Veneto [17]. Although data were limited, high PCV vaccination coverages retrieved for 2013–2015 support the conclusion. The data showed that, in the last few years, hospitalization rates for invasive H. influenzae diseases increased, as also reported by the National Surveillance System over the past 4 years. In fact, an increasing trend, although non-significant, was observed in infant <1 year of age from 2008 onwards, and a significantly increasing trend was shown in the 1–4 years age group from 2005 onwards. One can speculate that this increase was linked, among other reasons, to a drop in Hib vaccination coverage. Thanks to the high vaccination coverage reached in almost all Italian regions, cases attributable to serotype b, the only ones preventable by vaccination, are rare [26]. Nevertheless, the common belief that invasive H. influenzae diseases have disappeared after the introduction of the vaccine is not supported and should go no further. In fact, on average, one case out of nine per year (11.1%) occurred in children <4 years of age in 2012–2015. Overall, HRs for IPD were in line with national estimates, whereas HRs for invasive H. influenzae diseases, although in line with estimates reported by some Italian regions, were higher than national estimates presumably affected by underreporting [26]. A more complex situation emerged for IMD, whose HRs appeared to be, on average, higher than the national ones (0.6 per 100,000 vs 0.3 per 100,000) [26], with two high peaks in 2004–2005 and in 2015. Considering that the time-trend analysis also revealed joinpoints around those years, we can assume two crucial changes in the epidemiology of IMD in Tuscany. The first likely reflects the introduction of MCC vaccine in 2005 with ensuing reduction of hospitalizations especially in children from 1 to 4 years of age (although non-significant) and in adolescents from 10 to 14 years of age, both primary targets of the vaccination campaign. These findings were also in line with a recent time-trend analysis investigating the impact of the MCC vaccine introduction in Italy [39]. Nevertheless, not enough data on vaccination coverage were available and thus no relationship could be determined in changes in age-specific HRs in relation to vaccination. The second change is in line with the increasing number of cases reported in young adults by the National Surveillance System in 2015 and in the first quarter of 2016 [26]. This change was also seen in our data (in particular regarding the 15–24 years age group) with nine and three cases respectively compared to one case per year in the previous 2 years (data not shown). This brought to the implementation of extraordinary measures and to vaccinate, free of charge, people between 20 and 45 years of age and, under request and with co-payment, people above 45 years of age [28, 29]. It is important to note that, although no death occurred, 25 of the 330 IMD cases (7.6%) affected infants <1 year and that the trend of HR for this age group was the only positive one in the pediatric age (although non-significant). In Tuscany, as well as in other Italian regions, infants <1 years of age are not covered by the MCC vaccination that is offered at 13th months of age. The introduction of MenB vaccine for infants from 2014 onward can be expected to produce some benefits in years to come.
Regarding IBDs in adults, a note on IMD is warranted. Although the increased number observed in 2015–2016 has been mainly registered in young adults, adults from 25 to 64 years old contributed with half of the total cases per year: 16 cases in 2015 and six in the first quarter of 2016 as compared to three cases per year in the previous 2 years (data not shown). Time-trend analysis for this age group showed a positive but non-significant trend. Attention should be paid to this fast-changing situation also considering that adults above 45 years of age are not strictly a target of extraordinary MCC vaccination measures.
A deeper analysis is needed for the elderly. Despite the decreasing trend in IBD in vaccinated children (direct effect) and in unvaccinated subjects of all ages (indirect effect) [40,41,42,43], the level of disease control in the elderly is suboptimal. In fact, although no time-trends were observed, the absolute number of cases and CFRs remain high for all three IBD. For example, 42 of the 330 IMD cases (12.7%) occurred in people ≥65 years old, contributing with the highest number of deaths (11 out of 31 deaths due to IMD). This would appear even more relevant looking at invasive H. influenzae diseases: 61 out of the 125 cases (49%) were registered in elderly as well as all deaths except one. The age-specific CFRs were higher than European estimates reported by the European Centre for Disease Prevention and Control (ECDC) [1]: 26.2% vs 17.1% for IMD, 21% vs 14.3% for IPD, and 21.3% vs 15% for invasive H. influenzae diseases. Furthermore, in all three regression models, older age was significantly associated with a higher risk for death. Several studies in the literature found older age and Charlson comorbidities to be independent predictors of death [44, 45]. This evidence calls for actions to extend out high vaccination coverage in elderly and people with chronic conditions to prevent the occurrence of such IBD, in particular IPD, in these groups.
In fact, while MCC and Hib vaccines are not given to the elderly, the 23-valent pneumococcal polysaccharide vaccine (PPV23) first and then the PCV13 have been used in Italy in people ≥65 years of age. Nevertheless, albeit data on adults vaccination coverage are not routinely collected, local or regional studies suggest that vaccination coverage in people ≥65 years of age is quite low, varying from 0.7 to 50% between 2004 and 2008 [22], when the PPV23 was administered to elderly [46]. A recently concluded randomized trial in the Netherlands provide the missing evidence of PCV13 efficacy in preventing vaccine-type IPD in older adults [47]. This evidence, together with an increased awareness of the problem of IBD in the elderly, should support policy makers in their decisions on the implementation of pneumococcal vaccination. This is envisaged also because vaccinating elderly against S. pneumoniae may prevent not only IPD but also pneumonia, which causes 1 million hospitalizations in Europe, costs about €10 billion per year, and represents the most frequent cause of death from infection [48, 49].
A positive point of the present study is that it gives a thorough overview of the epidemiology of IBD yielding also CFRs. This measure is widely used as an outcome indicator to make comparisons over time and between areas as its calculation is less prone to bias [50]. Moreover, compared to the National Surveillance System which included invasive diseases since 2007 only [22], this study was able to provide a picture of IBD over a wider time window overcoming the apparent increase in the number of cases which occurred in the National Surveillance System. Our study presents also some limitations. One is concerning the general sparse-cells problem that makes joinpoint models unstable and may justify fluctuating positive and negative trends in children from 10 to 14 years of age both for IPD and invasive H. influenzae diseases. Furthermore, it should be kept in mind that in presence of zero-counts, small absolute fluctuation may have a great relative impact. Another limitation was the lack of information on death after discharge from the hospital or transfer to a private non-accredited institute even though we consider it improbable to affect our results as the number of transferred patients was very low. Another limitation is represented by the lack of information on serotype distribution and individual vaccination records that could have allowed a more in-depth analysis of the relationship between vaccination and the occurrence of diseases. Additionally, over the past 16 years, diagnostics methods have become more sensitive and life support techniques could have influenced the health outcome. Finally, R-square values of regression models were very low, ranging between 4 and 16% because of the limited number of variables available from the hospital discharge records.
In conclusion, the results of our study contribute to the body of evidence on the epidemiology of IBD and the importance of ensuring high vaccination coverage. A constant effort should be made to attain and maintain high vaccination coverage among children in order to further reduce the incidence of all IBD and control apparent increasing trends. In particular, attention should be paid to the increase in invasive H. influenzae diseases and to the changing epidemiological scenario of IMD. Furthermore, actions should be also promoted to implement vaccination in the elderly. Eventually, prevention remains the most valuable tool to help reducing the burden of IBD in all age groups.
This study shows changes in the epidemiology of IBD, particularly due to H. influenzae and N. meningitidis, and high rates of hospitalizations and deaths for all types of IBD in elderly. This evidence calls for actions in order to maintain high vaccination coverage among children and promote vaccination in older age groups.
4CMenB:
multicomponent MenB vaccine
APC:
Annual Percentage Change
CFR:
case-fatality rates
ECDC:
Hib:
H. influenzae serotype b
hospitalization rates
IBD:
invasive bacterial diseases
ICD-9-CM:
International Classification of Disease, Ninth Revision, Clinical Modification
IMD:
invasive meningococcal diseases
IPD:
invasive pneumococcal diseases
JP:
joinpoint
MenC conjugate vaccine
MenB:
meningococcal serogroup B
MenC:
meningococcal serogroups C
meningococcal serogroup Y
PCV13:
13-valent pneumococcal conjugate vaccine
PCV7:
7-valent pneumococcal conjugate vaccine
European Centre for Diseases Prevention and Control (ECDC). Surveillance of invasive bacterial diseases in Europe. Stockholm. 2011:2013. https://doi.org/10.2900/1510.
World Health Organization (WHO). Meningococcal meningitis. Fact Sheet N°141 2015. http://www.who.int/mediacentre/factsheets/fs141/en/. (Accessed 3 June 2016).
Rosenstein NE, Perkins BA, Stephens DS, Popovic T, Hughes JM. Meningococcal disease. N Engl J Med. 2001;344:1378–88. https://doi.org/10.1056/NEJM200105033441807.
Stein-Zamir C, Shoob H, Sokolov I, Kunbar A, Abramson N, Zimmerman D. The clinical features and long-term sequelae of invasive meningococcal disease in children. Pediatr Infect Dis J. 2014;33:777–9. https://doi.org/10.1097/INF.0000000000000282.
van de Beek D. Progress and challenges in bacterial meningitis. Lancet. 2012;380:1623–4. https://doi.org/10.1016/S0140-6736(12)61808-X.
Center for Disease Control and Prevention (CDC), World Health Organization (WHO). Epidemiology of Meningitis Caused by Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Lab. Methods Diagnosis Meningitis caused by Neisseria meningitidis, Streptococcus pneumoniae, Haemophilus Influ. 2nd ed., 2011.
Ladhani S, Slack MPE, Heath PT, von Gottberg A, Chandra M, Ramsay ME, et al. Invasive Haemophilus influenzae disease, Europe, 1996-2006. Emerg Infect Dis. 2010;16:455–63. https://doi.org/10.3201/eid1603.090290.
Schuchat A, Robinson K, Wenger JD, Harrison LH, Farley M, Reingold AL, et al. Bacterial meningitis in the United States in 1995. Active surveillance team. N Engl J Med. 1997;337:970–6. https://doi.org/10.1056/NEJM199710023371404.
Thigpen MC, Whitney CG, Messonnier NE, Zell ER, Lynfield R, Hadler JL, et al. Bacterial meningitis in the United States, 1998-2007. N Engl J Med. 2011;364:2016–25. https://doi.org/10.1056/NEJMoa1005384.
Brouwer MC, van de Beek D, Heckenberg SGB, Spanjaard L, de Gans J. Community-acquired Haemophilus influenzae meningitis in adults. Clin Microbiol Infect. 2007;13:439–42. https://doi.org/10.1111/j.1469-0691.2006.01670.x.
Center for Disease Control and Prevention (CDC). Haemophilus influenzae. In: Hamborsky J, Kroger A, Wolfe S, editors. Epidemiology and Prevention of Vaccine-Preventable Diseases. 13th ed., Washington D.C.: Public Health Foundation; 2015.
Center for Disease Control and Prevention (CDC). Pneumococcal Disease. In: Hamborsky J, Kroger A, Wolfe S, editors. Epidemiology and Prevention of Vaccine-Preventable Diseases. 13th ed., Washington D.C.: Public Health Foundation; 2015.
Johnson HL, Deloria-Knoll M, Levine OS, Stoszek SK, Freimanis Hance L, Reithinger R, et al. Systematic evaluation of serotypes causing invasive pneumococcal disease among children under five: the pneumococcal global serotype project. PLoS Med. 2010;7 https://doi.org/10.1371/journal.pmed.1000348.
Kalin M, Örtqvist Å, Almela M, Aufwerber E, Dwyer R, Henriques B, et al. Prospective study of prognostic factors in community-acquired Bacteremic pneumococcal disease in 5 countries. J Infect Dis. 2000;182:840–7. https://doi.org/10.1086/315760.
Griffin MR, Zhu Y, Moore MR, Whitney CG, Grijalva CG. U.S. hospitalizations for pneumonia after a decade of pneumococcal vaccination. N Engl J Med. 2013;369:155–63. https://doi.org/10.1056/NEJMoa1209165.
Martinelli D, Pedalino B, Cappelli MG, Caputi G, Sallustio A, Fortunato F, et al. Towards the 13-valent pneumococcal conjugate universal vaccination. Hum Vaccin Immunother. 2014;10:33–9. https://doi.org/10.4161/hv.26650.
Baldo V, Cocchio S, Gallo T, Furlan P, Clagnan E, Del Zotto S, et al. Impact of pneumococcal conjugate vaccination: a retrospective study of hospitalization for pneumonia in north-East Italy. J Prev Med Hyg. 2016;57:E61–8.
Gladstone RA, Jefferies JM, Faust SN, Clarke SC. Pneumococcal 13-valent conjugate vaccine for the prevention of invasive pneumococcal disease in children and adults. Expert Rev Vaccines. 2012;11:889–902. https://doi.org/10.1586/erv.12.68.
Halperin SA, Bettinger JA, Greenwood B, Harrison LH, Jelfs J, Ladhani SN, et al. The changing and dynamic epidemiology of meningococcal disease. Vaccine. 2012;30(Suppl 2):B26–36. https://doi.org/10.1016/j.vaccine.2011.12.032.
Trotter C, Samuelsson S, Perrocheau A, de Greeff S, de Melker H, Heuberger S, et al. Ascertainment of meningococcal disease in Europe. Euro Surveill. 2005;10:247–50.
Trotter CL, Ramsay ME. Vaccination against meningococcal disease in Europe: review and recommendations for the use of conjugate vaccines. FEMS Microbiol Rev. 2007;31:101–7. https://doi.org/10.1111/j.1574-6976.2006.00053.x.
National Center of Epidemiology and Surveillance and Health Promotion (CNESPS), National Institute for Health (Istituto Superiore di Sanità ISS). Data and evidences for the use of anti-pneumococcal vaccines in risk subjects of all ages and for the eventual vaccination of the elder population. Rome: 2013.
Italian Ministry of Health, National Institute for Health (Istituto Superiore di Sanità ISS). Italian Vaccine Action Plan 2016-2018. 2015.
Gasparini R, Amicizia D, Lai PL, Panatto D. Meningococcal B vaccination strategies and their practical application in Italy. J Prev Med Hyg. 2015;56:e133–E139.
Watson PS, Turner DPJ. Clinical experience with the meningococcal B vaccine, Bexsero(®): prospects for reducing the burden of meningococcal serogroup B disease. Vaccine. 2016;34:875–80. https://doi.org/10.1016/j.vaccine.2015.11.057.
National Institute for Health (Istituto Superiore di Sanità ISS). Surveillance data on invasive bacterial diseases updated to April 4th 2016. Rome: 2016.
Stefanelli P, Miglietta A, Pezzotti P, Fazio C, Neri A, Vacca P, et al. Increased incidence of invasive meningococcal disease of serogroup C / clonal complex 11, Tuscany, Italy, 2015 to 2016. Euro Surveill. 2016;21:1–5. https://doi.org/10.2807/1560-7917.ES.2016.21.12.30176.
Istitution of Tuscany Region. Vaccination campaign against meningococcal C - Measures of prophylaxis and prevention 2016. http://www.regione.toscana.it/-/campagna-contro-il-meningococco-c.
Italian Ministry of Health. Circolare n. 5783 del 1° marzo 2016. 2016.
Italian Ministry of Health. Decreto Ministeriale 28 Dicembre 1991. Istituzione della scheda di dimissione ospedaliera. 1992 G.U. 17 gennaio, n.13.
Italian National Statistical Institute (ISTAT). Geo-Demo ISTAT: maps, population and demographic statistics. http://demo.istat.it/index.html.
Italian Ministry of Health. Immunization in pediatric age: Vaccination coverage. http://www.salute.gov.it/portale/documentazione/p6_2_8_3_1.jsp?lingua=italiano&id=20.
National Institute for Health (Istituto Superiore di Sanità ISS). ICONA 2003: National infant vaccination coverage in Italy. Rapporti ISTISAN 03/37 2003.
National Institute for Health (Istituto Superiore di Sanità ISS). ICONA 2008: National vaccination coverage survey among children and adolescents. Rapporti ISTISAN 09/29 2009.
Armitage P, Berry G, Matthews J. Statistical methods in medical research. 3rd ed. Oxford: Blackwell Scientific Publications; 1994.
Kim HJ, Fay MP, Feuer EJ, Midthune DN. Permutation tests for joinpoint regression with applications to cancer rates. Stat Med. 2000;19:335–51.
Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–83.
Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi J-C, et al. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care. 2005;43:1130–9.
de Waure C, Miglietta A, Nedovic D, Mereu G, Ricciardi W. Reduction in Neisseria meningitidis infection in Italy after meningococcal C conjugate vaccine introduction: a time trend analysis of 1994-2012 series. Hum Vaccin Immunother. 2016;12:467–73. https://doi.org/10.1080/21645515.2015.1078951.
Trotter CL, Maiden MCJ. Meningococcal vaccines and herd immunity: lessons learned from serogroup C conjugate vaccination programmes. Expert Rev Vaccines. 2014;8:851–61. https://doi.org/10.1586/erv.09.48.
Trotter CL, McVernon J, Ramsay ME, Whitney CG, Mulholland EK, Goldblatt D, et al. Optimising the use of conjugate vaccines to prevent disease caused by Haemophilus influenzae type b, Neisseria meningitidis and Streptococcus pneumoniae. Vaccine. 2008;26:4434–45. https://doi.org/10.1016/j.vaccine.2008.05.073.
Heymann D, Aylward B. Mass vaccination in public health. Control Commun. Dis. Man. 19th ed., Washington D.C.: American Public Health Association; 2008.
Weil-Olivier C, Gaillat J. Can the success of pneumococcal conjugate vaccines for the prevention of pneumococcal diseases in children be extrapolated to adults? Vaccine. 2014;32:2022–6. https://doi.org/10.1016/j.vaccine.2014.02.008.
Kanerva M, Ollgren J, Virtanen MJ, Lyytikäinen O, Prevalence Survey Study Group. Risk factors for death in a cohort of patients with and without healthcare-associated infections in Finnish acute care hospitals. J Hosp Infect. 2008;70:353–60. https://doi.org/10.1016/j.jhin.2008.08.009.
Falsetti L, Viticchi G, Tarquinio N, Silvestrini M, Capeci W, Catozzo V, et al. Charlson comorbidity index as a predictor of in-hospital death in acute ischemic stroke among very old patients: a single-cohort perspective study. Neurol Sci. 2016;37:1443–8. https://doi.org/10.1007/s10072-016-2602-1.
Germinario C, Tafuri S, Vece MM, Prato R. Pneumococcal polysaccharide immunization strategies in Italian regions. Ig Sanita Pubbl. 2010;66:659–70.
Bonten MJM, Huijts SM, Bolkenbaas M, Webber C, Patterson S, Gault S, et al. Polysaccharide Conjugate Vaccine against Pneumococcal Pneumonia in Adults. N Engl J Med. 2015;372:1114–25. https://doi.org/10.1056/NEJMoa1408544.
Ludwig E, Bonanni P, Rohde G, Sayiner A, Torres A. The remaining challenges of pneumococcal disease in adults. Eur Respir Rev. 2012;21.
Gibson GJ, Loddenkemper R, Lundbäck B, Sibille Y. Respiratory health and disease in Europe: the new European lung white book. Eur Respir J. 2013;42.
Daly E, Mason A, Goldacre M. Using case fatality rates as a health outcome indicator: literature review | National Centre for health outcomes development (NCHOD). 2000.
The dataset analysed during the current study is not publicly available but are available from the corresponding author on reasonable request.
Anna Meyer Children's University Hospital, Department of Health Sciences, University of Florence, Florence, Italy
Elena Chiappini
& Maurizio de Martino
Department of Epidemiology & Biostatistics, VU University Medical Center (VUmc), Amsterdam, the Netherlands
Federica Inturrisi
Tuscany Regional Government Department of Right to Health and Solidarity Policies, Information Technology Section, Florence, Italy
Elisa Orlandini
Department of Experimental Medicine, University of Perugia, Piazzale Gambuli 1, 06132, Perugia, Italy
Chiara de Waure
Search for Elena Chiappini in:
Search for Federica Inturrisi in:
Search for Elisa Orlandini in:
Search for Maurizio de Martino in:
Search for Chiara de Waure in:
EC, FI and CdW designed the study. EO collected the data, FI performed the statistical analysis and EC, CdW and MdM contributed to data interpretation. FI and CdW drafted the manuscript and EC, EO and MdM critically revised it. All authors have read and approved the final manuscript and agreed to be accountable for all aspects of the work.
Correspondence to Chiara de Waure.
Ethics declarations
The project has been approved by the Ethics Committee of the university hospital "Azienda Ospedaliero-Universitaria Meyer" of Florence on October 4th 2010 (authorization number 2010/7880). Data were obtained from an electronic database using ICD-9-CM. A specific informed consent was not considered necessary according to the Ethics Committee approving the project, because this was a descriptive epidemiological study performed on administrative data, which are routinely collected from any hospitalized patient after obtaining his/her consent. Furthermore, no human experimentation was foreseen by the study and patient information was anonymized and de-identified prior to analysis.
The authors declare that they have no competing interests. CdW is Associate Editor of BMC Health Services Research and BMC Infectious Diseases.
Additional file 1:
Table S1. Characteristics of patients with IBD resident in Tuscany in 2000–2016 (N = 1584). (DOCX 12 kb)
Figure S1. Joinpoint regression of IMD HRs, all ages, years 2000–2015. (TIF 61 kb)
Hospitalization, trend | CommonCrawl |
\begin{document}
\title{Learning and Trust in Auction Markets}
\author{ Pooya Jalaly \thanks{Email: \texttt{[email protected]}. Work supported in part by NSF grant CCF-1563714, ONR grant N00014-08-1-0031, and a Google Research Grant. } \and Denis Nekipelov \thanks{Department of Economics, University of Virginia, Monroe Hall, Charlottesville, VA 22904, USA. Email: \texttt{ [email protected]}. Work supported in part by NSF grant CCF-1563714 , and a Google Research Grant. }
\and
\'{E}va Tardos\thanks{Department of Computer Science, Cornell University, Gates Hall, Ithaca, NY 14853, USA, Email: \texttt{[email protected]}. Work supported in part by NSF grant CCF-1563714, ONR grant N00014-08-1-0031, and a Google Research Grant.} }
\maketitle \begin{abstract} Auction theory analyses market designs by assuming all players are fully rational. In this paper we study behavior of bidders in an experimental launch of a new advertising auction platform by Zillow, as Zillow switched from negotiated contracts to using auctions in several geographically isolated markets. A unique feature of this experiment is that the bidders in this market are local real estate agents that bid in the auctions on their own behalf, not using third-party intermediaries to facilitate the bidding. To help bidders, Zillow also provided a recommendation tool that suggested the bid for each bidder.
Our main focus in this paper is on the decisions of bidders whether or not to adopt the platform-provided bid recommendation. We observe that a significant proportion of bidders do not use the recommended bid. Using the bid history of the agents we infer their value, and compare the agents' regret with their actual bidding history with results they would have obtained consistently following the recommendation. We find that for half of the agents not following the recommendation, the increased effort of experimenting with alternate bids results in increased regret, i.e., they get decreased net value out of the system. The proportion of agents not following the recommendation slowly declines as markets mature, but it remains large in most markets that we observe. We argue that the main reason for this phenomenon is the lack of trust that the bidders have in the platform-provided tool.
Our work provides an empirical insight into possible design choices for auction-based online advertising platforms. While search advertising platforms (such as Google or Bing) allow bidders to submit bids on their own and there is an established market of third-party intermediaries that help bidders to bid over time, many display advertising platforms (such as Facebook) optimize bids on bidders' behalf and eliminate the need for the bidders to bid on their own or use intermediaries. Our empirical analysis shows that the latter approach is preferred for markets where bidders are individuals, who don't have access to third party tools, and who may question the fairness of platform-provided suggestions. \end{abstract}
\section{Introduction} Auction theory analyses market design by assuming all players behave fully rationally, and the outcome is a (Bayes) Nash equilibrium of the game. Some recent work, such as \cite{NST:2015}, suggests to replace this assumption for repeated games (such as ad-auctions) with modeling the players as learners, assuming they use a form of no-regret learning in repeated games to find the best strategy to play. No-regret learning can be implemented with less available information, but the assumption is still modeling agents with a strong form of rationality, and using high level of data analytics. This assumption is well justified in auctions where bidders use strong tools for data analytics, or have a market place of third-party intermediaries to facilitate the bidding, and bidders invest enough in the market to pay for the analytics. Bidders in these market places use algorithmic bidding tools, and such tools do optimize rationally, and hence are much less subject to human biases.
In this paper we study bids in an experiment with auction where bidders are humans, each with relatively small investment, and were not using algorithmic tools. In such auctions, the reality may challenge the above assumptions. The actions of human bidders, not assisted by strong analytical tools, may be affected by issues not considered in classical auction theory: the bidders will lack information and lack the attention needed to make rational decisions, and may also be effected by behavioral biases that are not accounted for in the standard theory.
Our data comes from an experimental launch of a new advertising auction platform by Zillow. Zillow.com is the largest residential real estate search platform in the United States used by 140 million of people each month according to the company's statistic \cite{zillow-statistics}. Viewers are looking to buy or sell houses, want to see available properties, typical prices, and learn about market characteristic. The platform is monetized by showing ads of real estate agents offering their services. Historically, Zillow used negotiated contracts with real-estate agents for placing ads on the platform. In the experiment we study, several geographically isolated markets were switched from negotiated contracts to auction based pricing and allocation. The auction design used was a form of generalized second price, very similar to what is used in many other markets except that agents were paying for impressions (and not only for clicks). A unique feature of this experiment is that the bidders in this market are local real estate agents that bid in the auctions on their own behalf. This is unlike many existing online marketplaces where bidders use third-party intermediaries to facilitate the bidding. Along with the new auction platform, Zillow provided the bidders the recommendation tool that suggested the bid for each bidder based on the inputs of this bidder's target parameters (e.g. \etedit{impression volume,} budget, and competing bids of other bidders).
The main focus of our paper is understanding the bidder's decision whether or not to adopt the platform-provided bid recommendation. Bidders were required to log into the system if they wanted to change their bid, and once they logged in, the system offered a suggested bid: the recommended bid for maximizing the obtained impression volume for the bidders' budget. Our main conclusion is that the bidders lack trust in the recommendation, and both bidders and the platform would have been better off if the system didn't offer bidders the opportunity to avoid the recommended bid.
Our main metric for the analysis of the bidders' bid sequences is the average regret measuring the difference between the average utility that was achieved by the bid sequence and the utility from the best fixed bid in the hindsight. A fixed bid is not actually optimal in this environment, as bidding differently on different days of the week would have been beneficial for the bidders. However, regret for a fixed bid with hindsight seems to be the most fair comparison, as the recommendation tool was essentially making fix bid recommendations (a limitations of its design), and the bidder's behavior seems to be well approximated with looking for a good fix bids: they didn't update bids frequently enough to take advantage of the opportunities varying with the days of the week.
We observe that a large proportion of bidders does not use the recommended bid to make bid changes immediately following the introduction to the new market. Our main finding is that the observed bid sequences that deviated from recommended bids didn't typically result in smaller average regret than the recommended bid. In other words, even though many bidders attempted to adjust bids on their own, and they had the opportunity to gain over the recommended bid in terms of overall value (by bidding differently on weekdays and weekends), many bidders would have been better off by always using recommended bid. The number of bidders who outperformed the bid recommendation in our study is about the same as the number of those who did worse. The proportion of bidders following the recommendation slowly increases as markets mature, but it remains large in most markets that we observe. We argue that the main reason for this phenomenon is the lack of trust that the bidders have in the platform-provided tool.
An important challenge in understanding the data is the uncertainty in the bidders values for each impression. Most bidders in this market are limited by small budgets, and as a result their bid, even if interpreted as a fully rational learning behavior, may not have enough information to infer the value of the bidders. We use the the learning based inference of \cite{NST:2015} to infer the agent's values based on their bidding behavior, and then compare this regret, to the regret on this inferred value, had the follow the platform's bid recommendation. Note that the regret inferred for the bidders behavior is a lower bound on the actual regret: the value we infer for the player is the value that would give them the smallest possible regret. Our results show that under this value, they would have less regret had they adopted the platform recommendation.
Our work provides an empirical insight into possible design choices for auction-based online advertising platforms. Search advertising platforms (such as Google or Bing) allow bidders to submit bids on their own and there is an established market of third-party intermediaries that help bidders to bid over time. This market design allows for more complex bidding functions, for example allowing agents to express added value for subsets of the impression opportunities via multiplicative bid-adjustments (e.g., based on the age of the viewer). In contrast, many display advertising platforms (such as Facebook) use a simpler bidding language, and optimize bids on bidders' behalf based solely on their budgets. This eliminates the need for the bidders to bid on their own or use intermediaries. Our empirical analysis shows that, despite its more limited expressibility, the latter approach may be preferred for markets where bidders are individuals who don't have access to third party tools, and who may question the fairness of platform-provided suggestions.
\paragraph{\textbf{Related Work}} Number of papers in recent years focus on estimating bidder's value in online advertising auctions. One the earliest papers in this area is \cite{AtheyNekipelov}, who study bidder values in Bing's GSP auction for search ads. They use the equilibrium characterization of GSP, and find that the bidders utility functions are smooth and strongly convex as the function of their bids. This ensures that if bids are at equilibrium, bidder valuations are uniquely identifiable based on the bid. In dynamic or new markets where interaction is repeated, the value of each individual interaction is small, and bidders are not (yet) knowledgeable about the system, it is better to model players as learners. \cite{NST:2015} suggest this assumption for studying bidders in Bing's market for search ads, and shows how to infer values based on bidding behaviour under this weaker assumption on the outcome. To evaluate the effectiveness of the bid-recommendation tool for the bidders, we need to estimate their value for impressions. We do this using the methodology developed in \cite{NST:2015}, making the assumption that agents are low-regret learners.
In a recent paper \cite{NisanN16} the authors report on a human subject experiment on the reliability of regret based inference. In their experiment, human subjects participated in bidding games (including the GSP format). The paper asks the question if human behavior can be modeled as no-regret learning, and to what extent the inference based on the low regret assumption can be used to recover the bidders value from their bidding behavior. Their finding are mixed. They find the players whose value is high behave rationally, experiment with the best bidding behaviour, achieve very low regret, and inference based on this assumption accurately recovers their value. The finding for players with low types is less positive. Some participants in the experiments were given values so low, that rational behavior would have them drop out of the auction (or bid so low they are guaranteed to lose). Such low value players were frustrated by the game, and behaved rather irrationally at times. It is interesting to think about the contrast between the participants in the Nisan-Noti laboratory experiment and the agents in the Zillow field experiment. The players in the Nisan-Noti experiment were paid to participate (even if frustrated), while in contrast participation in Zillow's ad-auctions is optional, and for typical real estate agents Zillow may not be the main channel through which they get the ``client leads''. Frustrated agents can drop out, and in fact, there were many short lived agents in our data. We focus our analysis on agents that stay in the system for an extended period of time. In addition, we note that \cite{NisanN16} as well as \cite{NST:2015} identify the value with smallest regret error relative to the value. This method favors larger values, that make the relative error smaller. Using the value with smallest absolute error would have made the identification mode successful even for bidders with relatively smaller values. This is the method we will use in this paper.
A distinctive feature of Zillow's field experiment was that the bidders were provided the bid recommendation tool. Such tools are not unique to Zillow and are routine in search advertising on Google and Bing such as in \cite{google-tool}. \cite{LinkedIn16} report experiments with adding bid-recommendations at LinkedIn, where they find that the advertisers and the publisher both benefit having recommendations. On those platforms there is also a set of third-party tools (not provided by platforms) that facilitate bidding. However, on Zillow the bidders were faced with the choice between trusting the recommendation provided by Zillow's tool or learning on their own. Our work thus bridges the gap betwen the literature on empirical analysis of algorithmic learning in games and the literature on recommender systems without trust (e.g. see \cite{ricci:2010} for a survey of the latter).
\section{Auction design} \paragraph{\textbf{Background.}} Zillow.com is the largest residential real estate search platform in the United States. Like all of the big search platforms, it is ``consumer-facing": it offers consumers free interactive information about the current real estate listings, historical data on real estate sales, real estate valuation for the properties that are not currently for sale, the background local demographic information that includes average incomes, age and education of residents as well as the measures of qualities of local schools. Similar to other Internet based services,
Zillow's business is based on monetization of consumer page views by selling advertisement opportunities.
Whenever a consumer clicks on particular property from the list of the search results, the page that opens gives details on the property. In addition, on the right side and at the bottom of the page, a list of real estate agents are shown. A sample page is shown on Figure \ref{fig1}. The first agent on the list is always a listing agent for the property that that consumer has clicked on (if the property is for sale). The rest of the agents listed (highlighted as ``premier agents'') are real estate agents advertising their services to consumers viewing the listed property page.
At the time period when we collected our data, only three premier agents were shown per page and the list of premier agents was identical on the side and at the bottom of the page.
\begin{figure}
\caption{Sample property search result on Zillow.com with highlighted premiere agents}
\label{fig1}
\end{figure}
Premier agents buy impressions from specific zip codes that they selected. Once the system identifies the agent's eligible for a given impressions, the agents shown on the page in random order.
As a result, in expectation all impressions in the given zip code have the same value (in contrast with Google where bidders are ordered by their bid, and higher positions are viewed as better).
Historically, as on many other consumer platforms on the Internet, these impressions were sold through negotiated contracts between Zillow and the real estate agents.
In order to improve fairness and efficiency of the market, in the period between 2012 and 2015 Zillow engaged in a series of long-term experiments in which for a set of select geographically distinct zip codes across the United States, the negotiated contract system for selling impressions was replaced with auction-based system. The goal of these large scale experiments was twofold. On the one hand, the platform wanted to study the revenue impact of switching from rigid system of long-term fixed price contracts to a dynamic auction system that allows the impression prices to change in real time. On the other hand, they wanted to use the bids to understand the discrepancy between the negotiated prices and the agent's values for impression.
The auction format used for the experiments was generalized second prize (GSP) auction for the slots of available ad positions.
Recall that the ad delivery system randomizes the order of the agents and so keeps all impressions the same expected quality for each agent.
Maybe the most natural mechanism would be to simply select the three highest bids, and price them at the uniform price of the 4th bid. However, \cite{chawla:16} shows that the use of uniform price mechanism may limit the quality of inference of bidder values from the bids, and a form of discriminative price mechanism, such as the GSP, allows for better inference of values. To distinguish the agents by their bid, for the agents with a bit lower bids, Zillow's mechanism decreased the probability of the agent's ad getting places on the page.
Below we outline the structure of the implemented mechanism and the structure of static best responses of bidders.
\paragraph{\textbf{The mechanism.}} \label{Sec:AuctionBestResponses} The mechanism implemented in the large scale experiments run by Zillow can be characterized as \begin{itemize} \setlength{\itemsep}{0pt}\setlength{\parsep}{0pt}\setlength{\parskip}{0pt} \item real-time, with ads places in real time as opportunities arise \item weighted, higher bids have a higher chance of being shown, \item agents are paying per impression, unlike the per-click payment used for search ads \item generalized second price auction \item with reserve prices and budget smoothing. \end{itemize} In this mechanism each bidder $i$ submits her bid $b_i$ and (typically per month) budget, though agents are allowed to submit budgets for shorter periods, and some do. The auction platform takes fixed position ``weights'' $\gamma_j$. These weights are used by the platform to induce the dependence of the impression allocation on the rank of each bidder's bid: weights sum to 1, and the agents with $j$th highest bid is shown with probability $ 3\gamma_j$ on a page, so 3 agents are shown on each page. The weights used in the system are $0.33, 0.28, 0.22, 0.17$, so only the highest 4 bids have a chance of being shown.
Real-estate agents typically have relatively small budgets, so the system needs to implement a form of ''budget-smoothing'' or pacing to have the agents participate in auctions evenly across the time interval. For each bidder $j$, the system determines a budget-smoothing probability $\pi_j$, that in expectation will ensure that the agents don't overspend their budget.
The mechanism then implements the GSP {\it taking into account bidders' budgets.} In each impression opportunity this is done as follows: \begin{enumerate} \setlength{\itemsep}{0pt}\setlength{\parsep}{0pt}\setlength{\parskip}{0pt} \item The advertiser database is queried for all bidders eligible to be shown in a given impression opportunity, that is, advertisers bidding on the ZIP code of the property \item For the set of eligible agents, the system determines the filtering probabilities for budget smoothing. To do this, the system needs to estimate the expected spent of the agent given her bid $b_i$. This turns out to be a fixed point computation, as each expected spend depends also on the filtering probabilities of other agents. \item the remaining bidders are ranked by the order of their bids \item Three of the top four remaining bidders are displayed, so that the probability that the ad of bidder ranked $j$ is shown is $3\gamma_j$ \item If the bidder ranked $j$ is shows, she pays the bid of the bidder ranked $j+1$ (or the reserve price) for the impression. \end{enumerate} To avoid having to deal with ties, Zillow effectively implemented a priority order via assigning each agent ''quality score" very close to 1, to determine the order of agents with identical bids.
We describe the details of budget smoothing, and analyze the properties of the auction mechanism from the {\it expected impression} perspective. Although, this gives a simplified view of the system (e.g. avoiding dynamics and the fluctuation of the impression volume), that allows to discuss the incentives in the auction mechanism in the most crystallized way. If $NP$ is the total number of
page views in thousands over the time period of the bidder $i$ and $\bar{B}_i$ is the total budget of the bidder, then the {\it per thousand impression opportunity budget} can be expressed as $ B_i=3\,\bar{B}_i\, \big/\, NP. $ This reflects the fact that on each
page there are 3 ad impression opportunities for 3 available slots and each bidder can only appear in one of the three slots. We further assume that the bidders are risk-neutral and have quasi-linear utility, so their utility is characterized by {\it values per thousand impressions} $v_i$.
In the per impression context the budget smoothing process can be characterized by a probability $\pi_i$ that determines eligibility of bidder $i$ for an auction. {\it Conditional on not being budget smoothed}, bidder $i$ participates in a generalized second price auction. Since all bidders may be budget smoothed, the group of auction participants becomes random. As a result, the auction outcomes for bidder $i$ (her rank $j$, and so the probability $\gamma_j$ that her ad is being shown, as well as her price per impression) are random. Taking expectation over the budget smoothing of other agents, we can construct the expected auction outcomes: $\mbox{eCPM}_i(b_i)$ is the expected cost per a thousand impression opportunities. Note that this expectation is also a function of the other bids $b$ and the budget smoothing probabilities $\pi$ of opponent bidders, a dependence that we will make explicit in notation when useful. Similarly, the probability of the impression being shown is also a random variable which we denote by $eQ_i(b_i)$. We note that both these objects are defined via the conditional expectation, i.e. they determine the spend and the impression probability conditional on bidder $i$ not being budget smoothed. The impression eligibility probability is determined by the balanced budget condition: \begin{equation}\label{eq:smoothing} \pi_i=\min\left\{1,\;\frac{B_i}{\mbox{eCPM}_i(b_i)}\right\}. \end{equation}
Note that if the expected per impression spent does not exceed the per impression budget, then such a bidder should not ever be budget smoothed.
\paragraph{\textbf{Best Responses}.} Now we can characterize the structure of the bidder's utility and the best response bid. When $\mbox{eCPM}_i(b_i)<B_i$, the expected utility per impression is determined solely by the expected auction outcomes, i.e. $$ u_i(v_i,B_i,b_i)=eQ_i(b_i)\,v_i-\mbox{eCPM}_i(b_i). $$
Classical analysis of such an economic system would assume that the outcome is a Nash equilibrium of the game, where each
bidder maximizes her utility by setting the bid. Due to lack of space we will skip here the details of the resulting equilibrium analysis.
We note that identifying the right bid can be challenging for the bidders, who are real-estate agents, and often don't have the data or the analytic tools to do a good job optimizing their bid. To help the advertisers, the platform provides a bid recommendation, suggesting the bid that maximizes the expected number of impression the agent can achieve based on her budget.
\paragraph{\textbf{Budget smoothing (Pacing).}} \label{Sec:BudgetSmoothing} Budget smoothing is one of the most technically challenging components of the implemented experimental mechanism. For large advertisers on platforms like Google, budgets typically play a minor role essentially working as ``insurance'' from surges in spent generated by idiosyncratic events. In contrast, advertisers on Zillow are real-estate agents, and typically have small monthly budgets relative to per impression cost. In particular, our data shows that virtually all bidders are budget smoothed over certain periods.
In these settings a carefully constructed system for budget smoothing is essential.
Take the vector of current eligible bids and budgets for bidders $i=1,\ldots,I$. The idea for recovering the filtering probabilities will be to solve for a set of probabilities $\pi_1,\ldots,\pi_I$ such that (\ref{eq:smoothing}) is satisfied for each $i$. The main ingredient in computing $\pi_i$ is the expected cost per opportunity $\mbox{eCPM}_i(b_i)$ with expectation taken with respect to the distribution of $\pi_1,\ldots,\pi_I$ of other bidders, and $eQ_i(b_i)$ is the probability of being shown conditional on not being filtered out. In Appendix \ref{smoothing:appendix} we give the algorithmic description of the computation of $\pi_i$'s.
\section{Market environment}
\paragraph{\textbf{Data description.}} In the period between 2014 and 2016\footnote{We withhold the exact start and end date of the experiments for confidentiality purposes} Zillow has run a series of large-scale experiments where the mechanism for selling ad impressions was switched from negotiated contracts to auctions. During this period, Zillow defined markets as zip codes. The real estate agents were not allowed \pjedit{to} use targeting within the zip code (i.e. by advertising only on the pages of specific real estate listings) or buy ``packages'' of impressions across multiple zip codes. In fact, a vast majority of real estate agents that we observe in the data only compete for a single zip code.
The experiments were rolled out in a large number of clearly isolated markets with zip codes coming from either separate states or sufficiently far from each other within the state. In order to facilitate this experimental mechanism rollout, Zillow has engaged in a significant marketing and training effort to ensure that real estate agents in the experimental markets understand the structure of the auction and to help agents learn how to bid well in the auction,
akin the set of tutorials provided by Google for its advertisers.
Our data comes from 57 experimental markets from Zillow. These markets are close to the entirety of markets that were switched to auction-based prices and allocations. We dropped a few markets from the data that
either did not have reliable data due to possible malfunction of the implementation of the auction mechanism, or the data span was too short to produce reliable results.
For data confidentiality purposes all dollar-valued variables, such as prices and budgets, in our data were re-scaled and do not reflect the actual amounts.
Our structural analysis in this paper will be concentrated on the much smaller set of 6 very active markets. Our goal in selecting these markets was to (i) ensure that those markets are sufficiently geographically separated,
yet have the typical statistical properties of all markets, such as impression prices, in all characteristics except the activity of agents; (ii) have sufficient number of observations of bid changes for different bidders. To understand the behavior of bidders in these auctions, we need to infer their values. Agents that are not active on the platform, do not provide enough data to reasonably estimate their value. As it will become clear in Section 4, the second part is crucial for us to be able to produce reliable evaluation of payoffs and bidding strategies of the agents.
To select the 6 markets, we first filter the markets where the number of participating agents is 15 or less, which gets down the number of regions from 57 to 12. The average frequency of bid changes per day in these regions was 0.43. The 6 markets we use for our structural estimation, are the markets with above average number of bid changes.
{\small \begin{center} \begin{table}[!ht] \begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline Variable & \multicolumn{4}{|c|}{Selected Regions} & \multicolumn{4}{|c|}{All Regions} \\ \cline{2-9}
& Mean & STD & 25\% & 75\% & Mean & STD & 25\% & 75\%\\ \hline Number of agents & 19.33 & 2.29 & 18.0 & 20.75 & 10.74 & 5.32 & 6.0 & 15.0 \\ \hline Bids & 23.94 & 14.14 & 17.3 & 19.31 & 18.79 & 9.71 & 14.06 & 23.84 \\ \hline Budgets (daily) & 8.92 & 3.0 & 6.31 & 11.71 & 9.22 & 4.96 & 5.9 & 12.44 \\ \hline Active duration & 85.97 & 10.38 & 78.03 & 91.5 & 96.04 & 20.74 & 86.53 & 107.33 \\ \hline Reserve price & 11.65 & 7.03 & 7.99 & 10.74 & 13.39 & 9.55 & 6.0 & 16.93 \\ \hline Bid changes & 0.73 & 0.26 & 0.54 & 0.85 & 0.22 & 0.28 & 0.03 & 0.32 \\ \hline Impression Volume & 5.52 & 1.72 & 4.25 & 5.89 & 5.29 & 3.19 & 2.73 & 6.89 \\ \hline \end{tabular} \caption{Basic information for all regions and the selected regions. The impression volume's unit is 1000 impressions per day. Bids, reserve prices and budgets are also per 1000 impressions. Active Duration is in days. Bid changes is the average number of agents that change their bid per day in a region. The average of bids, budgets and active duration has been calculated for each agent first and then their averages has been taken over all agents of each region.} \label{Tab:AllRegionsBasicInfo} \end{center} \end{table} \end{center} } In Table \ref{Tab:AllRegionsBasicInfo} we display basic statistics from our data. The table contrast the statistics for our selected 6 markets with the statistics of the entire set of 57 markets that we analyzed. Presented statistics correspond to the number of participating bidders, their bids and budgets, period of time when the bidder is active in an auction (i.e. has the bid above the reserve price and did not exhaust the budget), daily frequency of bid changes and the market reserve prices. The Table indicates that our selected 6 markets have similar values of monetary variables (e.g. average bid of 23.9 in selected markets vs. 18.8 in the entire set of markets and average daily budget of 8.9 in selected markets vs 9.2 in the entire set of markets). However, there are two key statistics that are clearly different in our selected set of markets: the time-average number of participating bidders (19.3 in selected markets vs. 10.7 in the entire set of markets) and the average frequency of bid changes (.7 per day in the selected markets vs. .3 per day in the entire set of markets).
This means that while the per impression values in our selected markets should be similar with those in the entire set of experimental markets, our selected markets have more intense competition and, therefore, we would expect smaller markups of the bidders and faster convergence of bidder learning towards the optimal bids.
Our data also contains the predicted monthly impression volume for the month ahead (from the start date of each bid). This estimated impression volume is an input in Zillow's bid recommendation tool whose goal is to compute the bid that will guarantee that the bidders wins impressions uniformly over time, and wins the maximum expected number of impressions for the future month for the given budget. To address the issue of uniform service Zillow implemented budget smoothing explained in the previous section.
An important takeaway from Table \ref{Tab:AllRegionsBasicInfo} is the magnitude of the relative scale of bids and budgets of bidders across the markets. As it is typical in display advertising the impression bids are expressed per mille (1000 impressions). To make the monthly budgets comparable, we convert the budgets to the same scale. The striking fact is the small scale of budgets relative to the bids. We note that we computed daily budgets for bidders using the period they were active, which is often only a subset of time. Note also that for most bidders these are their true monthly budgets (i.e., they did not increase their budgets to gain more impressions). This in contrast with the evidence from sponsored search advertising (on Google or Bing) where budgets declared to the advertising platform are often not binding. This means that the issue of smooth supply of impressions to each agents becomes one of the central issues of the platform design. The platform needs to engage in active management of eligibility of bidders for auction impressions to ensure that each bidder participates in the auctions at uniform rate over time.
Due to limitations of the data collection, we do not have the data for eligible user impressions for the entire duration of our auction dataset. In order to properly analyze the auctions, we need the data on the impressions for each bidder for which that bidder was {\it eligible} including those that she wasn't served.
For most of the period we only have Zillow's estimate for user impressions, and only have the actual impression volume for three months. For this period we noticed that impression volume fluctuates with the days of the week, as shows on Figure \ref{Fig:ImpressionFluctuations}, while the estimated impression volume doesn't show such fluctuation.
\begin{figure}
\caption{Impression volume fluctuations in weekdays shown for 6 regions with the most number bid changes. The impression volume of each region is normalized by the average daily impression volume of that region.}
\label{Fig:ImpressionFluctuations}
\end{figure}
To address this data deficiency, we take the predicted volume of eligible impressions (which is the most reliable proxi for the total number of eligible ad impressions). Using the seasonal modulation (mostly reflecting the intra-week changes of the impression volume) that we observe in the detailed impression records, we augment the impression volume predictions to produce a more reliable proxi for daily user impressions. This generates a realistic pattern for daily impressions for the entire time period that we observe.
\paragraph{\textbf{Data Processing.}}
Zillow's experiments were designed not just to evaluate the performance of auctions in selected markets per se, but also to compare key characteristics of monetization and impression \etedit{sales} in the incumbent mechanism with negotiated contracts and the new auction mechanism. To provide data for credible comparison of these variables (which we do not analyze in this paper) Zillow did not convert entire markets to auctions. Instead, a fixed proportion of impressions was reserved for fixed price contracts and the remaining inventory was released to the auction-based platform. In each market Zillow selected several agents that were brought to the auction platform (and who were not allowed to buy impressions from fixed contracts in the same markets). Towards the end of our period of observation, more agents were getting enrolled in the auction markets. For most of those new agents the period of observation is too short for statistically valid inference. As a result, we chose to drop such short-living agents.
In our structural inference we study the success of agents' bid adjustment over time. We find that a key characteristic of agents is the frequency by which they update bids. On Figure \ref{Fig:HistBidChangeFrequencies} we plot the histogram of the distribution of daily frequency of bid changes for all agents across 6 markets. The histogram shows fairly spread distribution of frequencies, close to uniform between the once-a-month update to once a week update with some agents updating the bids more frequently.
\begin{figure}
\caption{Histogram of bid change frequencies across the 6 selected regions.}
\label{Fig:HistBidChangeFrequencies}
\end{figure}
However, the analysis of frequency of bid changes within individual markets shows much less concordance in the bid update frequency across bidders. In fact the frequency of bid changes turns out to be the key variable that allows us to cluster the bidders into distinct types. We run k-mean clustering algorithm using the bid change frequency variable to partition the agents inside each region into 3 clusters, agents in cluster 1 and cluster 3 have the lowest and the highest number of bid changes per day respectively. This allows us to give a straightforward interpretation to the types identified by the cluster. If the bid changes are triggered by the benefit of the bid change out-weighting the cost of the bid change, then the three clusters can be interpreted as identifying the the bidder with high, medium and low cost of bid changes.
The results of clustering are demonstrated in Table \ref{Tab:RegionBasicInfo}. The results show fairly balanced cluster sizes across markets with the high and medium cost clusters containing the largest number of bidders and the lowest cost cluster containing the smallest number of bidders (less than a quarter of bidders).
{\small \begin{table}[!ht] \begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline Region \# & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline Number of agents & 20 & 23 & 18 & 21 & 16 & 18 \\ \hline Average bid changes per day & 1.23 & 0.91 & 0.65 & 0.54 & 0.54 & 0.51 \\ \hline Selected agents in clusters 1,2,3 & 5,5,4 & 5,6,4 & 6,5,1 & 3,2,2 & 8,3,3 & 3,6,1 \\ \hline \end{tabular} \caption{Basic information for the 6 selected regions. Filtering removes the agents that are in auction for less than 7 days or do not change their bid at all.} \label{Tab:RegionBasicInfo} \end{center} \end{table} }
\paragraph{\textbf{Auction simulator.}} Unfortunately, the system only logged actual delivered impressions for each bidder, and didn't log if a bidder lost the impression due to being outbid, or being filtered, etc. Given this limited data, we cannot evaluate system directly from the data. Instead, we need to simulate it by emulating Zillow's budget smoothing process. This simulation is the key component of our data processing strategy that will further allow us to perform structural estimation.
We calculate the outcome of the ad-auctions separately for each region. For each day, we find the set of agents who have active bids in that day, as well as their bid, their daily budget (which is calculated from the monthly budget and the leftover from the previous days), and the region's reserve price. We use this information to simulate the auction for that day by calculating each agent's filtering probability ($\pi_i$), expected payment and expected share of that day's impression volume. It is important to note that while we don't analyze data from short lived agents, they are included in the simulation.
One of the main challenges of the system as well as our simulation is to find the filtering probabilities. We describe the details of the algorithm in Appendix \ref{smoothing:appendix}. Recall that the filtering probabilities need to satisfy equation \ref{eq:smoothing}, where $\mbox{eCPM}_i(b_i)$ is a function of the filtering probabilities of other agents. We find an approximate solution to these fixed point equations by minimizing the sum of squares using Newton's method. $$ \sum^I_{i=1}\left(\pi_i-\min\left\{1,\;\frac{B_i}{\mbox{eCPM}_i(b_i;\pi_1,\ldots,\pi_I)}\right\} \right)^2 $$ with respect to $\pi_1,\ldots, \pi_I$.
The main time consuming step of each iteration is the need to compute the expected cost (eCPM) and expected impressions share (eQ) for all agents with the given probabilities $\pi$. In Appendix \ref{smoothing:appendix} we show how to do this in $O(|I|)$ time (assuming that the bids of the agents are sorted).
\section{Empirical analysis of market dynamics} \paragraph{\textbf{The dynamics of the adoption of the bid recommendation tool.}} We study the adoption of
the bid recommendation tool
designed to help the bidders to transition from the fixed price contracts to the auction-based system for impression pricing and delivery. The recommendation tool provided a simple interface that allowed the bidders to submit their monthly budget for a specific market and the tool would provide the bid that maximizes the number of impressions that could be purchased within the month with the given budget. The tool would adjust the recommendation with any change that occurs in the system, such as the arrival of the new bidder, changes of bids by existing bidders, or the change in the predicted number of market impressions.
During the market rollout Zillow made the agents aware of the tool's existence and explained the principles that were used to design the tool. However, despite the outreach and marketing work when auction platform was introduced to the set of experimental markets, the actual utilization rate of the tool was initially low.
On Figure \ref{Fig:FollowingSugBidOverTimeAllReg} we display the percentage of time when recommended bid was used by a given agent for the bid change over agent's tenure in the auction platform averaged over all agents in the experimental markets. The figure shows that when agents were introduced to the platform they tend to use the recommendation tool for less than 50\% of their bid changes. This number tends to grow to almost 100\% as the agent is exposed to the auction platform for more than 5 months.
\begin{figure}
\caption{Average fraction of time agents follow the recommended bid across the selected regions.}
\label{Fig:FollowingSugBidOverTimeAllReg}
\caption{Average fraction of time agents follow the recommended bid separated by clusters.}
\label{Fig:FollowingSugBidOverTimeClusters}
\end{figure}
Figure \ref{Fig:FollowingSugBidOverTimeClusters} presents the same trend of utilization of the bid recommendation tool but decomposed by clusters. Recall that
we cluster bidders based on their bidding frequency with cluster 1 being the cluster with the lowest frequency of bid changes and cluster 3 being the cluster with the highest frequency of bid changes. Figure \ref{Fig:FollowingSugBidOverTimeClusters} shows the persistence of the trend with the relatively low utilization of the bidding tool when the agents are just introduced to the auction platform with an increased degree of this utilization as the agent spends more time bidding in the auctions. We note that this trend is most rapid for the cluster of bidders with the highest frequency of bid changes. These bidders start using the bid recommendation tool for almost all of their bid changes after 3 months of exposure to the auction platform. At the same time the bidders that are least frequently changing their bids do not get to the point of fully using the recommendation tool even after 5 months of experience with the auction platform.
We further illustrate the persistence of this trend across the 6 markets that we study in the Appendix.
Figure \ref{Fig:FollowingSugBidRegions} in the Appendix confirms the consistency of the aggregate trend of the utilization of bid recommendation tool with those trends in individual markets. Moreover, for some markets the percentage of utilization of the bid recommendation tool is even smaller than that on average especially for the bidders that change their bids the least frequently.
This leads us to two important observations. First, the bidders in all observed markets were willing to ``experiment'' with their bids by deviating from the recommended bid. The proportion of bids devoted to experimentation is large especially when the agents are newly introduced to auction markets and remains large among the bidders that do not frequently change their bids even after 5 months of them bidding on the auction platform. Second, even though the bid recommendation tool was designed to optimize bids on behalf of the bidders, the bidders did not have the full faith that the recommendations benefit them (as opposed to the auction platform). The increasing adherence to the recommended bid is observed only after the bidders experiment with alternative bids for a sufficiently long time.
Next we try to understand if the bidder's learning and experimentation behavior results in improved outcomes, or rather simple helps them trust the platform's recommendation.
\paragraph{\textbf{Trust in system-provided bid recommendations.}} \label{sec:trust} To evaluate the bid adjustment in the Zillow's auction markets we use the methodology developed in \cite{NST:2015}. For understanding the first months of experimenting with auctions, we believe that it is best to model them as off-equilibrium. A characteristic feature of this market that we analyze in this paper is the relatively small stakes (measured in terms of per impression prices relative to the budgets). In such markets exploration is a good way to learn the best response. For agents who change their bid relatively frequently, we model their behavior as {\it no-regret learning} which then allows us to infer their value using the notion of the {\it rationalizable set} from \cite{NST:2015}.
In dynamically changing market bids will vary over time and would not necessarily maximize utility at each instant. Each bidder will then be characterized by two parameters: her value and the average regret that evaluates the success of the dynamic bid adjustment. We measure regret as the difference between the time-averaged utility attained by bidder's bid sequence and the average utility attained by the best fixed bid in hindsight. Average regret of a player reflects the properties of bidder's learning algorithm used.
We now consider a dynamic environment where the active bid of the bidder participates in many auctions for impressions. We assume that time is discrete. At each instance $t$ bidder $i$ with bid $b_{it}$ and outstanding bids of other bidders $\vec{b}_{-i,t}$ faces an allocation $eQ(b_{it},\vec{b}_{-i,t};\theta^t)$ and payment $\mbox{eCPM}(b_{it},\vec{b}_{-i,t};\theta^t)$ produced by auction outcomes for user impressions that arrived at time $t$. Here $\theta^t$ are ``environment" variables that reflect time-varying characteristics such as the rate of arrival of user impressions, budgets and budget smoothing probabilities. In our further discussion (where it does not affect mathematical clarity) we use simpler notation $eQ_{it}(b_{it})=eQ(b_{it},\vec{b}_{-i,t};\theta^t)$ and $\mbox{eCPM}_{it}(b_{it})=\mbox{eCPM}(b_{it},\vec{b}_{-i,t};\theta^t)$ leaving the dependence of allocations and spent from competing bids and environment variables implicit. Then we can express the utility of bidder $i$ at instance $t$ as $$ u_{it}(b_{it},v_i)=v_ieQ_{it}(b_{it})-\mbox{eCPM}_{it}(b_{it}). $$ The notion of utility allows us to define the average regret of bidder $i$. \begin{definition}[Average Regret] A sequence of play that we observe has $\epsilon_i$- average regret for bidder $i$ if: \begin{equation}\label{eqn:eps-regret} \forall b'\in {\mathcal B}: \frac{1}{T} \sum_{t=1}^{T}u_{it}(b_{it},v_i) \geq \frac{1}{T} \sum_{t=1}^{T} u_{it}(b',v_i)-\epsilon_i \end{equation} \end{definition}
The introduced notion of the average regret leads to the following definition of a \emph{rationalizable set under no-regret learning} (or more precisely, small average regret learning).
\begin{definition}[Rationalizable Set] A pair $(\epsilon_i,v_i)$ of a value $v_i$ and error $\epsilon_i$ is a rationalizable pair for player $i$ if it satisfies Equation \eqref{eqn:eps-regret}. We refer to the set of such pairs as the \emph{rationalizable set} and denote it with ${\mathcal NR}$. \end{definition}
To implement the construction of the rationalizable sets we choose a grid over the bid space and construct half-spaces generated by inequalities (\ref{eqn:halfplanes}) for each bid on the selected grid. On Figures \ref{Fig:HistBidChangeFrequencies:1} we show the structure of the rationalizable sets for 3 of the bidders most frequently changing their bids in region 1. The structure of the rationalizable set is similar in all the 6 markets we analyzed, see the corresponding figures \ref{Fig:HistBidChangeFrequencies:12}-\ref{Fig:HistBidChangeFrequencies:56} in the appendix. The vertical axis on these plots is the per impression value of the bidder (expressed in monetary units) while the horizontal axis is the additive average regret.
\begin{figure}
\caption{Rationalizable set for 9 agents most frequently changing bids in region 1}
\label{Fig:HistBidChangeFrequencies:1}
\end{figure}
We note a dramatic difference in the shape of these sets with the rationalizable sets for the bidders in advertising auctions on Bing estimated in \cite{NST:2015}. While the rationalizable sets in \cite{NST:2015} have smooth convex shape, the rationalizable sets for the agents on Zillow are polyhedra. This is due to a much higher degree of uncertainty for the bidders on Bing (induced by variation of estimated clickthrough rates and user targeting across user queries) that smooths out the boundary of the rationalizable set.
Another important observation is the highly concentrated set of hyperplanes that pass through the origin for many the bidders in the observed markets. In fact, as we mentioned before, agents' budget constraints on Zillow are binding for most bidders with the per impression bids exceeding the per impression budgets. This means that for those budget constrained bidders there is a set of bids that correspond to them completely spending the budgets (i.e. that all have identical expected spent). Thus regret of these agents is determined only by how many impressions each fixed bid with a binding budget generates but not their spent implying that $ v_i \frac{1}{T}\sum^T_{t=1}\left(eQ_{it}(b)-eQ_{it}(b_{it})\right)\leq \epsilon_i, $ which for each bid $b$ is the half-space that contains the origin. For bidders who spend their budgets, the small regret constraint of the rationalizable set does not give any upper bound on their valuation: these bidders are constrained by their budget, and not by their value of each impression.
In our further analysis we focus on the specific point of the rationalizable set that corresponds to the pair of value and regret where the observed bid sequence has the smallest possible average regret. Since the average additive regret and the value are expressed in the same monetary units, we can directly compare them. A simple visual analysis of plots of rationalizable sets on Figures \ref{Fig:HistBidChangeFrequencies:12}- \ref{Fig:HistBidChangeFrequencies:56} indicates that while for some bidders the smallest rationalizable average regret is small relative to the corresponding value, there is a large number of bidders with high relative regret. This is particularly pronounced for the bidders with cone-shaped rationalizable sets. From the economic perspective, this shape indicates that a small change in the bid for those bidders from the applied bid would have resulted either in a significant increase in the number of allocated impressions or in a significant drop in the per impression cost.
We want to understand why agents may not be following the platform provided recommendation: do they use a different bidding strategy as that improves their obtained utility, or is it simply a question of lack of trust? We note that the bid-recommendation tool didn't take into account the weekly impression volume fluctuation shown on the figure in Section 3, so with bidding differently on weekdays and weekends, the agents could have done better than the recommendation, and in fact would be able to achieve negative regret.
To evaluate regret, we need to infer the agents value for the impression. The rationalizable set offers a convex set of possible value and regret pairs. We use the value with smallest rationalizable additive regret as our selected value, but to display the values in context, we also want to account for two features. First, it useful to measure regret relative to bidder's value (i.e. the bidders may be prone to evaluate the ``loss" associated with their learning strategies in the increments of the total ``gain"). Second, it is also convenient to normalize the regret by the the number of impressions the agent won, so we measure ``per impression'' regret.
In Table \ref{Table:RegretDiff2} we show summary statistics on whether agents would decrease or increase their regret by not using a platform recommended recommendation in each of the 6 markets. All regrets are computed using the value we inferred using the agent's own bid. Whenever we say that the regret of a bidder's learning strategy is the same as the regret of the recommended bid either her own learning strategy is as good as the recommended bid or that she simply adhered to the recommended bid. The overall distribution of the regret difference indicates that sizeable fractions of bidders both have the regret that exceeds the regret of the recommended bid and the regret that is smaller than that if the recommended bid was used. We also show the same statistics by regions and clusters, where cluster 3 are the agents who update their bids most frequently.
{\small \begin{table} \begin{center}
\begin{tabular}{|l|l|l|l|l|l|l||l|l|l||l|} \hline Region & reg 1 & reg 2& reg 3 & reg 4 & reg 5 & reg 6 & cl 1 & cl 2 & cl 3 & all\\ \hline
worse & 21.4\% & 20\%& 8.3\% &0\% & 21.4\% & 50\% &23.3\% & 25.9\%& 6.7\%& 20.8\%\\ better& 42.9\% & 20\%& 41.7\% & 28.6\% & 21.4\% & 20\%& 46.7\%& 22.2\%&6.7\% & 29.2\%\\ equal & 35.7 \%& 60\%& 50\% & 71\%& 57.1\%& 30\% & 30\% & 51.8\% & 86.7\%& 50\%\\ \hline \end{tabular} \end{center} \caption{The percentage of agents that do worse (or better) with their bids than following the recommendation. The three columns on the right of the table offer aggregate statistics across the 6 markets segmenting agents by the frequency they update.} \label{Table:RegretDiff2} \end{table} }
We further illustrate this point on Figure \ref{Regret:Distribution}. The distribution of differences between the regret of agents' own bidding strategy and the recommended bid is close to symmetric about zero for all markets that we study. This indicates that even though the bidders chose to experiment with their bids, the experimentation did not necessarily lead to an improvement of regret over the recommendation. In fact, about a half of the bidders have worse regret from their deviating strategy than they would have had if they always chose the recommended bid.
\begin{figure}
\caption{Distribution of difference between the regret of own bidding strategy and recommended bid across agents in selected markets separated by the percentage of time the agent follows the recommendation.}
\label{Regret:Distribution}
\end{figure}
One a priori possible explanation for why agents don't follow the platform recommendation is that the recommended bid does not provide satisfactory outcomes for the agents and switching to an alternative bidding sequence improves their long-run performance. Figure \ref{Regret:Scatter} shows that this hypothesis is not consistent with the data. On average, the agents who use the recommended bid less do not show any improvements over the recommended bid measured by the average regret.
\begin{figure}
\caption{Scatter plot of difference between the regret of own bidding strategy and recommended bid across agents and the percentage of time agents use the recommended bid. The dashed line shows the best linear fit.}
\label{Regret:Scatter}
\end{figure}
Combining this information with the previous observation of an increasing trend of utilization of bid recommendation tool, we conclude that the key element that explains our results is the {\it trust} of the agents in the platform-provided bid recommendations. While the bid recommendation tool is optimized for the agents, upon entry to the platform the agents do not trust the tool. Instead, they experiment with alternative bids and compare the performance of those deviating bids with the performance of the recommended bids that they also occasionally choose. Once the agents empirically verify that the tool indeed optimizes the bids on their behalf, they start using the tool for most of the bid changes.
\paragraph{\textbf{Conclusion.}} Our conclusion is that the agents have sacrificed the performance of their advertising budgets over a long period of time just to ensure that choosing the recommended bid over some other alternative bid does not make them worse off. In this case, it would have been optimal both from the perspective of the long-term welfare of the agents and from the perspective of stability of prices on the platform to simply default all bidders to recommended bids.
\appendix \section*{Appendix}
\section{Algorithmic description of computation of budget-smoothing probabilities. }\label{smoothing:appendix} The following steps outline our budget smoothing algorithm: \begin{enumerate} \setlength{\itemsep}{0pt}\setlength{\parsep}{0pt}\setlength{\parskip}{0pt} \item Sort the bidders $i$ by their bid $b_i$ and assume bidders are numbered in this order. \item Construct an array of $2^I$ binary $I$-digit numbers from $\{0,0,\ldots,0\}$ to $\{1,1,\ldots,1\}$\etedit{where the number $N=n^N_1n^N_2 \ldots n^N_i\ldots n^N_I$ will correspond to bidders $j$ with $n^N_j=1$ not being filtered out}. Call the set of elements in this array $\mathcal N$ \item Take a subset of elements of $\mathcal N$ where $i$-th digit is equal to 1. Call this set ${\mathcal N}_i$ \item Let $N=n^N_1n^N_2 \ldots n^N_i\ldots n^N_I$ with $n^N_i=1$ and $n^N_j \in \{0,1\}$ be a specific row in ${\mathcal N}_i$, \etedit{corresponding to the outcome of filtering when the agents with $n_j^N=1$ remained}. \item Include the bid of each bidder $j$ for whom $n^N_j=1$,and determine the price of bidder $i$, calling it $\mbox{PRICE}^N_i$\etedit{which is the maximum of the reserve price, and the bid $b_j$ first agent $j>i$ with $n^N_j=1$. Let $j^N_i$ be the position of agent $i$ after filtering, and let $\gamma_j^N$ the corresponding probability $\gamma_{j^N_i}$}. \item Compute the expected spent as $$ \mbox{eCPM}_i(b_i;\pi_1,\ldots,\pi_I)=\sum\limits_{N \in {\mathcal N}_i}\etedit{\gamma_i^N} \prod_{j \neq i} \pi_j^{n^N_j}(1-\pi_j)^{\etedit{(1-n^N_j)}}\,\mbox{PRICE}^N_i. $$ \item Solve for $\pi_1,\ldots, \pi_I$ by solving a system of nonlinear equations $$ \pi_i=\min\left\{1,\;\frac{B_i}{\mbox{ eCPM}_i(b_i;\pi_1,\ldots,\pi_I)}\right\},\;\;i=1,\ldots,I. $$ For instance, we can find an approximate solution by minimizing the sum of squares \etedit{using gradient decent or Newton's method}. $$ \sum^I_{i=1}\left(\pi_i-\min\left\{1,\;\frac{B_i}{\mbox{eCPM}_i(b_i;\pi_1,\ldots,\pi_I)}\right\} \right)^2 $$ with respect to $\pi_1,\ldots, \pi_I$. \end{enumerate}
The main part of the above iterative algorithm for finding a fixed point is to compute the expected cost (eCPM) and expected impressions share (eQ) for all agents for a given set of probabilities $\pi_1,\ldots,\pi_I$. Considering all subsets of agents, this can take exponential time in number of agents. We to to run the simulations for every day and each region separately and we need to compute these many times to get the filtering probabilities, these calculations can significantly increase the running time of our simulations. Furthermore, for auctions where the number of agents is big, using an exponential time algorithm to calculate the outcome of each iteration is not feasible. In order to get around this issue, we use the fact that in each underlying GSP, only impressions of the first four agents are eligible to be shown, as well as the fact that the filtering probabilities of agents are independent. Our algorithm to find expected cost (eCPM) and expected impressions share (eQ) for given filtering probabilities runs in linear time (in number of agents).
For each agent $i\in[I]$, our algorithm first computes the expected impression share of the agent $eQ_i$ (assuming she has not been filtered). Note that $eQ_i$ only depends on the number of unfiltered agents ($r$) that have a higher bid than $i$ and the \etedit{probability ($\gamma_{r+1}$) associated with} $i$'s rank. We first find the probability that there are exactly $r\in [0,3]$ number of agents with higher bid than $i$, called $p_{i,r}(\pi)$. By multiplying $p_{i,r}(\pi)$ by the impressions of the $(r+1)$-th position ($\gamma_{r+1}$) we can find the expected impression share of agent $i$.
After finding the expected impression share of $i$, we calculate $eCPM_i$ by using $eQ_i$. In order to do this, we use the fact that the filtering probabilities of agents are independent. Furthermore, the expected cost per impression for agent $i$ is only a function of bids of agents who are bidding lower than $i$ and their filtering probabilities. So by calculating the expected cost per impression and multiplying it by the expected impression share of agent $i$, we can calculate her expected payment conditioned on $i$ being in the auction ($eCPM_i$). Recall that for each agent $i$, $eQ_i$ and $eCPM_i$ are calculated conditioned $i$ not getting filtered, so in order to calculate the total expected number impressions that she wins in the auction and her expected spent, it is enough to multiply $eQ_i$ and $eCPM_i$ by $\pi_i$. In algorithm 1 we have marked these steps for each agents.
{\small \begin{algorithm} \label{Alg:eCPM} \DontPrintSemicolon \KwIn{$b$: bids (sorted) , $\pi$: filtering probabilities, $\gamma$: rewards, $reserve$: the reserve price} \KwOut{$eCPM,eQ$} Let $\{1,2,\ldots,I\}$ be the list of all agents such that $b_1\geq b_2\geq \ldots \geq b_I \geq reserve$\; Let $p_{1,0}(\pi)=1$ and $p_{1,r}=0$ for $0<r\leq 3$\; \For{$i \in [I]$}{ \If{$i>1$}{
Let $p_{i,0}(\pi)=(1-\pi_{i-1})p_{i-1,0}(\pi)$\; \smash{\makebox[12.5cm][r]{$\left.\begin{array}{@{}c@{}}\\{}\\{}\\{}\\{}\end{array}\right\} \begin{tabular}{l}Calculating $p_{i,r}(\pi)$\\from $p_{i-1,r}(\pi)$\end{tabular}$}}
\For{$r \in [1,3]$}{
Let $p_{i,r}(\pi)=(1-\pi_{i-1})p_{i-1,r}(\pi) + \pi_{i-1}p_{i-1,r-1}(\pi)$
} } Let $eQ_i=0$\; \For{$r \in [0,3]$}{ \smash{\makebox[11.5cm][r]{$\left.\begin{array}{@{}c@{}}\\{}\\{}\end{array}\right\} \begin{tabular}{l}Calculating expected impression share \\ $eQ_i$from $p_{i,r}(\pi)$\end{tabular}$}} $eQ_i=eQ_i+\gamma_{r+1}.p_{i,r}(\pi)$\; } Let $j=i+1$\; Let $CPM_i=0$\; \uIf{$i=1$ or $\pi_i=1$} { \While{$j\in [I]$ and $\pi_j<1$}{ \uIf{$j=i+1$}{ $q_{i,j}(\pi)=\pi_{j}$\; }\Else{ $q_{i,j}(\pi)=\frac{\pi_j(1-\pi_{j-1})}{\pi_{j-1}}q_{i,j-1}(\pi)$\; } $CPM_i=CPM_i + b_j.q_{i,j}(\pi)$\; $j=j+1$\; } \smash{\makebox[11.5cm][r]{$\left.\begin{array}{@{}c@{}}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\\{}\end{array}\right\} \begin{tabular}{l}Calculating cost per impression\end{tabular}$}} }\Else{ $CPM_i=\frac{CPM_{i-1} - \pi_{i}b_{i}}{1-\pi_{i}}$\; } \If{$j=I+1$}{ \uIf{$i=I$}{ $q_{i}(\pi)=1$\; }\Else{ $q_{i}(\pi)=\frac{1-\pi_{j-1}}{\pi_{j-1}}q_{i,j-1}(\pi)$\; } $CPM_i=CPM_i + reserve.q_i(\pi)$\; } $eCPM_i= eQ_i.CPM_i$ } \caption{Calculating eCPM and $eQ$ of agents in linear time.} \Return $eCPM,eQ$ \end{algorithm} }
If this algorithm is implemented naively, the most expensive computation is computing $p_{i,r}(\pi)$ for all the agents. For each agent, it takes $O(I^3)$ to find all the configurations where there are $r\in [0,3]$ agents who are not filtered and have higher bid than $i$. For each configuration, it takes $O(I)$ to compute its probability. By using dynamic programming, we can compute $p_{i,r}(\pi)$ from $p_{i-1,r}(\pi)$ by considering the cases where agent $i$ is getting filtered and is not getting filtered separately. For initialization, we set $p_{1,0}(\pi)=1$ and $p_{1,r}(\pi)=0$ for $0<r\leq 3$. We use the following update rule to computer $p_{i,r}(\pi)$ for $i>0$: $$ p_{i,r}(\pi)=\begin{cases} (1-\pi_{i-1})p_{i-1,r}(\pi) & r=0\\ (1-\pi_{i-1})p_{i-1,r}(\pi) + \pi_{i-1}p_{i-1,r-1}(\pi) & 0<r \leq 3 \end{cases} $$
This reduces the running time of computing $p_{i,r}$ for each agent from $O(I^4)$ to $O(1)$. The calculations for finding $CPM$ and $eCPM$ can be done in a similar way: instead of computing eCPM for each agent from scratch, we can compute the expected price per impression from the previous calculations. When $i=1$ (she is the highest bidder), or $\pi_i=1$ (she is never getting filtered), first, we set the expected cost per impression to 0 and find the smallest $j>i$ such that $\pi_j=0$. Then, for each $k\in (i,j]$, we find the probability that all the agents $z \in (i,j)$ are getting filtered and $k$ is not getting filtered ($q_{i,k}(\pi)$), multiply it by bid of $k$ ($b_k$) and add the result number to the expected cost per impression. If for all $j>i$, $\pi_j<1$ then we set $j=I+1$ and we assume that $I+1$ is an agent who is never getting filtered and has a bid equal to the reserve price. Note that we also compute $q_{i,k}(\pi)$ from $q_{i,k-1}(\pi)$ in $O(1)$, instead of computing it for each $k$ from scratch.
When $i>1$ and $\pi_i<1$, we use the previous expected cost per impression that we had from the previous agent (lets call it $CPM_{i-1}$) to calculate the cost per impression of agent $i$ ($CPM_{i}$) by doing the following calculation $$CPM_{i}= \frac{CPM_{i-1} - \pi_{i}b_{i}}{1-\pi_{i}}$$ This operation nullifies the effect of agent $i$ in the cost per impression of the previous agent ($i-1$) and calculates the new expected cost per impression. Finally, we set $eCPM_i=eQ_i.CPM_i$. Note that even though the running time of this algorithm may be $O(I)$ for some agents, the total (amortized) running time of these calculations for all the agents combined is still $O(I)$. So overall the algorithm requires amortized $O(1)$ number of calculations for each agent and it takes linear time ($O(I)$) to calculate eCPM and eQ for all the agents, given the sorted list of agents based on their bids. Since we need to sort the agents by their bids at the beginning, the total running time of each iteration in computing the filtering probabilities is $O(I \log(I))$. This improvement in the running time (from $O(I2^I)$ to $O(I)$) is crucial for simulating the outcome of the auction, specially in the auctions where the number of agents is big.
\section{Bid recommendation tool.} To help the advertisers, the platform also provides a bid-recommendation platform-provided bid recommendation. The agents in this market are real-estate agents, who often don't have the data or the analytic tools to do a good job optimizing their bid. In addition, the market participants are real estate agents for whom Zillow may not be the main channel through which they get the ``client leads''. As a result, some agents may be reluctant to engage in active exploration of optimal bidding in the auction market for user impressions. In order to facilitate the bidding for those agents, the platform has developed a tool that recommends the bid for a given bidder based on this bidder's monthly budget. The tool was designed to set the bid that maximizes the expected number of impressions that a given bidder gets given her budget. We now outline the design details of this tool.
For each actual realization of the group of competing bidders, we define the cost function $CPM_j(b_j)$ as a mapping from the bid of bidder $j$ to the price she pays per impression in an auction. Recall that we
defined $Q_j(b_j)$ as a probability of being allocated an impression as an outcome of an auction conditioned on that the agent wasn't filtered.
Note that without the effect of filtering both functions are step functions: whenever bidder $j$ outbids bidder ranked $i$ but ranks below bidder ranked $i-1$, then this bidder $j$ pays the price determined by $i$ $b_j$ between $b_i$ and $b_{i-1}$. In figures \ref{fig2},\ref{fig3} we illustrate the concept of price and the impression probability using the bidders in one of our 6 markets and take the bidder ranked 4 in the first week in the market and bid of 30 (recall that the price units were resealed not to reflect the actual market prices). The figure shows the price and the impression share that this bidder gets if all other bidders are made eligible for this impression.
\begin{figure}
\caption{Spent (agent's bid in red)}
\label{fig2}
\caption{Impression share (agent's bid in red)}
\label{fig3}
\end{figure}
We note that the bidder under consideration has a per impression budget significantly below the cost of an impression (as much as a factor of 10 below).
If the eligibility status for impressions was recorded by the system, then we could compute the empirical fraction of impressions where this given bidder was made eligible for an auction: divide the number of impressions where a given bidder was made eligible for an auction by the total number of arrived impressions. Depending on the impression volume, this can be computed using all impressions from the beginning of the month or some smaller window of time (e.g. the week before). This would be our estimated probability $\pi_j$ of actually getting displayed for an impression.
\paragraph{\textbf{Expected spent and expected impression allocations}} If many bidders are affected by the budget smoothing than the participation of bidders in an auction is random (where randomness is activated by the budget smoothing mechanism). The for each impression instead of the actual spent and the impression share we will have the expected spent and the impression share, where the expectation is taken with respect to the randomness of participation of competing bidders. Given the filtering probabilities, the expected spent and expected impression share are represented by the expectation of spent and impression shares in all possible bidder configurations which are then weighted by the probabilities of those configurations. The participation of each bidder $i$ in a given auction can be represented by a binary variable where $1$ indicates that the bidder is made eligible for the auction and $0$ means that the bidder was made ineligible due to budget smoothing. Then the set of all possible participating bids can be represented by an array of $2^I$ binary $I$-digit numbers from $\{0,0,\ldots,0\}$ to $\{1,1,\ldots,1\}$. Call the set of elements in this array $\mathcal N$. Then a subset of elements of $\mathcal N$ where $i$-th digit is equal to 1 corresponds to the configurations where bidder $i$ is made eligible for an auction. Call this set ${\mathcal N}_i$. Then the expected cost and the expected impression share are computed as \begin{equation}\label{eCPM} eCPM_i(b_i)=\sum\limits_{N \in {\mathcal N}_i}\gamma_i^N\prod_{j \neq i} \pi_j^{n^N_j}(1-\pi_j)^{n^N_j}\,\mbox{PRICE}^N_i(b_i), \end{equation} and $$ eQ_i(b_i)=\sum\limits_{N \in {\mathcal N}_i}\prod_{j \neq i} \pi_j^{n^N_j}(1-\pi_j)^{n^N_j}\,\gamma_i^N, $$ where $N$ corresponds to the index of the set of eligible of bidders.
\paragraph{\textbf{Computation of filtering probabilities}} If the impression allocations are not available, then the probabilities of participation of bidders in impressions have to be computed. We can consider the actual budget smoothing as an iterative process: we continuously evaluate the actual spent for each bidder and when the spent exceeds the allocated budget, then the bidder is made ineligible for some impressions. This iterative process reaches the steady state when the expected spent in an impression for a given bidder becomes equal to the budget: $$ \pi_i \times eCPM_i(b_i)=\mbox{Budget}_i. $$ We can simulate this iterative process for the bidder in one of our selected markets. Note that that bidder has a very low per impression budget of 3.87. We start the process assuming that all the bidders are eligible for an impression. Then using the spent in that impression, we compute the filtering probabilities for all by dividing the budget by the spent and then iterate the process to set $$ \pi_i=\frac{\mbox{Budget}_i}{eCPM_i(b_i)} $$ using the previous iteration values of the eligibility probabilities. The algorithm for computing the probabilities of being displayed on the page is the following.
\paragraph{Iteration 0:} Initialize probabilities of being eligible for an impression at $\pi_{i}^{(0)}=1$. \paragraph{Iteration k:} Take the probabilities of being eligible for an impression $\pi_{i}^{(k-1)}$ computed from the previous iteration. Compute eCPM from (\ref{eCPM}) for each bidder $i=1,\ldots,I$. If $eCPM_i(b_i)=0$, then set the probability $\pi_i=1$ (bidder is always displayed, this bidder never gets any impressions as an outcome of the auction). If $eCPM_i(b_i)>0$, then set $$ \pi_{i}^{(k)}=\min\left\{1,\,\frac{\mbox{Budget}_i}{eCPM_i(b_i)}\right\}. $$
\paragraph{Stopping criterion:} Stop when the probabilities become close across the iterations: $\max_i|\pi_{i}^{(k)}-\pi_{i}^{(k-1)}|<\epsilon$, for a given tolerance criterion.
We illustrate the trajectory across the iterations the bidder of interest on the following figure.
\begin{figure}
\caption{Iteration path for probability of eligibility, eCPM and expected spent}
\end{figure} At the end of the iterative process the expected spent approaches the budget due to the increase in the filtering probability. Note that the expected CPM significantly increases in response to the change in the filtering probabilities for all other bidders.
The randomness increases the eCPM and the probability of allocation into an impression as compared to the fully deterministic case (i.e. when bidders are not randomly removed from impressions due to budget smoothing). The figure below demonstrates the expected CPM and expected fraction of impressions (after budget smoothing) for bidder with bid of 30 and per impression budget of 3.87.
\begin{figure}
\caption{Spent (actual bid in red)}
\caption{Impression fraction (actual bid in red)}
\end{figure}
Note that an increase in the auction outcomes (probability of being allocated an impression and the eCPM is compensated with a decrease in the probability of being eligible for an auction).
\paragraph{\textbf{Computation of the optimal bid for impression ROI optimizers}} We note that the eCPM and allocation probabilities are monotone functions of the bid. As a result, if a given bidder maximizes the probability of appearing in an impression as a function of the bid, the optimal bid will be set such that (a) the expected spent does not exceed the per impression budget; (b) an increase in the bid will result in an increase in the spent exceeding the budget. Note that the spent is (non-strictly) monotone increasing until it reaches the level of the per impression budget and then it stays at the level equal to the budget due to budget smoothing. Assuming that the bidders do not have the ``values of residual budget", this means that the bidder whose budget per impression exceeds any other budgets should set the bid at the level equal to the budget. Note that the deviation from this strategy will not be optimal: a decrease in the bid for such a bidder results in ``budget savings" that have no value for this bidder, but at the same time it will result in a (weak) decrease in the number of impressions.
The tool that computes the optimal best response for such a bidder proceeds in the following way.
\paragraph{Construction of the grid of bids} We construct the grid of bids of opponent bidders. These are the points where the spent function exhibits jumps.
\paragraph{Construction of the eCPM curve} We construct the eCPM curve. By choosing a small $\epsilon$ (smaller than the minimum distance between the closest score-weighted bids), we evaluate the changes in the eCPM after a given bidder outbids and under-bids the opponent by $\epsilon$.
\paragraph{Computation of the optimal bid} Set the bid to the level where the eCPM curve intersects the horizontal line corresponding to the budget.
\paragraph{Adjustment of the bid for top/bottom bidders} The top bidder sets the bid at the level equal to the per impression budget, the bottom bidder sets the bid to the maximum level that makes the spent positive.
Note that if there are $I$ bidders, this approach amounts to $2\times I$ evaluations of the eCPM function. The picture below demonstrates the shift to the optimal bid for the bidder under consideration by equating this bidder's eCPM with the per impression budget.
\begin{figure}
\caption{Optimization of impression ROI}
\caption{The impact of budget smoothing on expected spent}
\end{figure}
We note what happens to the actual spent of the bidder whenever the bid exceeds the recommended level. Given that at the recommended level the bidder's spent is at or below the per impression budget, if the bid increases then the budget smoothing gets initiated. The overall spent per impression is equal to the product of the probability of being eligible for an auction ($\pi_i$) and the expected outcome of an auction ($eCPM_i(b_i)$) $$ \pi_i\,eCPM_i(b_i). $$ Thus, whenever the budget smoothing is initiated ($\pi_i<1$) then the spent is exactly equal to the budget. Thus the spent as a function of the bid will become flat once the optimal bid has been exceeded.
The probability of being allocated an impression is equal to the product of the probability of being eligible for an auction ($\pi_i$) and the probability of being allocated an impression as an outcome of an auction ($eQ_i(b_i)$). We note that since the GSP is monotone, the probability of being allocated an impression as an outcome of an auction increases in the bid: the higher the bid, the higher the probability of being displayed. When the budget smoothing is not initiated, then $\pi_i=1$ and the probability of appearing on the page is simply $eQ_i(b_i)$ (increasing in the bid). When the bid exceeds the optimal level, then the probability of being eligible for an impression is $ \pi_i=\mbox{Budget}_i /eCPM_i(b_i)$ leading to the probability of being allocated an impression of $$ \mbox{Budget}_i \times \frac{eQ_i(b_i)}{eCPM_i(b_i)}. $$ This function decreases as a function of bid. This means that if the per impression budget does not warrant a given bidder the top position without filtering, then the probability of getting an impression increases up to the optimal bid level and then decreases whenever the bid starts exceeding the optimal level.
\paragraph{\textbf{Budget and bid recommendations based on the impression targets}} We can use the ``expected impression" model to make the recommendations for the choice of the monthly budget and the corresponding bid that meet a given impression target. Note that due to the budget smoothing, the expected spent in a given impression $$ \mbox{Spent}_i(b_i)=\pi_i\,eCPM_i(b_i) \leq \mbox{Budget}_i. $$ The inequality may not be binding due to the possible jumps in the $eCPM$ curve. We note that the expected probability of appearing in the impression is $$ \mbox{Prob}_i\left(b_i,\,\mbox{Budget}_i\right)=\pi_i\,eQ_i(b_i). $$ Consider this probability as a function of the bid and the budget, taking into account our model of filtering due to budget smoothing. $$ \mbox{Prob}_i\left(b_i,\,\mbox{Budget}_i\right)= \left\{ \begin{array}{ll} eQ_i(b_i),&\;\;\mbox{if}\; eCPM_i(b_i) \leq \mbox{Budget}_i,\\ \mbox{Budget}_i\frac{eQ_i(b_i)}{eCPM_i(b_i) },&\;\;\mbox{if}\; eCPM_i(b_i) > \mbox{Budget}_i.\\ \end{array} \right. $$ The expected impression count is obtained by multiplying the probability of appearing on the page $\mbox{Prob}_i\left(b_i,\,\mbox{Budget}_i\right)$ by the total projected impression inventory. Note that function $\mbox{Prob}_i\left(b_i,\,\mbox{Budget}_i\right)$ is increasing in the bid $b_i$ up to the bid $b_i^*(\mbox{Budget}_i)$ such that $$ eCPM_i\left(b_i^*(\mbox{Budget}_i)\right) = \mbox{Budget}_i $$ and decreases when the bid is greater than $b_i^*(\mbox{Budget}_i)$. Therefore, the expected number of impressions is maximized for a given budget at $b_i=b_i^*(\mbox{Budget}_i)$.
Let $\mbox{Inventory}$ be the total impression inventory and $\mbox{Goal}_i$ be the impression target for bidder $i$. Then the optimum bid for a given budget is set as $$ \mbox{Prob}_i\left(b_i,\,\mbox{Budget}_i\right) \leq\frac{\mbox{Goal}_i}{\mbox{Inventory}}. $$ The minimum budget per impression for which the impression goal is met is $$ \mbox{Budget}_i=eCPM_i(b_i), $$ leading to $$ eQ_i(b_i)=\frac{\mbox{Goal}_i}{\mbox{Inventory}}. $$ To determine the recommendations of the bid and the budget based on the expressions above, we formulate the following problem: find the profile of the probabilities of eligibility for an auction $\pi_1,\pi_2,\ldots,\pi_I$ and the optimal bid of bidder $i$ $b_i^*$ such that \begin{enumerate} \item $\pi_i=1$ for bidder of interest $i$; \item $eQ_i(b_i^*)=\frac{\mbox{Goal}_i}{\mbox{Inventory}}$. \end{enumerate} Note that this is equivalent to solving a system of equations \begin{equation}\label{one bidder} \begin{array}{ll} \pi_j=\min\left\{\frac{\mbox{Budget}_j}{eCPM_j(b_j)},\,1 \right\},\,j \neq i,\\ \pi_i=1,\\ eQ_i(b_i^*)=\frac{\mbox{Goal}_i}{\mbox{Inventory}}.\\ \end{array} \end{equation} with unknowns $\pi_j$, $j \neq i$ and $b_i^*$. The recommended bid is the solution $b_i^*$ and the budget recommendation is given by multiplying the per impression budget $$ \mbox{Budget}^*_i=eCPM_i(b_i^*) $$ by the projected impression inventory.
We also need to account for possible corner solutions that do not allow the equality $eQ_i(b_i^*)=\frac{\mbox{Goal}_i}{\mbox{Inventory}}$ to be satisfied. First, suppose that for the grid of bids $\{b_k^g=\frac{s_k\,b_k}{s_i}\}^I_{k=1}$ we observe $$ \max_k\,eQ_i(b^g_k)<\frac{\mbox{Goal}_i}{\mbox{Inventory}}. $$ Then the optimal bid $b_i^*=\max_k\,b^g_k$. In that case we set the budget $\mbox{Budget}^*_i=b_i^*>eCPM_i(b_i^*)$. The rationale for this is that the top bidder does not have an incentive to set the bid below the budget as that would lead to a weak decrease in the expected number of impressions while not decreasing the expected spent.
Second, suppose that $$ \min_k\,eQ_i(b^g_k)>\frac{\mbox{Goal}_i}{\mbox{Inventory}}. $$ In that case for any bid level the bidder will be subject to budget smoothing. Thus the optimal bid will correspond to $$ b_i^*=\min_k\,b^g_k. $$ The recommended budget will correspond to $$ \mbox{Budget}_i^*=\frac{\mbox{Goal}_i}{\mbox{Inventory}}\, \frac{eCPM_i(b_i^*) }{eQ_i(b_i^*)}. $$
\paragraph{\textbf{Simultaneous optimization for multiple bidders}} The automated optimization for multiple bidders is based on a simple generalization of the single bidder problem. We note that for all bidders whose bids and budgets are optimized to meet the impression goals, we need to solve a system of equations equivalent to (\ref{one bidder}) to find bids $b_j^*$. Let $\mathcal J$ be the subset of bidders who use the bid and budget recommendation. Then we find the set of recommended bids $\{b_j^*,\,j \in {\mathcal J}\}$ by solving the system of equations
\begin{equation}\label{many bidders} \begin{array}{l} \pi_k=\min\left\{\frac{eCPM_k(b_k)}{\mbox{Budget}_k},\,1 \right\},\,k \not\in {\mathcal J},\\ \pi_k = 1,\,k \in {\mathcal J}\\ eQ_k(b_k^*)=\frac{\mbox{Goal}_k}{\mbox{Inventory}},\,k \in {\mathcal J}.\\ \end{array} \end{equation}
We also take into account the ``corner solutions" corresponding to very low and very high impression goals relative to the available inventory.
\paragraph{\textbf{Formal integrity tests for tool performance}} The structure of the rank-based auction leads to the set of properties that have to be satisfied by the optimal solutions for bids and budgets. We can use these properties to construct the tests for the performance of the recommendation tool. \begin{enumerate} \item For any $\tau>0$, if $b_k,\,k=1,\ldots,I$ is the solution of (\ref{many bidders}), then if the filtering probabilities are fixed, then the replacement of $b_k$ with $\tau\, b_k$ does not change the predicted impression counts $eQ_k(\tau\,b_k)\times \mbox{Inventory}$. \item For any $\tau>0$, if $b_k,\,k=1,\ldots,I$ is the solution of (\ref{many bidders}), then if the filtering probabilities are fixed, then the replacement of $b_k$ with $\tau\, b_k$ leads to the proportional increase in the predicted total spent $eCPM_k(\tau\,b_k)\times \mbox{Inventory}=\tau\,eCPM_k(b_k)\times \mbox{Inventory}$. \item For any $\tau>0$, if the inventory changes to $\tau\,\mbox{Inventory}$ and all impression goals change to $\tau\,\mbox{Goal}_i$, then the optimal bids and per impression recommended budgets remain the same. \item The ratio $\frac{eQ_i(b_i)}{eCPM_i(b_i)}$ is a (weakly) monotone decreasing function of the bid. In other words, for grid points $\{b_k^g\}^I_{k=1}$, $b^g_m>b^g_n$ should lead to $\frac{eQ_i(b_m^g)}{eCPM_i(b_m^g)}<\frac{eQ_i(b_n^g)}{eCPM_i(b_n^g)}$. \end{enumerate}
\section{Methodology for estimation of values and regret.} Recall the notion of regret and rationalizable set from Section \ref{sec:trust}. The structure of the rationalizable set for 9 of the bidders most frequently changing bids in each of the 6 markets we analyzed is shown in Figures \ref{Fig:HistBidChangeFrequencies:12}-\ref{Fig:HistBidChangeFrequencies:56}. The rationality assumption of the inequality (\ref{eqn:eps-regret}) models players who may be learning from experience while participating in the game. We assume that the strategies $b_{it}$ and environment parameters $\theta^t$ are input simultaneously, so agent $i$ cannot pick his strategy dependent on the state of nature $\theta^t$ or the strategies of other agents $b_{-i,t}$. This makes the standard of a single best strategy $b$ natural, as chosen strategies cannot depend on $\theta^t$ or $b_{-i,t}$. Beyond this, we do not make any assumption on what information is available for the agents, and how they choose their strategies.
We can specialize the definition of the rationalizable set in (\ref{eqn:eps-regret}) to auctions for randomly arriving impressions by introducing functions \begin{equation} \Delta \mbox{eCPM}_i(b')= \frac{1}{T} \sum_{t=1}^{T}\left( \mbox{eCPM}_{it}(b')-\mbox{eCPM}_{it}(b_{it})\right),\; \mbox{and}\; \Delta eQ_i(b')=\frac{1}{T} \sum_{t=1}^{T} \left(eQ_{it}(b')-eQ_{it}(b_{it})\right), \end{equation} corresponding to an aggregate outcome in $T$ time periods from switching to a fixed bid $b'$ from the actually applied bid sequence $\{b_{it}\}^T_{t=1}$. The $\epsilon$-regret condition reduces to: \begin{equation}\label{eqn:halfplanes} \forall b'\in {\mathbb R}_+: v_i\cdot \Delta eQ_i(b') \leq \Delta \mbox{eCPM}_i(b') + \epsilon_i \end{equation} for each bidder $i$. Hence, the rationalizable set ${\mathcal NR}$ is an envelope of the family of half planes obtained by varying $b \in {\mathbb R}_+$ in Equation \eqref{eqn:halfplanes}.
Under suitable assumptions regarding the expected auction outcomes $eQ_{it}(\cdot)$ and $eCMP_{it}(\cdot)$ in bidder $i$'s bid, such as continuity and monotonicity, one can establish basic geometric properties of the rationalizable set, such as its convexity and closedness. \cite{NST:2015} find a simple geometric characterization of the ${\mathcal NR}$ set that also implies an efficient algorithm for computing that set. Since closed convex bounded sets are fully characterized by their boundaries, we can use the notion of the {\it support} function to represent the boundary of the set ${\mathcal NR}$. The support function of a closed convex set $X$ is $ h(X,u)=\sup_{x \in X}\langle x,u\rangle, $ where in our case $X={\mathcal NR}$ is a subset of ${\mathbb R}^2$ or value and error pairs $(v_i,\epsilon_i)$, and then $u$ is also an element of ${\mathbb R}^2$.
An important property of the support function is the way it characterizes closed convex bounded sets. Denote by $d_H(A,B)$ the Hausdorf distance between convex compact sets $A$ and $B$. Recall that the Hausdorf norm for subsets $A$ and $B$ of the metric space $E$ with metric $\rho(\cdot,\cdot)$ is defined as $$ d_H(A,B)=\max\{\sup\limits_{a \in A}\inf\limits_{b \in B}\rho(a,b),\, \sup\limits_{b \in B}\inf\limits_{a \in A}\rho(a,b)\}. $$
It turns out that $d_H(A,B)=\sup_u|h(A,u)-h(B,u)|$. Therefore, if we find $h({\mathcal NR},u)$, this will be equivalent to characterizing ${\mathcal NR}$ itself. The following result fully characterizes the support function of the set ${\mathcal NR}$ based on the aggregate auction ouctomes $\Delta \mbox{eCPM}_i(\cdot)$ and $\Delta eQ_i(\cdot)$: \begin{theorem} Under monotonicity of $\Delta \mbox{eCPM}_i(\cdot)$ and $\Delta eQ_i(\cdot)$ the support function of ${\mathcal NR}$ is function $h\,:\,\{(u_1,u_2)\,:\,u_1,u_2 \in {\mathbb R},\;u_1^2+u_2^2=1\} \mapsto {\mathbb R}_+$ such that $$ h({\mathcal NR},u)=\left\{\begin{array}{ll}
|u_2| \Delta eQ_i\left(\Delta \mbox{eCPM}^{-1}_i\left(\frac{u_1}{|u_2|}\right)\right),\,\mbox{if}\, u_2<0,&\; \mbox{if}\,\frac{u_1}{|u_2|}\in \left[\inf_b \Delta \mbox{eCPM}_i,\, \sup_{b}\Delta \mbox{eCPM}_i(b)\right]\;\; \\ +\infty, & \mbox{otherwise}. \end{array} \right. $$ \end{theorem}
This theorem is the identification result for valuations and algorithm parameters for $\epsilon$-regret learning algorithms. Unlike equilibrium settings that we discussed above, we cannot pin-point the values of players. At the same time, the characterization of the set ${\mathcal NR}$ reduces to evaluation of two one-dimensional functions. We can use efficient numerical approximation for such an evaluation. The shape of the set ${\mathcal NR}$ will generally depend on the parameters of a concrete algorithm used for learning. Thus the analysis of the geometry of ${\mathcal NR}$ can help us not only to estimate valuations of players but the learning algorithm as well.
The inference for the set ${\mathcal NR}$ reduces to the characterization of its support functions which only requires to evaluate the function $\Delta e Q_i\left(\Delta \mbox{eCPM}_i^{-1}\left(\cdot\right)\right)$. It is a one-dimensional function and can be estimated from the data via direct simulation.
Since our object of interest is the set ${\mathcal NR}$ we need to characterize the distance between the true set ${\mathcal NR}$ and the set $\widehat{{\mathcal NR}}$ that is obtained from subsampling the data. \cite{NST:2015} show that the characterization of the properties of the estimated rationalizable set reduces to the description of the properties of a single dimensional function $ f(\cdot)=\Delta eQ_i\left(\Delta \mbox{eCPM}_i^{-1}\left(\cdot\right)\right) $. and let $\widehat{f}(\cdot)$ be its empirical analog recovered from the data. The set ${\mathcal NR}$ is characterized by its support function $h({\mathcal NR},u)$. Then using the relationship between the Hausedorf norm and the sup-norm of the support functions we can write $$
d_H(\widehat{{\mathcal NR}},\,{\mathcal NR})=\sup\limits_{\|u\|=1}|h(\widehat{{\mathcal NR}},u)-h({\mathcal NR},u)|
\leq \sup\limits_{z}\left| \widehat{f}\left(z\right)-
{f}\left(z\right)\right|, $$ The empirical analog of function $f(\cdot)$ can be directly estimated from the data via subsampling of auctions. The properties of the estimated set $\widehat{{\mathcal NR}}$ are thus determined by the properties of function $f(\cdot)$. In particular, if function $f$ has derivative up to order $k \geq 0$ and for some $L \geq 0$, $
|f^{(k)}(z_1)-f^{(k)}(z_2)| \leq L|z_1-z_2|^{\alpha}, $ then with probability approaching 1 the Hausedorf distance between the true and estimated rationalizable set can be bounded as $$ d_H(\widehat{{\mathcal NR}},\,{\mathcal NR}) \leq O((N^{-1}\log\,N)^{\gamma/(2\gamma+1)}),\;\;\gamma=k+\alpha $$ with probability approaching 1 as $N \rightarrow \infty$, where $N=n \times T$ is the total number of samples available (with $T$ auctions and $n$ players in each).
\section{Additional tables and figures.} \begin{figure}
\caption{Average fraction of time agents follow the recommended bid separated by clusters and regions.}
\label{Fig:FollowingSugBidRegions}
\end{figure}
\begin{figure}
\caption{Rationalizable set for 9 agents most frequently changing bids}
\label{Fig:HistBidChangeFrequencies:12}
\end{figure}
\begin{figure}
\caption{Rationalizable set for 9 agents most frequently changing bids}
\label{Fig:HistBidChangeFrequencies:34}
\end{figure}
\begin{figure}
\caption{Rationalizable set for 9 agents most frequently changing bids}
\label{Fig:HistBidChangeFrequencies:56}
\end{figure}
\end{document} | arXiv |
\begin{document}
\title{Homoclinic tangencies and singular hyperbolicity for three-dimensional vector fields}
\author{Sylvain Crovisier \and Dawei Yang \footnote{S.C was partially supported by the ANR projects \emph{DynNonHyp} BLAN08-2 313375 and \emph{ISDEEC} ANR-16-CE40-0013. by the Balzan Research Project of J. Palis and by the ERC project 692925 \emph{NUHGD}. D.Y. was partially supported by NSFC 11271152, ANR project \emph{DynNonHyp} BLAN08-2313375 and and A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions(PAPD).}}
\date{\today}
\maketitle
\begin{abstract} We prove that any vector field on a three-dimensional compact manifold can be approximated in the $C^1$-topology by one which is singular hyperbolic or by one which exhibits a homoclinic tangency associated to a regular hyperbolic periodic orbit. This answers a conjecture by Palis~\cite{Pal00}.
During the proof we obtain several other results with independent interest: a compactification of the rescaled sectional Poincar\'e flow and a generalization of Ma\~n\'e-Pujals-Sambarino theorem for three-dimensional $C^2$ vector fields with singularities. \end{abstract}
{\small \tableofcontents }
\section{Introduction} \subsection{Homoclinic tangencies and singular hyperbolicity} A main problem in differentiable dynamics is to describe a class of systems as large as possible. This approach started in the 60's with the theory of \emph{hyperbolic systems} introduced by Smale and Anosov, among others. A flow $(\varphi_t)_{t\in \RR}$ on a manifold $M$, generated by a vector field $X$, is hyperbolic if its chain-recurrent set~(defined in \cite{Con}) is the finite union of invariant sets $\Lambda$ that are hyperbolic: each one is endowed with an invariant splitting into continuous sub-bundles
$$TM|{_\Lambda}=E^s\oplus (\RR X) \oplus E^u$$ such that $E^s$ (resp. $E^u$) is uniformly contracted by $D\varphi_T$ (resp. $D\varphi_{-T}$) for some $T>0$. The dynamics of these systems has been deeply described.
The set of hyperbolic vector fields is open and dense in the space $\cX^r(M)$ of $C^r$-vector fields when $M$ is an orientable surface and $r\geq 1$~\cite{Pe} or when $M$ is an arbitrary surface and $r=1$~\cite{Pu}. Smale raised the problem of the abundance of hyperbolicity for higher dimensional manifolds. Newhouse's work \cite{Ne1} for surface diffeomorphisms implies that hyperbolicity is not dense in the spaces $\cX^r(M)$, $r\ge 2$, once the dimension of $M$ is larger or equal to three. Indeed a bifurcation called \emph{homoclinic tangency} leads to a robust phenomenon for $C^2$-vector fields which is expected to be one of the main obstructions to hyperbolicity. A vector field $X$ has a homoclinic tangency if there exist a hyperbolic non-singular periodic orbit $\gamma$ and an intersection $x$ of the stable and unstable manifolds of $\gamma$ which is not transverse (i.e. $T_xW^s(\gamma)+T_xW^u(\gamma)\neq T_xM$). It produces rich wild behaviors.
The flows with singularities may have completely different dynamics. The class of three-dimensional vector fields contains a very early example of L. N. Lorenz \cite{Lo}, that he called ``butterfly attractors''. Trying to understand this example, some robustly non-hyperbolic attractors (which are called ``geometrical Lorenz attractors'') were constructed by \cite{ABS,Gu,GW}. The systems cannot be accumulated by vector fields with homoclinic tangencies. While they are less wild than systems in the Newhouse domain, this defines a new class of dynamics, where the lack of hyperbolicity is related to the presence of a singularity.
Morales, Pacifico and Pujals \cite{MPP} have introduced the notion of \emph{singular hyperbolicity} to characterize these Lorenz-like dynamics. A compact invariant set $\Lambda$ is called singular hyperbolic if either $X$ or $-X$ satisfies the following property. There exists an invariant splitting into continuous sub-bundles
$$TM|{_\Lambda }=E^{s}\oplus E^{cu}$$ and a constant $T>0$ such that: \begin{itemize} \item[--] domination:
$\forall x\in\Lambda, u\in E^s(x)\setminus\{0\}, v\in E^{cu}(x)\setminus\{0\}, \; \frac{\|{D\varphi_{T}}.u\|}{\|u\|}\leq 1/2\frac{\|{D\varphi_{T}}.v\|}{\|v\|},$
\item[--] contraction: $\forall x\in\Lambda, \|{D\varphi_T}|_{E^s}(x)\|\leq 1/2,$
\item[--] sectional expansion: $\forall x\in\Lambda, \forall P \in \operatorname{Gr}_2(E^{cu}(x)),\; |\text{Jac}({D\varphi_{-T}}|_{P})|\leq 1/2.$ \end{itemize} When $\Lambda\cap {\operatorname{Sing}}(X)=\emptyset$ this notion coincides with the hyperbolicity. A flow is \emph{singular hyperbolic} if its chain-recurrent set is a finite union of singular-hyperbolic sets. This property defines an open subset in the space of $C^1$ vector fields. Such flow has good topological and ergodic properties, see~\cite{AP}. Note that hyperbolicity implies singular hyperbolicity by this definition.
Palis ~\cite{palis,Pal00,Pal05,Pal08} formulated conjectures for typical dynamics of diffeomorphisms and vector fields. He proposed that homoclinic bifurcations and Lorenz-like dynamics are enough to characterize the non-hyperbolicity. For three-dimensional manifolds, this is more precise (see also~\cite[Conjecture 5.14]{BDV}):
\begin{Conj*}[Palis] For any $r\geq 1$ and any three dimensional manifold $M$, every vector field in $\cX^r(M)$ can be approximated by one which is hyperbolic, or by one which display a homoclinic tangency, a singular hyperbolic attractor or a singular hyperbolic repeller. \end{Conj*}
In higher topologies $r>1$, such a general statement is for now out of reach, but more techniques have been developed in the $C^1$-topology~\cite{C-asterisque}. This allows us to prove the conjecture above for $r=1$. This has been announced in~\cite{CY}.
\begin{maintheorem} On any three-dimensional compact manifold $M$, any $C^1$ vector field can be approximated in $\cX^1(M)$ by singular hyperbolic vector fields, or by ones with homoclinic tangencies. \end{maintheorem}
An important step towards this result was the dichotomy between hyperbolicity and homoclinic tangencies for surface diffeomorphisms by Pujals and Sambarino~\cite{PS1}. Arroyo and Rodriguez-Hertz then obtained~\cite{ARH} a version of the theorem above for vector fields without singularities. The main difficulty of the present paper is to address the existence of singularities.
The Main Theorem allows to extend Smale's spectral theorem for the $C^1$-generic vector fields far from homoclinic tangencies. Recall that an invariant compact set $\Lambda$ is \emph{robustly transitive} for a vector field $X$ if there exists a neighborhood $U$ of $\Lambda$ and $\cU\subset \cX^1(M)$ of $X$ such that, for any $Y\in \cU$, the maximal invariant set of $Y$ in $U$ is transitive (i.e. admits a dense forward orbit).
\begin{Corollary}\label{c.main} If $\operatorname{dim}(M)=3$, there exists a dense open subset $\cU \subset \cX^1(M)$ such that, for any vector field $X\in \cU$ which can not be approximated by one exhibiting a homoclinic tangency, the chain-recurrent set is the union of finitely many robustly transitive sets. \end{Corollary}
When $\operatorname{dim}(M)=3$, there exists Newhouse domains in $\cX^r(M)$, $r\geq 2$. But we note that there is no example of a non empty open set $\cU\subset \cX^1(M)$ such that homoclinic tangencies occur on a dense subset of $\cU$. This raises the following conjecture.
\begin{Conjecture} If $\operatorname{dim}(M)=3$, any vector field can be approximated in $\cX^1(M)$ by singular hyperbolic ones. \end{Conjecture}
Even for non-singular vector fields, the conjecture above is open. It claims the density of hyperbolicity and it has a counterpart for surface diffeomorphisms, sometimes called Smale's conjecture.
The chain-recurrent set naturally decomposes into invariant compact subsets that are called \emph{chain-recurrence classes} (see~\cite{Con} and Section~\ref{ss.chain}). The conjecture holds if one shows that for $C^1$-generic vector fields, any chain-recurrence class has a dominated splitting (see Theorem~\ref{t.GY} below). An important case would be to rule out for $C^1$-generic vector fields the existence of non-trivial chain-recurrence classes containing a singularity with a complex eigenvalue.
Note that the conjecture also asserts that for typical $3$-dimensional vector fields, the non-trivial singular behaviors only occur inside Lorenz-like attractors and repellers.
\subsection{Dominated splittings in dimension 3} The first step for proving the hyperbolicity or the singular hyperbolicity is to get a dominated splitting for the tangent flow $D\varphi$. For surface diffeomorphisms far from homoclinic tangencies, this has been proved in~\cite{PS1}. For vector fields, it is in general much more delicate. Indeed one has to handle with sets which may contain both regular orbits (for which $\RR X$ is a non-degenerate invariant sub-bundle) and singularities: for instance it is not clear how to extend the tangent splittings at a singularity and alongs its stable and unstable manifolds.
Since the flow direction does not see any hyperbolicity, it is fruitful to consider another linear flow that has been defined by Liao~\cite{Lia63}. To each vector field $X$, one introduces the singular set ${\rm Sing}(X)=\{\sigma\in M:~X(\sigma)=0\}$ and the \emph{normal bundle} $\cN$ which is the collection of subspaces ${\cal N}_x=\{v\in T_x M:~\left<X(x),v\right>=0\}$ for $x\in M\setminus{\rm Sing}(X)$. One then defines the \emph{linear Poincar\'e flow} $(\psi_t)_{t\in \RR}$ by projecting orthogonally on $\cN$ the tangent flow:
$$\psi_t(v)={\rm D}\varphi_t(v)-\frac{\left<{\rm D}\varphi_t(v),X(\varphi_t(x))\right>}{\|X(\varphi_t(x))\|^2}X(\varphi_t(x)).$$
If $\Lambda\subset M$ is a (not necessarily compact) invariant set, an invariant continuous splitting
$TM|_{\Lambda}=E\oplus F$ is dominated if it satisfies the first item of the definition of singular hyperbolicity stated above. We also say that the linear Poincar\'e flow over $\Lambda\setminus{\rm Sing}(X)$ admits a \emph{dominated splitting}
when there exists a continuous invariant splitting ${\cal N}|_{\Lambda\setminus{\rm Sing}(X)}={\cal E}\oplus {\cal F}$ and a constant $T>0$ such that
$$\forall x\in\Lambda\setminus{\rm Sing}(X), u\in {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\setminus\{0\}, v\in {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)\setminus\{0\}, \quad \frac{\|{\psi_{T}}.u\|}{\|u\|}\leq \frac 1 2\;\frac{\|{\psi_{T}}.v\|}{\|v\|}.$$
A dominated splitting on $\Lambda$ for the tangent flow $D\varphi$ always extends to the closure of $\Lambda$: for that reason, one usually considers compact sets. But the dominated splittings of the linear Poincar\'e flow can not always be extended to the closure of the invariant set $\Lambda$ since the closure of $\Lambda$ may contain singularities, where the linear Poincar\'e flow is not defined. It is however natural to consider the linear Poincar\' e flow: for vector fields away from systems exhibiting a homoclinic tangency, the natural splitting of a hyperbolic saddles is dominated for $\psi$ (see~\cite{GY}), but this is not the case in general for $D\varphi$. In particular the existence of dominated splitting for the linear Poincar\'e flow does not imply the existence of a dominated splitting for the tangent flow.
However the equivalence between these two properties holds for $C^1$-generic vector fields on chain-transitive sets (whose definition is recalled in Section~\ref{ss.chain}).
\begin{theoremalph}\label{Thm-domination} When $\operatorname{dim}(M)=3$, there exists a dense G$_\delta$ subset $\cG\subset\cX^1(M)$ such that for any $X\in\cG$ and any chain-transitive set $\Lambda$ (which is not reduced to a periodic orbit or a singularity), the linear Poincar\'e flow over $\Lambda\setminus {\rm Sing}(X)$ admits a non-trivial dominated splitting if and only if the tangent flow over $\Lambda$ does. \end{theoremalph}
The Main Theorem will then follow easily: as already mentioned, far from homoclinic tangencies, the linear Poincar\'e flow is dominated, hence the tangent flow is also. The singular hyperbolicity then follows from the domination of the tangent flow, as it was shown in~\cite{ARH} for chain-recurrence classes without singularities and in a recent work by Gan and Yang~\cite{GY} for the singular case.
Theorem A is a consequence of a similar result for (non-generic) $C^2$ vector fields.
\begin{Theorem A'}[Equivalence between dominated splittings] When $\operatorname{dim} M=3$, Consider a $C^2$ vector field $X$ on $M$ with a flow $\varphi$ and an invariant compact set $\Lambda$ with the following properties: \begin{itemize} \item[--] Any singularity $\sigma\in \Lambda$ is hyperbolic, has simple real eigenvalues; the smallest one is negative and its invariant manifold satisfies $W^{ss}(\sigma)\cap \Lambda=\{\sigma\}$. \item[--] For any periodic orbit in $\Lambda$, the smallest Lyapunov exponent is negative. \item[--] There is no subset of $\Lambda$ which is a repeller supporting a dynamics which is the suspension of an irrational circle rotation. \end{itemize} Then the tangent flow $D\varphi$ on $\Lambda$ has a dominated splitting
$TM|_{\Lambda}=E\oplus F$ with $\operatorname{dim}(E)=1$ if and only if the linear Poincar\'e flow on $\Lambda\setminus {\rm Sing}(X)$ has a dominated splitting. \end{Theorem A'}
\subsection{Compactification of the normal flow}\label{ss.compactification} In this paper, we use techniques for studying flows that may be useful for other problems.
\paragraph{Local fibered flows.} In order to analyze the tangent dynamics and to prove the existence of a dominated splitting over a set $\Lambda$, one needs to analyze the local dynamics near $\Lambda$. For a diffeomorphism $f$, one usually lifts the local dynamics to the tangent bundle: for each $x\in M$, one defines a diffeomorphism $\widehat f_x\colon T_xM\to T_{f(x)}M$, which preserves the $0$-section (i.e. $\widehat f_x(0_x)=0_{f(x)}$ and is locally conjugated to $f$ through the exponential map. It defines in this way a local fibered system on the bundle $TM\to M$. For flows one introduces a similar notion.
\begin{Definition}[Local fibered flow]\label{d.local-flow} Let $(\varphi_t)_{t\in \RR}$ be a continuous flow over a compact metric space $K$, and let $\cN\to K$ be a continuous Riemannian vector bundle. A \emph{local $C^k$-fibered flow} $P$ on $\cN$ is a continuous family of $C^k$-diffeomorphisms $P_{t}\colon \cN_x\to \cN_{\varphi_t(x)}$, for $(x,t)\in K\times \RR$, preserving $0$-section with the following property.
There is $\beta_0>0$ such that for each $x\in K$, $t_1,t_2\in \RR$, and $u\in \cN_x$ satisfying
$$\|P_{s.t_1}(u)\|\leq \beta_0 \text{ and } \|P_{s.t_2}(P_{t_1}(u))\|\leq \beta_0 \text{ for each }s\in[0,1],$$ then we have $$P_{t_1+t_2}(u)=P_{t_2}\circ P_{t_1}(u).$$ \end{Definition}
For a vector field $X$, a natural way to lift the dynamics is to define the \emph{Poincar\'e map} by projecting the normal spaces $\cN_x$ and $\cN_{\varphi_t(x)}$ above two points of a regular orbit using the exponential map\footnote{When $x$ is periodic and $t$ is the period of $x$, this map is defined by Poincar\'e to study the dynamics in a neighborhood of a regular periodic orbit.}. Then the Poincar\'e map $P_t$ defines a local diffeomorphism from $\cN_x$ to $\cN_{\varphi_t(x)}$. The advantage of this construction is that the dimension has been dropped by $1$.
\paragraph{Extended flows.} A new difficulty appears: the domain of the Poincar\'e maps degenerate near the singularities. For that reason one introduces the \emph{rescaled sectional Poincar\'e flow}:
$$P^*_t(u)=\|X(\varphi_t(x))\|^{-1}\cdot P_t(\|X(x)\|\cdot u).$$
In any dimension this can be compactified as a fibered lifted flow, assuming that the singularities are not degenerate.
\begin{theoremalph}[Compactification]\label{t.compactification} Let $X$ be a $C^k$-vector field, $k\geq 1$, over a compact manifold $M$. Let $\Lambda\subset M$ be a compact set which is invariant by the flow $(\varphi_t)_{t\in\RR}$ associated to $X$ such that $DX(\sigma)$ is invertible at each singularity $\sigma\in \Lambda$.
Then, there exists a topological flow $(\widehat \varphi_t)_{t\in \RR}$ over a compact metric space $\widehat \Lambda$, and a local $C^k$-fibered flow $(\widehat P^*_t)$ over a Riemannian vector bundle $\widehat {\cN M}\to \widehat \Lambda$ whose fibers have dimension $\operatorname{dim}(M)-1$ such that: \begin{itemize} \item[--] the restriction of $\varphi$ to $\Lambda\setminus {\rm Sing}(X)$ embeds in $(\widehat \Lambda, \widehat \varphi)$ through a map $i$,
\item[--] the restriction of $\widehat {\cN M}$ to $i(\Lambda\setminus {\operatorname{Sing}}(X))$ is isomorphic to the normal bundle $\cN M|_{\Lambda\setminus {\operatorname{Sing}}(X)}$ through a map $I$, which is fibered over $i$ and which is an isometry along each fiber, \item[--] the fibered flow $\widehat P^*$ over $i(\Lambda\setminus {\operatorname{Sing}}(X))$ is conjugated by $I$ near the zero-section to the rescaled sectional Poincar\'e flow $P^*$: $$\widehat P^*=I\circ P^*\circ I^{-1}.$$ \end{itemize} \end{theoremalph}
The linear Poincar\'e flow introduced by Liao~\cite{Lia63} was compactified by Li, Gan and Wen \cite{lgw-extended}, who called it \emph{extended linear Poincar\'e flow}. Liao also introduced its rescaling~\cite{Lia89}. Gan and Yang~\cite{GY} considered the rescaled sectional Poincar\'e flow and proved some uniform properties.
\paragraph{Identifications structures for fibered flows.} Since $P^*$ is defined as a sectional flow over $\varphi$, the holonomy by the flow gives a projection between fibers of points close. The fibered flow thus comes with an additional structure, that we call \emph{$C^k$-identification}: let $U$ be an open set in $\Lambda \setminus {\rm Sing}(X)$, for any points $x,y\in U$ close enough, there is a $C^k$-diffeomorphism $\pi_{y,x}\colon \cN_y\to \cN_x$ which satisfies $\pi_{z,x}\circ \pi_{y,z}=\pi_{y,x}$. These identifications $\pi_{x,y}$ satisfy several properties (called \emph{compatibility with the flow}), such as some invariance. See Section~\ref{ss.identifications} for precise definitions.
\subsection{Generalization of Ma\~n\'e-Pujals-Sambarino's theorem for flows} Let us consider an invariant compact set $\Lambda$ for a $C^2$ flow $\varphi$ such that the linear Poincar\'e flow on $\Lambda\setminus {\rm Sing}(X)$
admits a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Under some assumptions, Theorem A' asserts that the tangent flow is then dominated. The existence of a dominated splitting $TM|_{\Lambda}=E\oplus F$ with $\operatorname{dim}(E)=1$ and $X\subset F$ is equivalent to the fact that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by the rescaled linear Poincar\'e flow (see Proposition \ref{p.mixed-domination}). It is thus reduced to prove that the one-dimensional bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ of the splitting of the two-dimensional bundle $\cN$ is uniformly contracted by the extended sectional Poincar\'e flow $P^*$.
For $C^2$ surface diffeomorphisms, the existence of a dominated splitting implies that the (one-dimensional) bundles are uniformly hyperbolic, under mild assumptions: this is one of the main results of Pujals and Sambarino \cite{PS1}. A result implying the hyperbolicity for one-dimensional endomorphisms was proved before by Ma\~n\'e~\cite{Man85}.
Our main technical theorem is to extend that technique to the case of local fibered flows with $2$-dimensional dominated fibers. As introduced in section~\ref{ss.compactification} we will assume the existence of identifications compatible with the flow, over an open set $U$. We will assume that, on a neighborhood of the complement $\Lambda\setminus U$, the fibered flow contracts the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$: this is a non-symmetric assumption on the splitting ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. See Section~\ref{s.fibered} for the precise definitions.
\begin{theoremalph}[Hyperbolicity of one-dimensional extremal bundle]\label{Thm:1Dcontracting} Consider a $C^2$ local fibered flow $(\cN,P)$ over a topological flow $(K,\varphi)$ on a compact metric space such that: \begin{enumerate} \item there is a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ have one-dimensional fibers, \item there exists a $C^2$-identification compatible with $(P_t)$ on an open set $U$, \item ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on an open set $V$ containing $K\setminus U$. \end{enumerate} Then, one of the following properties occurs: \begin{itemize}
\item[--] there exists a periodic orbit $\cO\subset K$ such that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}|_{\cO}$ is not uniformly contracted, \item[--] there exists a normally expanded irrational torus, \item[--] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted above $K$. \end{itemize} \end{theoremalph}
This theorem is based on the works initiated by Ma\~n\'e~\cite{Man85} and Pujals-Sambarino~\cite{PS1}, but we have to address additional difficulties:
\begin{itemize}
\item[--] The time of the dynamical system is not discrete. This produces some shear between pieces of orbits that remain close. In the non-singular case, Arroyo and Rodriguez-Hertz~\cite{ARH} already met that difficulty.
\item[--] Pujals-Sambarino's theorem does not hold in general for fibered systems. In our setting, the existence of an identification structure is essential.
\item[--] We adapt the notion of ``induced hyperbolic returns" from~\cite{CP}: this allows us to work with the induced dynamics on $U$ where the identifications are defined.
\item[--] In the setting of local flows, we have to replace some global arguments in \cite{PS1,ARH}. \item[--] The role of the two bundles ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is non-symmetric. In particular we do not have the topological hyperbolicity of ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. The construction of Markovian boxes (Section~\ref{s.markov}) then requires other ideas, which can be compared to arguments in~\cite{CPS}.
\end{itemize}
\subsection*{Structure of the paper} In Section~\ref{s.compactification}, we compactify the rescaled sectional Poincar\'e flow and prove Theorem~\ref{t.compactification}. Local fibered flows are studied systematically in Section~\ref{s.fibered}. The proof of Theorem~\ref{Thm:1Dcontracting} occupies Sections~\ref{s.topological-hyperbolicity} to~\ref{s.uniform}. The Theorem A' is obtained in Section~\ref{s.MPS-theo}. The proofs of global genericity results, including the Main Theorem, Corollary~\ref{c.main} and Theorem~\ref{Thm-domination} are completed in Section~\ref{s.generic}.
\paragraph{Acknowledgements.} We are grateful to S. Gan, R. Potrie, E. Pujals and L. Wen for discussions related to this work. We also thank the Universit\'e Paris 11, Soochow University and Pekin University for their hospitality.
\section{Compactification of the sectional flow}\label{s.compactification} In this section we do not restrict $\operatorname{dim} (M)$ to be equal to $3$ and we prove Theorem~\ref{t.compactification} (see Theorem~\ref{t.compactified2}). Let $X$ be a $C^k$ vector field, for some $k\geq 1$, and let $(\varphi_t)_{t\in\RR}$ be its associated flow. We also assume that $DX(\sigma)$ is invertible at each singularity. In particular ${\rm Sing}(X)$ is finite.
Several flows associated to $\varphi$ have already been used in~\cite{Lia89,lgw-extended,GY}. We describe here slightly different constructions and introduce the ``extended rescaled sectional Poincar\'e flow".
\subsection{Linear flows} We associate to $(\varphi_t)_{t\in\RR}$ several $C^{k-1}$ linear and projective flows.
\paragraph{The \emph{tangent flow} $(D\varphi_t)_{t\in \RR}$} is the flow on the tangent bundle $TM$ which fibers over $(\varphi_t)_{t\in\RR}$ and is obtained by differentiation.
\paragraph{The \emph{unit tangent flow} $(U\varphi_t)_{t\in\RR}$} is the flow on the unit tangent bundle $T^1M$ obtained from $(D\varphi_t)_{t\in\RR}$ by normalization:
$$U\varphi_t.v=\frac{D\varphi_t.v}{\|D\varphi_t.v\|} \text{ for } v\in T^1M.$$ Sometimes we prefer to work with the projective bundle $PTM$. The unit tangent flow induces a flow on this bundle, that we also denote by $(U\varphi_t)_{t\in\RR}$ for simplicity.
\paragraph{The \emph{normal flow} $(\cN \varphi_t)_{t\in\RR}$.} For each $(x,u)\in T^1M$ we denote $\cN T^1_xM$ as the vector subspace orthogonal to $\RR.u$ in $T_xM$. This defines a vector bundle $\cN T^1M$ over the compact manifold $T^1M$. We define the normal flow $(\cN \varphi_t)_{t\in\RR}$ on $\cN T^1M$ which fibers above the unit tangent flow as the orthogonal projection $\cN \varphi_t.v$ of $D\varphi_t.v$ on $(D\varphi_t.u)^\perp$.
\paragraph{The \emph{linear Poincar\'e flow} $(\psi_t)_{t\in\RR}$.} The normal bundle $\cN(M\setminus {\rm Sing}(X))$ over the space of non-singular points $x$
is the union of the vector subspaces $\cN_x=X(x)^\perp$. It can be identified with the restriction of the bundle $\cN T^1M$ over the space of pairs $(x,\frac{X(x)}{\|X(x)\|})$ for $x\in M\setminus {\rm Sing}(X)$. The linear Poincar\'e flow $(\psi_t)_{t\in\RR}$ is the restriction of $(\cN \varphi_t)$ to $\cN(M\setminus {\rm Sing}(X))$.
\subsection{Lifted and sectional flows}\label{ss.def-flow} \paragraph{The \emph{sectional Poincar\'e flow} $(P_t)_{t\in\RR}$.} There exists $r_0>0$ such that the ball $B(0,r_0)$ in each fiber of the bundle $\cN(M\setminus {\rm Sing}(X))$ projects on $M$ diffeomorphically by the exponential map. For each $x\in M\setminus {\rm Sing}(X)$, there exists $r_x\in (0,r_0)$ such that for any $t\in [0,1]$, the holonomy map of the flow induces a local diffeomorphism $P_t$ from $B(0_x,r_x)\subset \cN_x$ to a neighborhood of $0_{\varphi_t(x)}$ in $B(0_{\varphi_t(x)},r_0)\subset \cN_{\varphi_t(x)}$. This extends to a local flow $(P_t)_{t\in\RR}$ in a neighborhood of the $0$-section in $\cN(M\setminus {\rm Sing}(X))$, that is called the sectional Poincar\'e flow. It is tangent to $(\psi_t)_{t\in\RR}$ at the $0$-section of $\cN(M\setminus {\rm Sing}(X))$.
The normal bundle and the sectional Poincar\'e flow are $C^k$.
\paragraph{The \emph{lifted flow} $(\cL\varphi_t)_{t\in\RR}$.} Similarly, for each $t\in [0,1]$ and $x\in M$, the map $$\cL\varphi_t\colon y\mapsto \exp^{-1}_{\varphi_t(x)}\circ \varphi_t\circ \exp_x(y)$$ sends diffeomorphically a neighborhood of $0$ in $T_xM$ to a neighborhood of $0$ in $B(0,r_0)\subset T_{\varphi_t(x)}M$. This extends to a local flow $(\cL\varphi_t)_{t\in\RR}$ in a neighborhood of the $0$-section of $TM$, that is called the lifted flow. It is tangent to $(D\varphi_t)_{t\in\RR}$ at the $0$-section.
\paragraph{The \emph{fiber-preserving lifted flow} $(\cL_0\varphi_t)_{t\in\RR}$.} We can choose not to move the base point $x$ and obtain a fiber-preserving flow $(\cL_0\varphi_t)_{t\in\RR}$, defined by: $$\cL_0\varphi_t(y)=\exp^{-1}_{x}\circ \varphi_t\circ \exp_x(y).$$ Since the $0$-section is not preserved, this is no ta local flow and it will be considered only for short times.
\subsection{Rescaled flows} \paragraph{The \emph{rescaled sectional} and \emph{linear Poincar\'e flows} $(P^*_t)_{t\in\RR}$, $(\psi^*_t)_{t\in\RR}$.} Since $DX(\sigma)$ is invertible at each singularity, there exists $\beta>0$ such that at any $x\in M\setminus {\rm Sing}(X)$
$$r_x>\beta \|X(x)\|.$$ We can thus rescale the sectional Poincar\'e flow. We get for each $x\in M\setminus {\rm Sing}(X)$ and $t\in [0,1]$ a map $P^*_t$ which sends diffeomorphically $B(0,\beta)\subset \cN_x$ to $\cN_{\varphi_t(x)}$, defined by:
$$P^*_t(y)=\|X(\varphi_t(x))\|^{-1}.P_t(\|X(x)\|.y).$$ Again, this induces a local flow $(P_t^*)_{t\in\RR}$ in a neighborhood of the $0$-section in $\cN(M\setminus {\rm Sing}(X))$, that is called the \emph{rescaled sectional Poincar\'e flow}. Its tangent map at the $0$-section defines the rescaled linear Poincar\'e flow $(\psi_t^*)$.
\paragraph{The \emph{rescaled lifted flow} $(\cL\varphi_t^*)_{t\in\RR}$ and the \emph{rescaled tangent flow} $(D\varphi_t^*)_{t\in\RR}$.} The rescaled lifted flow is defined on a neighborhood of the $0$-section in $TM$ by
$$\cL\varphi^*_t(y)=\|X(\varphi_t(x))\|^{-1}.\cL\varphi_t(\|X(x)\|.y).$$ Its tangent map at the $0$-section defines the rescaled tangent flow $(D\varphi_t^*)_{t\in\RR}$.
\paragraph{The \emph{rescaled fiber-preserving lifted flow} $(\cL_0\varphi_t^*)_{t\in\RR}$} is defined similarly:
$$\cL_0\varphi^*_t(y)=\|X(x)\|^{-1}.\cL_0\varphi_t(\|X(x)\|.y).$$
\subsection{Blowup}\label{ss.blow-up}
We will consider a compactification of $M\setminus {\rm Sing}(X)$ and of the tangent bundle
$TM|_{M\setminus \text{Sing}(X)}$ which allows to extend the line field $\RR X$. This is given by the classical blowup.
\paragraph{The manifold $\widehat M$.} We can blowup $M$ at each singularity of $X$ and get a new compact manifold $\widehat M$ and a projection $p\colon \widehat M\to M$ which is one-to-one above $M\setminus {\rm Sing}(X)$. Each singularity $\sigma\in{\rm Sing}(X)$ has been replaced by the projectivization $PT_\sigma M$.
More precisely, at each (isolated) singularity $\sigma$, one can add $T^1_\sigma M$ to $M\setminus \{\sigma\}$ in order to build a manifold with boundary. Locally, it is defined by the chart $[0, \varepsilon)\times T^1_\sigma M\to (M\setminus \{\sigma\})\cup T^1_\sigma M$ given by: $$ (s,u)\mapsto \begin{cases} &\exp(s.u) \text{ if }s\neq 0,\\ &u \text{ if } s=0. \end{cases} $$ One then gets $\widehat M$ by identifying points $(0,u)$ and $(0,-u)$ on the boundary.
It is sometimes convenient to lift the dynamics on $M\setminus \{\sigma\}$ near $\sigma$ and work in the local coordinates $(-\varepsilon, \varepsilon)\times T^1_\sigma M$. These coordinates define a double covering of an open subset of the blowup $\widehat M$ and induce a chart from the quotient $(-\varepsilon, \varepsilon)\times T^1_\sigma M /_{(s,u)\sim(-s,-u)}$ to a neighborhood of $p^{-1}(\sigma)$ in $\widehat M$.
\paragraph{The \emph{extended flow $(\widehat \varphi_t)_{t\in \RR}$}.} The following result is proved in~\cite[section 3]{takens}. \begin{Proposition} The flow $(\varphi_t)_{t\in \RR}$ induces a $C^{k-1}$ flow $(\widehat \varphi_t)_{t\in\RR}$ on $\widehat M$ which is associated to a $C^{k-1}$ vector field $\widehat X$. For $\sigma\in {\rm Sing}(X)$, this flow preserves $PT_\sigma M$, and acts on it as the projectivization of $D\varphi_t(\sigma)$; the vector field $\widehat X$ coincides at $u\in PT_\sigma M$ with $DX(\sigma).u$ in $T_{u}(PT_\sigma M)\cong T_\sigma M/\RR.u$. \end{Proposition}
In particular the tangent bundle $T\widehat M$ extends $TM|_{M\setminus \text{Sing}(X)}$, the linear flow $D\widehat \varphi$ extend $D\varphi$ and the vector field $\widehat X$ extends $X$. Note that each eigendirection $u$ of $DX(\sigma)$ at a singularity $\sigma$ induces a singularity of $\widehat X$.
\begin{Remark} In~\cite{takens}, the vector field and the flow are extended locally on the space $(-\varepsilon, \varepsilon)\times T^1_\sigma M$, but the proof shows that these extensions are invariant under the map $(s,u)\mapsto (-s,-u)$, hence are also defined on $\widehat M$. \end{Remark}
\paragraph{The \emph{extended bundle} $\widehat {TM}$ and \emph{extended tangent flow} $(\widehat {D\varphi_t})_{t\in \RR}$.} One associates to $\widehat M$ the bundle $\widehat {TM}$ which is the pull-back of the bundle $\pi\colon TM\to M$ over $M$ by the map $p\colon \widehat M\to M$. It can be obtained as the restriction of the first projection $\widehat M\times TM\to \widehat M$ to the set of pairs $(x,v)$ such that $p(x)=\pi(v)$. It is naturally endowed with the pull back metric of $TM$ and it is trivial in a neighborhood of preimages $p^{-1}(z)$, $z\in M$.
The tangent flow $(D\varphi_t)_{t\in \RR}$ can be pull back to $\widehat {TM}$ as a $C^{k-1}$ linear flow $(\widehat {D\varphi_t})_{t\in \RR}$ that we call extended tangent flow.
\paragraph{The \emph{extended line field} $\widehat {\RR X}$.} The vector field $X$ induces a line field ${\RR X}$ on $M\setminus \text{Sing}(X)$ which admits an extension to $\widehat {TM}$. It is defined locally as follows.
\begin{Proposition}\label{p.extended-field} At each singularity $\sigma$, let $U$ be a small neighborhood in $M$
and $\widehat U=(U\setminus \{\sigma\})\cup PT_\sigma M$ be a neighborhood of $PT_\sigma M$. Then, {the map $x\mapsto \frac{\exp_\sigma^{-1}(x)}{\|X(x)\|}$ on $U\setminus{\sigma}$ extends to $\widehat U$ as a $C^{k-1}$-map which coincides at $u\in PT_\sigma M$ with $\frac{u}{\|DX(\sigma).u\|}$,} and
the map $x\mapsto \frac{\|X(x)\|}{d(x,\sigma)}$ on $U\setminus \{\sigma\}$ extends to $\widehat U$ as a $C^{k-1}$-map which coincides at $u\in PT_\sigma M$
with $\|DX(\sigma).u\|$.
In the local coordinates $(-\varepsilon, \varepsilon)\times T^1_\sigma M$ associated to $\sigma\in \text{Sing}(X)$, the lift of the vector field $X_1:=X/{\|X\|}$ on $M\setminus \text{Sing}(X)$ extends as a (non-vanishing) $C^{k-1}$ section
$\widehat X_1\colon (-\varepsilon, \varepsilon)\times T^1_\sigma M\to \widehat{TM}$. For each $x=(0,u)\in p^{-1}(\sigma)$, one has $$\widehat X_1(x)=\frac{DX(\sigma).u}{\|DX(\sigma).u\|}.$$ \end{Proposition}
A priori, the extension of $X_1$ is not preserved by the symmetry $(-s,-u)\sim(s,u)$ and is not defined in $\widehat{TM}$. However, the line field $\RR \widehat X_1$ is invariant by the local symmetry $(s,u)\mapsto (-s,-u)$, hence induces a $C^{k-1}$-line field $\widehat {\RR X}$ on $\widehat {TM}$ invariant by $(\widehat {D\varphi_t})_{t\in \RR}$.
\begin{proof} In a local chart near a singularity, we have $$X(x)=\int_{0}^1DX(r.x).x\;dr.$$ Working in the local coordinates $(s,u)\in(-\varepsilon,\varepsilon)\times T^1_\sigma M$, we get $$X(x)=\int_{0}^1DX(rs.u)\;dr\;. \;s.u.$$ This allows us to define a $C^{k-1}$ section in a neighborhood of $p^{-1}(\sigma)$ defined by $$\bar X\colon(s,u)\mapsto \int_{0}^1DX(rs.u)\;dr. u.$$ This section is $C^{k-1}$, is parallel to $X$ (when $s\neq 0$)
and does not vanish. Consequently $\frac{\bar X}{\|\bar X\|}$ is $C^{k-1}$ and extends the vector field
$X_1:=X/{\|X\|}$ as required.
Since $\bar X$ extends as $DX(\sigma).u$ at $u\in PT_\sigma M$, then $X_1$ extends as $DX(\sigma).u/ \|DX(\sigma).u\|$.
Note also that for $s\neq 0$,
$\|\bar X(s.u)\|$ coincides with $\|X(x)\|/d(x,\sigma)$ where $su=x$ is a point of $M\setminus \{\sigma\}$ close to $\sigma$. Since $\bar X$ is $C^{k-1}$ and does not vanish,
$(s,u)\mapsto \|\bar X(s.u)\|$ extends as a $C^{k-1}$-function in the local coordinates $(-\varepsilon,\varepsilon)\times T^1_\sigma M$. It is invariant by the symmetry
$(s,u)\sim (-s,-u)$, hence the maps {$x\to \frac{\exp_\sigma^{-1}(x)}{\|X(x)\|}$ and} $x\mapsto \|X(x)\|/d(x,\sigma)$ for $x\in M\setminus \{\sigma\}$ close to $\sigma$ extends as a $C^{k-1}$ on a neighborhood of $PT_\sigma M$ in $\widehat M$. \end{proof}
\paragraph{The \emph{extended normal bundle} $\widehat {\cN M}$ and \emph{extended linear Poincar\'e flow} $(\widehat \psi_t)_{t\in \RR}$.} The orthogonal spaces to the lines of $\widehat {\RR X}$ define a $C^{k-1}$ linear bundle $\widehat {\cN M}$. Since $\widehat {\RR X}$ is preserved by the extended tangent flow, the projection of $(\widehat {D\varphi_t})_{t\in \RR}$ defines the $C^{k-1}$ extended linear Poincar\'e flow $(\widehat \psi_t)_{t\in \RR}$ on $\widehat {\cN M}$.
\paragraph{Alternative construction.} One can also embed $M\setminus {\rm Sing}(X)$ in $PTM$ by the map
$x\mapsto (x,\frac{X(x)}{\|X(x)\|})$, and take the closure. This set is invariant by the unit flow. This compactification depends on the vector field $X$ and not only on the finite set ${\rm Sing}(X)$. It is sometimes called \emph{Nash blowup}, see~\cite{No}.
When $DX(\sigma)$ is invertible at each singularity, Proposition~\ref{p.extended-field} shows that the closure is homeomorphic to $\widehat M$. The restriction of the normal bundle $\cN T^1M$ to the closure of $M\setminus {\rm Sing}(X)$ in $PTM$ gives the normal bundle $\widehat {\cN M}$. This is the approach followed in~\cite{lgw-extended} in order to compactify of the linear Poincar\'e flow.
\subsection{Compactifications of non-linear local fibered flows} The rescaled flows introduced above extend to the bundles $\widehat {TM}$ or $\widehat {\cN M}$. In the following, one will assume that $DX(\sigma)$ is invertible at each singularity and (without loss of generality) that the metric on $M$ is flat near each singularity of $X$.
{Related to the ``local $C^k$-fibered flow''} in Definition~\ref{d.local-flow}, we will also use the following notion.
\begin{Definition} Consider a continuous Riemannian vector bundle {$\cN$ over a compact metric space $K$.} A map $H\colon \cN\to \cN$ is \emph{$C^k$-fibered}, if it fibers over a homeomorphism $h$ of $K$ and if each induced map $H_x\colon \cN_x\to \cN_{h(x)}$ is $C^k$ and depends continuously on $x$ for the $C^k$-topology. \end{Definition} \paragraph{The extended lifted flow.} The following proposition compactifies the rescaled lifted flow $(\cL\varphi^*_t)_{t\in \RR}$ (and the rescaled tangent flow $(D\varphi_t^*)_{t\in \RR}$) as local fibered flows on $\widehat {TM}$. \begin{Proposition}\label{p.compactify-lifted} The rescaled lifted flow $(\cL\varphi_t^*)_{t\in \RR}$ extends as a local $C^k$-fibered flow on $\widehat{TM}$. The rescaled tangent flow $(D\varphi_t^*)_{t\in \RR}$ extends as a linear flow.
Moreover, there exists $\beta>0$ such that, for each $t\in [0,1]$, $\sigma\in {\rm Sing}(X)$ and $x=u\in p^{-1}(\sigma)$, on the ball $B(0_x,\beta)\subset \widehat {T_{x}M}$ the map $\cL\varphi_t^*$ writes as: \begin{equation}\label{e.extend}
y\mapsto \frac{\|DX(\sigma).u\|}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}D\varphi_t(\sigma).y. \end{equation} \end{Proposition}
Before proving the proposition, one first shows: \begin{Lemma}\label{l.extend}
The function $(x,t)\mapsto \frac{\|X(x)\|}{\|X(\varphi_t(x))\|}$ on $(M\setminus \text{Sing}(X))\times \RR$ extends as a positive $C^{k-1}$ function $\widehat M\times \RR\to \RR_+$
which is equal to $\frac{\|DX(\sigma).u\|}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}$ when $x=u\in p^{-1}(\sigma)$.
The map from $TM|_{M\setminus \text{Sing}(X)}$
into itself which sends $y\in T_xM$ to $\|X(x)\|.y$, extends as a continuous map of $\widehat {TM}$ which vanishes on the set $p^{-1}(\text{Sing}(X))$ and is $C^{\infty}$-fibered. \end{Lemma} \begin{proof} From Proposition~\ref{p.extended-field}, in the local chart of $0=\sigma\in{\rm Sing}(X)$, the map
$x\mapsto \frac{\|X(x)\|}{\|x\|}$
extends as a $C^{k-1}$ function which coincides at $u\in PT_\sigma M$ with $\|DX(\sigma).u\|$
and does not vanish. We also extend the map $(x,t)\mapsto \|\varphi_t(x)\|/\|x\|$ as a $C^{k-1}$ map on $\widehat M\times \RR$ which coincides with $\|D\varphi_t(\sigma).u\|$ when $x=u$. The proof is similar to the proof of Proposition~\ref{p.extended-field}. This implies the first part of the lemma.
For the second part, one considers the product of the $C^{k-1}$ function $x\mapsto \frac{\|X(x)\|}{\|x\|}$
with the $C^\infty$-fibered map which extends $y\mapsto \|x\|.y$. \end{proof}
\begin{proof}[Proof of Proposition~\ref{p.compactify-lifted}] In local coordinates, the local flow $(\cL \varphi_t^*)_{t\in \RR}$ in $T_xM$ acts like:
$$\cL \varphi_t^*(y)=\|X(\varphi_t(x))\|^{-1}\left(\varphi_t(x+\|X(x)\|.y)-\varphi_t(x)\right)$$ \begin{equation}\label{e.tangent-extension}
=\frac{\|X(x)\|}{\|X(\varphi_t(x))\|}
\int_0^1 D\varphi_t\left(x+r\|X(x)\|.y\right).y\;dr. \end{equation} By Lemma~\ref{l.extend},
$\frac{\|X(x)\|}{\|X(\varphi_t(x))\|}$ and $\|X(x)\|.y$ extend as a continuous map and as a $C^\infty$-fibered homeomorphism respectively; hence $(\cL \varphi_t^*)_{t\in \RR}$ extends continuously at $x=(0,u)\in p^{-1}(\sigma)$ as in~\eqref{e.extend}. The extended flow is $C^k$ along each fiber. Moreover, \eqref{e.tangent-extension} implies that it is $C^{k-1}$-fibered. For $x\in M\setminus {\rm Sing}(X)$, the $k^{th}$ derivative along the fibers is equal to
$$\frac{\|X(x)\|^k}{\|X(\varphi_t(x))\|}D^k\varphi_t(x+\|X(x)\|.y).$$
This converges to $\frac{\|DX(\sigma).u\|}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}D\varphi_t(\sigma)$ when $k=1$ and to $0$ for $k>1$. The extended rescaled lifted flow is thus a local $C^{k}$-fibered flow defined on a uniform neighborhood of the $0$-section.
From Lemma~\ref{l.extend}, the rescaled linear flow $(D\varphi_t^*)_{t\in \RR}$ extends to $\widehat {TM}$ and coincides at $x=u\in \pi^{-1}(\sigma)$ with the map defined by~\eqref{e.extend}. From~\eqref{e.tangent-extension}, it coincides also with the flow tangent to $(\cL \varphi_t^*)_{t\in \RR}$ at the $0$-section.
In order to define $\cL \varphi_t^*$ on the whole bundle $\widehat {TM}$ (and get a fibered flow as in Definition~\ref{d.local-flow}), one first glues each diffeomorphism $\cL \varphi_t^*$ for $t\in [0,1]$ on a small uniform neighborhood of $0$ with the linear map $D\varphi_t^*$ outside a neighborhood of $0$ in such a way that $\cL \varphi_0^*=\operatorname{Id}$. One then defines $\cL \varphi_t^*$ for other times by: $$\cL \varphi_{-t}^*=\left( \cL \varphi_t^*\right)^{-1} \text{ for } t>0,$$ $$\cL \varphi_{n+t}^*=\cL \varphi_{t}^*\circ \cL \varphi_{1}^* \circ \dots \circ \cL \varphi_{1}^* \text{ ($n+1$ terms), for } t\in [0,1] \text{ and } n\in \NN.$$ \end{proof}
In a same way we compactify the rescaled fiber-preserving lifted flow $(\cL_0\varphi_t^*)$.
\begin{Proposition} The rescaled fiber-preserving lifted flow $(\cL_0\varphi_t^*)_{t\in \RR}$ extends as a local $C^{k}$-fibered flow on $\widehat {TM}$. More precisely, for each $x\in \widehat{TM}$, it defines a $C^k$-map $(t,y)\mapsto \cL_0\varphi_t^*(y)$ from $\RR\times\widehat {T_{x}M}$ to $\widehat{T_{x} M}$ which depends continuously on $x$ for the $C^k$-topology.
Moreover there exists $\beta>0$ such that, for each $t\in [0,1]$, $\sigma\in {\rm Sing}(X)$ and $x=u\in p^{-1}(\sigma)$, on the ball $B(0,\beta)\subset \widehat{T_{x}M}$ the map $\cL_0\varphi_t^*$ has the form: \begin{equation}\label{e.ext-L0}
y\mapsto D\varphi_t(\sigma).y +\frac{D\varphi_t(\sigma).u-u}{\|DX(\sigma).u\|}. \end{equation} \end{Proposition} \begin{proof} In the local coordinates the flow acts on $B(0,\beta)\subset T_xM$ as: \begin{align}\label{e.L0}
\cL_0\varphi_t^*(y)&=\|X(x)\|^{-1}\left(\varphi_t(x+\|X(x)\|.y)-x \right)\\
&= \int_0^1D\varphi_t(x+r\|X(x)\|.y).y\;dr \; +\;\frac{\varphi_t(x)-x}{\|X(x)\|}.\label{e.extend-L0} \end{align} Arguing as in Proposition~\ref{p.extended-field} and Lemma~\ref{l.extend},
{for each $t$, the map $x\mapsto \frac{\varphi_t(x)-x}{\|X(x)\|}$} with $x\neq \sigma$ close to $\sigma$ extends for the $C^0$-topology
by $\|DX(\sigma).u\|^{-1} (D\varphi_t(\sigma).u-u)$ at points $(0,u)\in PT_\sigma M$. Since $X$ is $C^k$, these maps are all $C^k$ and depends continuously with $x$ for the $C^k$-topology.
As before, the integral
$\int_0^1D\varphi_t(x+r\|X(x)\|.y).y\;dr$ extends as $D\varphi_t(\sigma).y$
at $p^{-1}(\sigma)$. For each $x$, the map $(t,y)\mapsto\int_0^1D\varphi_t(x+r\|X(x)\|.y).y\;dr$ is $C^k$ (this is checked on the formulas, considering separately the cases $x\in M\setminus {\rm Sing}(X)$ and $x\in p^{-1}( {\rm Sing}(X))$. Since $X$ is $C^k$, this map depends continuously on $x$ for the $C^{k-1}$-topology. The $k^\text{th}$ derivative with respect to $y$ is continuous in $x$, for the same reason as in the proof of Proposition~\ref{p.compactify-lifted}. For $x\in M\setminus {\rm Sing}(X)$, the derivative with respect to $t$
of the map above is $(t,y)\mapsto\int_0^1DX(\varphi_t(x+r\|X(x)\|.y)).y\;dr$ and it converges as $x\to \sigma$ towards $(x,y)\mapsto DX(\varphi_t(\sigma)).y$ for the $C^{k-1}$-topology (again using that $X$ is $C^k$). Hence the first term of~\eqref{e.extend-L0} is a $C^k$-function of $(t,y)$ which depends continuously on $x$ for the $C^k$-topology.
As in Proposition~\ref{p.compactify-lifted}, this proves that $(\cL_0\varphi_t^*)_{t\in \RR}$ extends as a local $C^{k}$-fibered flow having the announced properties. \end{proof}
\paragraph{The extended rescaled sectional Poincar\'e flow.} We also obtain a compactification of the rescaled sectional Poincar\'e flow $(P_t^*)_{t\in \RR}$ (and of the rescaled linear Poincar\'e flow $(\psi^*_t)_{t\in \RR}$). This implies Theorem~\ref{t.compactification}. \begin{Theorem}\label{t.compactified2} If $X$ is $C^k$, $k\geq 1$, and if $DX(\sigma)$ is invertible at each singularity, the rescaled sectional Poincar\'e flow $(P_t^*)_{t\in \RR}$ extends as a $C^k$-fibered flow on a neighborhood of the $0$-section in $\widehat {\cN M}$. Moreover, there exists $\beta'>0$ such that for each $t\in [0,1]$, $\sigma\in {\rm Sing}(X)$ and $x=u\in p^{-1}(\sigma)$, on the ball $B(0,\beta')\subset {\widehat {{\cal N}_{x} M}}$ the map $P^*_t$ writes as:
$$y\mapsto \;\frac{\|DX(\sigma).u\|}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}D\varphi_{t+\tau}(\sigma).y\;
+\;\frac{D\varphi_{t+\tau}(\sigma).u-D\varphi_{t}(\sigma).u}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|},$$ where $\tau$ is a $C^{k}$ function of $(t,y)\in [0,1]\times B(0,\beta')$ which depends continuously on $x$ for the $C^k$-topology, such that $\tau(x,t,0)=0$.
As a consequence, the rescaled linear Poincar\'e flow $(\psi_t^*)$ extends as a continuous linear flow: for each $x\in\widehat M$, each $t\in\RR$ and each $v\in\widehat {\cN_{x}M}$ the image $\psi_t^*.v$ coincides with the normal projection of $D\varphi_t^*.v\in {\widehat {T_{\widehat \varphi_t(x)} M}}$ on $\widehat{\cN_{\widehat \varphi_t(x)} M}$. \end{Theorem} \begin{proof} For each singularity $\sigma$, we work with the local coordinates $(-\varepsilon, \varepsilon)\times T^1_\sigma M$ and prove that the rescaled sectional Poincar\'e flow extends as a local $C^k$-fibered flow. Since the rescaled sectional Poincar\'e flow is invariant by the symmetry $(s,u)\mapsto (-s,-u)$, this implies the result above a neighborhood of $p^{-1}(\sigma)$ in $\widehat M$, and hence above the whole manifold $\widehat M$.
The image of $y\in B(0,\beta)\subset \cN_x$ by the rescaled sectional Poincar\'e flow is the unique point of the curve in $T_{\varphi_t(x)}M$ $$\tau\mapsto \cL_0\varphi_\tau^*\circ \cL\varphi_t^*(y)$$ which belongs to $\cN_{\varphi_t(x)}$. In the local coordinates $x=(s,u)\in (-\varepsilon, \varepsilon)\times T^1_\sigma M$ near $\sigma$, it corresponds to the unique value $\tau=\tau(x,t,y)$ such that the following function vanishes: $$\Theta(x,t,y,\tau)=\left< \cL_0\varphi_\tau^*\circ \cL\varphi_t^*(y)\;,\;
\frac{X(\varphi_t(x))}{\|X(\varphi_t(x))\|}\right>.$$
From the previous propositions the map $(y,\tau)\mapsto \Theta(x,t,y,\tau)$ is $C^k$ and depends continuously on $(x,t)$ for the $C^{k}$-topology and is defined at any $x\in\widehat M$. When $x=u\in p^{-1}(\sigma)$, the first part of the scalar product in $\Theta$ is given by the two propositions above. According to Proposition~\ref{p.extended-field} the second part $\widehat X_1(x)$ becomes equal to
$$\frac{DX(\sigma)\circ D\varphi_t(\sigma).u}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}.$$
\begin{Claim}
For any $t\in[0,1]$ and for $x\in p^{-1}(\sigma)$, the derivative $\frac{\partial \Theta}{\partial \tau}|_{\tau=0}(x,t,0)$ is non-zero. \end{Claim} \begin{proof} Indeed, by the previous proposition this is equivalent to
$$\left<\frac{\partial}{\partial\tau}|_{\tau=0}(D\varphi_\tau(\sigma).v)\;,\;
\frac{DX(\sigma)\circ D\varphi_t(\sigma).u}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}\right>\neq 0,$$
where $v=\widehat \varphi_t(u)={D\varphi_t(\sigma).u}/{\|D\varphi_t(\sigma).u\|}$. Thus the condition becomes
$$\frac{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|^2}{\|DX(\sigma)\circ D\varphi_t(\sigma).u\|}\neq 0,$$ which is satisfied. \end{proof}
By the implicit function theorem and compactness, there exists $\beta'>0$ and, for each $x$, a $C^{k}$ map $(y,t)\mapsto\tau(x,t,y)$ which depends continuously on $x$ for the $C^k$-topology such that $$\Theta(x,t,y,\tau(x,t,y))=0,$$ for each $x\in\widehat M$ close to $PT_\sigma M$, each $t\in[0,1]$ and each $y\in B(0,\beta')$. The rescaled sectional Poincar\'e flow is thus locally given by the composition: \begin{equation}\label{e.rescaledpoincare} (x,t,y)\mapsto \cL_0\varphi_{\tau(x,t,y)}^*\circ \cL\varphi_t^*(y), \end{equation} which extends as a $C^{k}$-fibered flow. The formula at $x=u\in p^{-1}(\sigma)$ is obtained from the expressions in the previous propositions.
We now compute the rescaled linear Poincar\'e flow as the tangent map to the rescaled sectional Poincar\'e flow along the $0$-section. We fix $x\in\widehat M$ and its image $x'=\widehat \varphi_t(x)$. We take $y\in \cN_{x}$ and its image $y'\in \widehat {T_{\widehat \varphi_t(x)}M}$ by $\cL\varphi^*_t$. Working in the local coordinates $(-Â\varepsilon,\varepsilon)\times T^1_\sigma M$ and using Proposition~\ref{p.extended-field} and formulas~\eqref{e.ext-L0} and~\eqref{e.L0} we get
$$\frac{\partial}{\partial \tau}|_{\tau=0}\cL_0\varphi^*_\tau(0_{x'})=\widehat X_1(x').$$ {Note that }$\tau(x',t,0)=0$, we have: \begin{equation*} \begin{split} DP^*_t(0)&=
\left( \frac{\partial}{\partial y'|}_{y'=0} \cL_0\varphi^*_0(y') \right)\circ\left( \frac{\partial}{\partial y}|_{y=0}\cL\varphi_t^*(y)\right)+
\frac{\partial\tau}{\partial y}|_{y=0}.\left( \frac{\partial}{\partial \tau}|_{\tau=0}\cL_0\varphi^*_\tau(0_{x'})\right)\\
&= D\varphi_t^*(x)+\frac{\partial\tau}{\partial y}|_{y=0}.\widehat X_1(x'). \end{split} \end{equation*}
On the other hand from~\eqref{e.rescaledpoincare} and the definitions of $\Theta,\tau$ we have $$\left< DP^*_t(0),\widehat X_1(x')\right>=0,$$ hence $DP^*_t(0)$ coincides with the normal projection of $D\varphi_t^*(x)$ on the linear sub-space of $\widehat {T_{x'}M}$ orthogonal to $\widehat X_1(x')$, which is $\widehat {\cN_{x'}M}$. \end{proof} { \begin{proof}[Proof of Theorem~\ref{t.compactification}] Let $\Lambda$ be the compact invariant set in the assumption. Recall the blowup $\widehat M$ and the projection $p:~\widehat M\to M$. We define $\widehat\Lambda$ as the closure of $p^{-1}(\Lambda\setminus{\rm Sing}(X)))$ in $\widehat M$. Since the flow $\varphi$ on $M$ induces a flow $\widehat\varphi$ on $\widehat M$, we know that the restriction of $\varphi$ to $\Lambda\setminus{\rm Sing}(X)$ embeds in $({\widehat\Lambda},{\widehat\varphi})$ through the map $i=p^{-1}$.
The metric on the bundle $\widehat{TM}$ over $\widehat M$ is the pull back metric of $TM$. In other words, if $p({\widehat x})=x$, then $\widehat{T_xM}$ is isometric to $T_x M$ through a map $I$. By Proposition~\ref{p.extended-field}, $\RR.{\widehat X_1}$ is a well defined extension of $\RR.X$. Thus, the restriction of $\widehat{\cN M}$ to $i(\Lambda\setminus{\rm Sing}(X))$ is isomorphic to the normal bundle $\cN M|_{\Lambda\setminus{\rm Sing}(\Lambda)}$ through the map $I$.
Finally Theorem~\ref{t.compactified2} shows that the fibered flow ${\widehat P}^*$ is conjugated by $I$ near the zero-section to the rescaled sectional Poincar\'e flow $P^*$. \end{proof} }
\subsection{Linear Poincar\'e flow: robustness of the dominated splitting} As we mentioned at the end of Section~\ref{ss.blow-up}, the linear Poincar\'e flow has been compactified in~\cite{lgw-extended} as the normal flow acting on the bundle $\cN T^1M$ over $T^1M$. This allows in some cases to prove the robustness of the dominated splitting of the linear Poincar\'e flow.
\begin{Proposition}\label{p.robustness-DS} Let us consider {$X\in{\cal X}^1(M)$, where $\operatorname{dim} M=3$,} and an invariant compact set $\Lambda$ such that any singularity $\sigma\in \Lambda$ has real eigenvalues $\lambda_1<\lambda_2<0<\lambda_3$ and $W^{ss}(\sigma)\cap \Lambda=\{\sigma\}$.
If the linear Poincar\'e flow on $\Lambda\setminus {\operatorname{Sing}}(X)$ has a dominated splitting, then there exist neighborhoods $\cU$ of $X$ and $U$ of $\Lambda$ such that for any $Y\in \cU$ and any invariant compact set $\Lambda'\subset U$ for $Y$, the linear Poincar\'e flow of $Y$ on $\Lambda'\setminus {\operatorname{Sing}}(Y)$ has a dominated splitting. \end{Proposition} \begin{proof}
Let us consider the (continuous) unit tangent flow $U\varphi^X$ associated to $X$ and acting on $T^1M$. The set $M\setminus {\operatorname{Sing}}(X)$ embeds by the map $i_X\colon x\mapsto (x,X(x)/\|X(x)\|)$. We denote by $S(E)$ the set of unit vectors of a vector space $E$. Thus $S(T_xM)=T^1_xM$. We introduce the set $$\Delta_X:=i_X(\Lambda\setminus {\operatorname{Sing}}(X)) \cup \bigcup_{\sigma\in {\operatorname{Sing}}(X)\cap \Lambda} S(E^{cu}(\sigma)).$$ It is compact (by our assumptions on $X$ at the singularities in $\Lambda$) and invariant by $U\varphi$.
\begin{Lemma}\label{l.robust} Under the assumptions of the proposition, the closure of $i_Y(\Lambda'\setminus {\operatorname{Sing}}(Y))$ in $T^1M$ is contained in a small neighborhood of $\Delta_X$. \end{Lemma} \begin{proof} Let us define $B(\Lambda)$ as the set of points $(x,u)\in T^1M$ with $x\in \Lambda$ such that there exists sequences $Y_n\to X$ in $\cX^1(M)$ and $x_n\in M\setminus {\operatorname{Sing}}(X_n)$ such that \begin{itemize}
\item[--] $(x_n,Y_n(x_n)/\|Y_n(x_n)\|)\to (x,u)$, \item[--] the orbit of $x_n$ for the flow of $Y_n$ is contained in the $1/n$-neighborhood of $\Lambda$. \end{itemize} For each $\sigma\in {\operatorname{Sing}}(X)\cap \Lambda$, we have to show that $$B(\Lambda)\cap T^1_\sigma M\subset S(E^{cu}(\sigma)).$$ Now the property we want is exactly the same as~\cite[Lemma 4.4]{lgw-extended}. The definition of $B(\Lambda)$ and the setting differ but we can apply the same argument: {the assumptions of \cite[Lemma 4.4]{lgw-extended} can be replaced by that any $\sigma\in \Lambda$ has real eigenvalues $\lambda_1<\lambda_2<0<\lambda_3$ and $W^{ss}(\sigma)\cap \Lambda=\{\sigma\}$.}
At each $\sigma\in \Lambda\cap \operatorname{Sing}(X)$, one considers a chart and one fixes a point $z$ in $W^{ss}(\sigma)\setminus \{\sigma\}$. For each $Y$ close to $X$, each point $y$ close to the continuation $\sigma_Y$
and whose orbit $(\varphi^Y_t(y))$ lies in a neighborhood of $\Lambda$, let us assume by contradiction that $(y-\sigma_Y)/\|y-\sigma_Y\|$ is not close to the center-unstable plane of the singularity $\sigma_Y$ of $Y$. After some backward iteration $\varphi_t(y)$ is still close to $\sigma_Y$
and $(\varphi_t(y)-\sigma_Y)/\|\varphi_t(y)-\sigma_Y\|$ gets close to the strong stable direction of $\sigma_Y$. Iterating further in the past, one gets $\varphi_{s}(y)$ close to $z$: the distance $d(\varphi_s(y),z)$ can be chosen arbitrarily small if $Y$ is close enough to $X$ and if $y$ is close enough to $\sigma_Y$. Taking the limit, one concludes that $z$ belongs to $\Lambda$: this is in contradiction with $W^{ss}(\sigma)\cap \Lambda=\{\sigma\}$. \end{proof}
By definition (see Section~\ref{ss.def-flow}), if the linear Poincar\'e flow for $X$ is dominated on $\Lambda\setminus {\operatorname{Sing}}(X)$, then the normal flow $\cN\varphi^X$ for $X$ is dominated on $i_X(\Lambda\setminus {\operatorname{Sing}}(X))$. Note that $\cN\varphi^X$ is also dominated on $S(E^{cu}(\sigma))$ (the orthogonal projection of the splitting $E^{ss}\oplus E^{cu}\subset T_\sigma M$ on the fibers $\cN T^1_zM$ for $z\in S(E^{cu}(\sigma))$ is invariant). Consequently the dynamics of the linear cocycle $\cN\varphi^X$ above the compact set $\Delta_X\subset T^1M$ is also dominated.
By Lemma~\ref{l.robust} for $Y$ $C^1$-close to $X$ and $\Lambda'$ in a neighborhood of $U$, the set $i_Y(\Lambda'\setminus {\operatorname{Sing}}(Y))$ is contained in a neighborhood of $\Delta_X$; moreover the linear flow $\cN\varphi^Y$ associated to $Y$ is close to $\cN\varphi^X$. This shows that the dynamics of $\cN\varphi^Y$ above $i_Y(\Lambda'\setminus {\operatorname{Sing}}(Y))$ is dominated. By definition, this implies that the linear Poincar\'e flow associated to $Y$ above $\Lambda'\setminus {\operatorname{Sing}}(Y)$ is dominated. \end{proof}
\section{Fibered dynamics with a dominated splitting}\label{s.fibered}
In this section we introduce an identification structure (for local fibered dynamics) which formalizes the properties satisfied by the {rescaled} sectional Poincar\'e flow. We then discuss some consequences of the existence of a dominated splitting inside the fibers.
{\subsection{Dominated splitting for a local fibered flow}}
{ We consider local fibered flows as in Definition~\ref{d.local-flow}. The following notations will be used.}
\noindent {\bf Notations.} -- One sometimes denotes a point $u\in \cN_x$ as $u_x$ to emphasize the base point $x$. \hspace{-1cm}
\noindent -- The length of a $C^1$ curve $\gamma\subset \cN_x$ (with respect to the metric of $\cN_x$)
is denoted by $|\gamma|$.
\noindent -- A ball centered at $u$ and with radius $r$ inside a fiber $\cN_x$ is denoted by $B(u,r)$.
\noindent -- For $x\in K$, $t\in \RR$ and $u\in \cN_x$, one denotes by $DP_t(u)$ the derivative of $P_t$ at $u$ along $\cN_x$. In particular $(DP_t(0_x))_{t\in \RR, x\in K}$ defines a linear flow over the $0$-section of $\cN$.
\begin{Definition}\label{d.dominated} The local flow $(P_t)$ admits a \emph{dominated splitting} $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ if ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ are sub-bundles of $\cN$ that are invariant by the linear flow $(DP_t(0))$ and if there exists $\tau_0> 0$ such that for any $x\in K$, for any unit $u\in {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ and $v\in {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$ and for any $t\geq \tau_0$ we have:
$$\|DP_t(0_x).u\|\leq \frac 1 2 \|DP_t(0_x).v\|.$$ Moreover we say that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is \emph{$2$-dominated} if there exists $\tau_0>0$ such that for any $x\in K$, any unit vectors $u\in {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ and $v\in {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$, and for any $t\geq \tau_0$ we have:
$$\max(\|DP_t(0_x).u\|,\|DP_t(0_x).u\|^2)\leq \frac 1 2 \|DP_t(0_x).v\|.$$ \end{Definition}
When there exists a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $V\subset K$ is an open subset, one can prove that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by considering the induced dynamics on $K\setminus V$ and checking that the following property is satisfied. \begin{Definition} The bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is \emph{uniformly contracted on the set $V$} if there exists $t_0>0$ such that for any $x\in K$ satisfying $\varphi_t(x)\in V$ for any $0\leq t\leq t_0$ we have:
$$\|DP_{t_0}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)}\|\leq \frac 1 2.$$ We say that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is \emph{uniformly contracted} if it is uniformly contracted on $K$. \end{Definition}
\subsection{Identifications}\label{ss.identifications} \subsubsection{Definition of identifications} We introduce more structures on the fibered dynamics.
\begin{Definition} A \emph{$C^k$-identification} $\pi$ on an open set $U\subset K$ is defined by two constants $\beta_0,r_0>0$ and a continuous family of $C^k$-diffeomorphisms $\pi_{y,x}\colon \cN_y\to \cN_x$ associated to pairs of points $x,y \in K$ with $x\in U$ and $d(x,y)< r_0$, such that:
For any $\{x,y,z\}$ of diameter smaller than $r_0$ with $x,z\in U$ and any $u\in B(0,\beta_0)\subset \cN_y$, $$\pi_{z,x}\circ \pi_{y,z}(u)=\pi_{y,x}(u).$$ \end{Definition} In particular $\pi_{x,x}$ coincides with the identity on $B(0,\beta_0)$.
\noindent {\bf Notations.} -- We will sometimes denote $\pi_{y,x}$ by $\pi_x$. Also the projection $\pi_{y,x}(0)=\pi_x(0_y)$ of $0\in \cN_y$ on $\cN_x$ will be denoted by $\pi_x(y)$.
\noindent -- We will denote by ${\rm Lip}$ be the set of orientation-preserving bi-Lipschitz homeomorphisms $\theta$ of $\RR$ (and by ${\rm Lip}_{1+\rho}$ the set of those whose Lipschitz constant is smaller than $1+\rho$).
\begin{Definition}\label{d.compatible} The identification $\pi$ on $U$ is \emph{compatible} with the flow $(P_t)$ if: \begin{enumerate}
\item \emph{Transverse boundary.} For any segment of orbit $\{\varphi_{s}(x), s\in [-t,t]\}$, $t>0$, contained in $K\setminus U$ we have $x\in K\setminus \overline U$.
\item \emph{No small period.} For any $\varkappa>0$, there is $r>0$ such that for any $x\in \overline{U}$
and $t\in [-2,2]$ with $d(x,\varphi_t(x))<r$, then we have $|t|< \varkappa$.
\item \emph{Local injectivity.} For any $\delta>0$, there exists $\beta>0$ such that for any $x,y\in U$:\\
if $d(x,y)<r_0$ and $\|\pi_{x}(y)\|\leq \beta$, then $d(\varphi_t(y),x)\leq \delta$ for some $t\in [-1/4,1/4]$.
\item \emph{Local invariance.} For any $x,y\in U$ and $t\in [-2,2]$ such that $y$ and $\varphi_t(y)$ are $r_0$-close to $x$, and for any $u\in B(0,\beta_0)\subset \cN_{y}$, we have $$\pi_{x}\circ P_t(u)=\pi_{x}(u).$$
\item \emph{Global invariance.} For any $\delta,\rho>0$, there exists $r,\beta>0$ such that:
For any $y,y'\in K$ with $y\in U$ and $d(y,y')<r$, for any $u\in \cN_y$, $u'\in \cN_{y'}$ with $\pi_y(u')=u$, and any intervals $I,I'\subset \RR$ containing $0$ and satisfying
$$\|P_s(u)\|<\beta\text{ and } \|P_{s'}(u')\|<\beta\text{ for any } s\in I \text{ and any } s'\in I',$$ there is $\theta\in {\rm Lip}_{1+\rho}$ such that $\theta(0)=0$ and
$d(\varphi_s(y),\varphi_{\theta(s)}(y'))<\delta$ for any $s\in I\cap \theta^{-1}(I')$. Moreover, for any $v\in \cN_y$ such that
$ \|P_s(v)\|<\beta$ for each $s\in I\cap \theta^{-1}(I')$ then \begin{itemize} \item[--] $v'=\pi_{y'}(v)$ satisfies
$\|P_{\theta(s)}(v')\|<\delta$ for each such $s$, \item[--] if $\varphi_s(y)\in U$ for some $s$, then $\pi_{\varphi_s(y)}\circ P_{\theta(s)}(v')=P_s(v)$. \end{itemize} \end{enumerate}
\end{Definition}
\begin{Remarks-numbered}\label{r.identification}\rm a) These definitions are still satisfied if one reduces $r_0$ or $\beta_0$. Their value will be reduced in the following sections in order to satisfy additional properties.
One can also rescale the time and keep a compatible identification: the flow $t\mapsto \varphi_{t/C}$ for $C>1$ still satisfies the definitions above, maybe after reducing the constant $r_0$.
The main point to check is that the time $t$ in the Local injectivity can still be chosen in $[-1/4,1/4]$. Indeed, this is ensured by the ``No small period" assumption applied with $\varkappa=1/4C$: if $r_0$ is chosen smaller and if both $d(\varphi_t(y),x),d(y,x)$ are less than $r_0$ for $t\in [-1/4,1/4]$, then $|t|$ is smaller than $\varkappa$. Now the time $t$ in the Local injectivity property belongs to $[-\varkappa,\varkappa]$ for the initial flow, hence to $[-1/4,1/4]$ for the time-rescaled flow.
\noindent b) The ``No small period" assumption (which does not involve the projections $\pi_x$) is equivalent to the non-existence of periodic orbits of period $\leq 2$ which intersect $\overline U$. In particular, by reducing $r_0$, one can assume the following property:
\emph{For any $x\in U$ and any $t\in [1,2]$, we have $d(x,\varphi_t(x))\ge r_0$.}
\noindent c) For $x\in U$, the Local injectivity prevents the existence of $y\in U$ that is $r_0$-close to $x$, is different from $\varphi_t(x)$ for any $t\in [-1/4,1/4]$, and such that $\pi_x(y)=0_x$. In particular:
\emph{If $x,\varphi_t(x)\in U$ and $t\not\in (-1/2,1/2)$ satisfy $\pi_x(\varphi_t(x))=0_x$, then $x$ is periodic.}\hspace{-1cm}\mbox{}
\noindent d) The Global invariance says that when two orbits $(P_s(u))$ and $(P_s(u'))$ of the local fibered flow are close to the $0$ section of $\cN$ and have two points which are identified by $\pi$, then they are associated to orbits of the flow $\varphi$ that are close (up to a reparametrization $\theta$). In this case, any orbit of $(P_t)$ close to the zero-section above the first $\varphi$-orbit can be projected to an orbit of $(P_t)$ above the second $\varphi$-orbit.
\noindent e) The Global invariance can be applied to pairs of points $y,y'$ where the condition $d(y,y')<r$ has been replaced by a weaker one $d(y,y')<r_0$. In particular, this gives:
\emph{For any $\delta,\rho>0$, there exist $\beta>0$ such that: if $y,y'\in K$, $u\in \cN_y$, $u'\in \cN_{y'}$ and the intervals $I,I'\subset \RR$ containing $0$ satisfy \begin{itemize} \item[--] $d(y,y')<r_0$ and $y\in U$, \item[--] $\pi_y(u')=u$,
$\|P_s(u)\|<\beta\text{ and } \|P_{s'}(u')\|<\beta\text{ for any } s\in I \text{ and any } s'\in I',$ \end{itemize}
there is $\theta\in {\rm Lip}_{1+\rho}$ such that $|\theta(0)|\leq 1/4$ and $d(\varphi_s(y),\varphi_{\theta(s)}(y'))<\delta$ for any $s\in I\cap \theta^{-1}(I')$.}
Indeed provided that $\beta>0$ has been chosen small enough, one can apply the Local injectivity and the Local invariance in order to replace $y'$ and $u'$ by $y''=\varphi_t(y')$ and $u''=P_s(u')$ for some $t\in [-1/4,1/4]$ such that $d(y,y'')<r$. The assumptions for the Global invariance then are satisfied by $y,y''$ and $u,u''$. It gives a $\theta\in {\rm Lip}_{1+\rho}$ satisfying $d(\varphi_s(y),\varphi_{\theta(s)}(y'))<\delta$ for $s\in I\cap \theta^{-1}(I')$ but the condition $\theta(0)=0$ has been replaced by
$\theta(0)=t$; in particular $|\theta(0)|<1/4$. \end{Remarks-numbered}
\noindent {\bf Fundamental example.} One may think that $(\varphi_t)$ is the compactified flow $\widehat \varphi$ on an invariant compact set $K\subset \widehat M$ as in Section~\ref{s.compactification}, that $\cN$ is the compactified normal bundle over $K$, and that $(P_t)$ is the extended rescaled sectional Poincar\'e flow.
\subsubsection{No shear inside orbits} The next property states that one cannot find two reparametrizations of a same orbit, that shadow each other, coincide for some parameter and differ by at least $2$ for another parameter.
\begin{Proposition}\label{p.no-shear} If $r_0$ is small enough, for any $x\in U$, any increasing homeomorphism $\theta$ of $\RR$, any interval $I$ containing $0$ satisfying $\varphi_{\theta(0)}(x)\in U$ and $d(\varphi_t(x),\varphi_{\theta(t)}(x))\leq r_0,\;\forall t\in I$, then: \begin{itemize} \item[--] $\theta(0)> 1/2$ implies that $\theta(t)> t+2$, $\forall t\in I$ such that $\varphi_t(x), \varphi_{\theta(t)}(x)\in U$; \item[--] $\theta(0)\in [-2,2]$ implies that $\theta(t) \in [t-1/2,t+1/2]$, $\forall t\in I$ such that $\varphi_t(x),\varphi_{\theta(t)}(x)\in U$; \item[--] $\theta(0)< -1/2$ implies that $\theta(t)<t-2$, $\forall t\in I$ such that $\varphi_t(x),\varphi_{\theta(t)}(x)\in U$. \end{itemize} \end{Proposition} \begin{proof}
Let $\Delta$ be the set of points $x$ such that $\varphi_t(x)\not\in U$ for every $|t|\leq 1/2$. By the ``Transverse boundary" assumption (and up to reduce $r_0$), it is compact and its distance to $\overline U$ is larger than $2r_0$. Let us choose $\varepsilon\in (0,1/2)$ small enough so that
$\varphi_s(\overline U)$ is disjoint from the $r_0$-neighborhood of $\Delta$ when $|s|\leq \varepsilon$. Still reducing $r_0$, one can assume that: \begin{itemize} \item[(a)] any piece of orbit $\{\varphi_s(y),s\in [0,b]\}\subset K\setminus U$, with $y,\varphi_b(y)$ in the $r_0$-neighborhood of $\Delta$ and $b\leq 1/2$, is disjoint from the $r_0$-neighborhood of $\overline U$,
\item[(b)] if $\varphi_s(y)$ is $r_0$-close to $y\in\overline U$ for $|s|\leq 2$, then $|s|\leq \varepsilon$. \end{itemize} The first condition is satisfied by small $r_0>0$ since otherwise letting $r_0\to 0$ one would construct $y,\varphi_b(y)\in \Delta$ and $\varphi_{s}(y)\in \overline U$ where $0\leq s\leq b\leq 1/2$, contradicting the definition of $\Delta$. The second condition is a consequence of the ``No small period" assumption.
\begin{Claim} If $\theta(0)\geq -2$ then $\theta(t)\geq t-1/2$ for any $t\in I$ satisfying $\varphi_t(x)\in U$. \end{Claim} \begin{proof} The case $t=0$ is a consequence of the ``No small period" assumption. We deal with the case {that} $t$ is positive. The case {that} $t$ is negative can be deduced by applying the positive case to $\theta^{-1}$ and to the point $\varphi_{\theta(0)}(x)$.
Let $J$ be the interval of $t\in I$ satisfying for all $s\in J$ one has either $\varphi_s(x)\not\in \overline U$, or $\theta(s)\geq s-1/2$. Let $t_1\in [0,t_0]\cap J$ be the largest time satisfying $\varphi_{t_1}(x)\in \overline U$. It exists since if $(t_k)$ is an increasing sequence in $[0,t_0]\cap J$ satisfying $\varphi_{t_k}(x)\in \overline U$, then we have $\theta(t_k)\geq t_k-1/2$. So the limit $\overline t$ satisfies $\theta(\overline t)\geq \overline t-1/2$ (and belongs to $J$) and $\varphi_{\overline t}(x)\in \overline U$.
By property (b), $\theta(t_1)\geq t_1-\varepsilon$. For $s>t_1$ close to $t_1$, we thus have $s\in J$. Since $t_1$ is maximal we also get $\varphi_{s}(x)\not\in \overline U$. Since $\varphi_{t_0}(x)\in U$, there exists a minimal $t_2\in [ t_1,t_0]$ such that $\varphi_{t_2}(x)$ belongs to the boundary of $U$. In particular $\varphi_{\theta(t_2)}(x)$ is $r_0$-close to the boundary of $U$. Note that $[t_1,t_2)\subset J$. By maximality of $t_1$ one has $\theta(t_2)<t_2-1/2$.
The ``No small period" assumption implies $\theta(t_2)<t_2-2$. In particular, $$t_2>\theta(t_2)+2>\theta(t_1)+2\geq t_1+2-\varepsilon.$$
This shows that $\varphi_{s}(x)\in \Delta$, $\forall s\in [t_1+1/2,t_2-1/2]$. Since $\varphi_{\theta(t_1+1/2)}(x)$ is $r_0$-close to $\varphi_{t_1+1/2}(x)\in \Delta$, and $\varphi_{t_1}(x)\in\overline U$, one has $|\theta(t_1+1/2)-t_1|> \varepsilon$ (by our choice of $\varepsilon$). Since $\theta(t_1)\geq t_1-\varepsilon$, this gives $\theta(t_1+1/2)> t_1+\varepsilon$. Hence $(\theta(t_1+1/2),t_1+1/2]$ has length smaller than $1/2$.
If $\theta(t_2)\in (\theta(t_1+1/2),t_1+1/2]$, since $\varphi_{\theta(t_1+1/2)}(x)$ belongs to the $r_0$-neighborhood of $\Delta$ and since $\varphi_{t_1+1/2}(x)\in \Delta$, the property (a) implies that $\varphi_{\theta(t_2)}(x)$ is disjoint from the $r_0$-neighborhood of $\overline U$. Otherwise $\theta(t_2)\in [t_1+1/2,t_2-1/2]$ and then $\varphi_{\theta(t_2)}(x)\in \Delta$ is $r_0$ far from $\overline U$. This is a contradiction since we have proved before that $\varphi_{\theta(t_2)}(x)$ is $r_0$-close to the boundary of $U$. \end{proof}
Arguing in a same way we deduce that $\theta(0)\leq 2$ implies $\theta(t)\leq t+1/2$ for any $t\in I$ such that $\varphi_t(x)\in U$. One deduces the second item of the proposition. Note that if $\theta(t_0)\leq t_0+2$ for some $t_0$ with $\varphi_{t_0}(x)\in U$, then one deduces that $\theta(t)\leq t+1/2$ for all other $t$ and in particular $\theta(0)\leq 2$; this gives the first item. The third one is similar. \end{proof}
\subsubsection{Closing lemmas} The following closing lemma is an example of properties given by identifications. \begin{Lemma}\label{l.closing0} Let us assume that $\beta_0,r_0$ are small enough. Let us consider: \begin{itemize} \item[--] $x\in U$ having an iterate $y=\varphi_T(x)$ in $U\cap B(x,r_0)$ with $T\geq 4$, \item[--] a fixed point $p\in \cN_x$ for $\widetilde P_T:=\pi_x\circ P_T$
such that $\|P_t(p)\|<\beta_0$ for each $t\in [0,T]$, \item[--] a sequence $(y_k)$ in a compact set of $U\cap B(x,r_0/2)$ such that $\pi_x(y_k)$ converges to $p$. \end{itemize} Then there exists a sequence $(s_k)$ in $[-1,1]$ such that $\varphi_{s_k}(y_k)$ converges to a periodic point $y$ of $K$ having some period $T'$ such that $\pi_x(y)=p$ and $$DP_{T'}(0_y)= D\pi_{x}(0_y)^{-1}\circ D\widetilde P_T(p) \circ D\pi_x(0_y).$$ \end{Lemma} \begin{proof} Up to extract a subsequence, $(y_{k})$ converges to a point $y\in U\cap B(x,r_0/2)$ such that $\pi_x(y)=p$. By the Local injectivity, since the sequence $(\pi_y(y_k))$ converges to $0_y$, there exists $(s_k)$ in $[-1,1]$ such that $\varphi_{s_k}(y_k)$ converges to $y$.
By the Global invariance, there exists $(T_k)$ satisfying $\frac 1 2 T\leq T_k\leq 2T$ such that $\varphi_{T_k}(y_k)$ is in $B(x,r_0/2)$ and projects by $\pi_x$ on $\widetilde P_T(\pi_x(y_k))$.
In particular $(\pi_x\circ \varphi_{T_k}(y_k))$ converges to $p$ and $(\pi_y\circ \varphi_{T_k}(y_k))$ converges to $0_y$. One deduces (up to modify $T_k$ by adding a real number in $[-1,1]$) that $\varphi_{T_k}(y_k)$ converges to $y$. Since $T\geq 4$, the limit value $T'$ of $T_k$ is larger than $1$ and one deduces that $y$ is $T'$-periodic. Moreover, $\pi_x(y)=p$ so that by the Global invariance $D\widetilde P_T(p)$ and $DP_{T'}(0_y)$ are conjugated by $D\pi_{x}(0_y)$. \end{proof}
For the next statement, we consider an open set $V$ containing $K\setminus U$ so that points in $K\setminus V$ are separated from the boundary of $U$ by a distance much larger than $r_0$. \begin{Corollary}\label{c.closing0} Assume that $\beta_0,r_0$ are small enough. If $x\in K\setminus V$ has an iterate $y=\varphi_T(x)$ in $B(x,r_0)$ with $T\geq 4$ and if there exists a subset $B\subset \cN_x$ containing $0$ such that \begin{itemize} \item[--] $P_t(B)\subset B(0_{\varphi_t(x)},\beta_0)$ for any $0<t<T$, \item[--] $\widetilde P_T:=\pi\circ P_T$ sends $B$ into itself, \item[--] the sequence $\widetilde P_T^k(0)$ converges to a fixed point $p\in B$, \end{itemize} then the positive orbit of $x$ by $\varphi$ also converges to a periodic orbit. \end{Corollary} \begin{proof} From the Global invariance, there exists a sequence $T_k\to +\infty$ such that $y_k:=\varphi_{T_k}(x)$ projects by $\pi_x$ on $\widetilde P_T^k(0)$
and $|T_{k+1}-T_k|$ is uniformly bounded in $k$.
Since $(\widetilde P_T^k(0)=\pi_x(y_k))$ converges to $p$, we can apply the previous lemma so that $(\varphi_{s_k}(y_k))$ converges to a $T'$-periodic point $y\in K$
for some $s_k\in [-1,1]$. Since $|T_{k+1}-T_k|$ is uniformly bounded in $k$, this proves that the $\omega$-limit set of $x$ is the orbit of $y$. \end{proof}
\subsubsection{Generalized orbits} The identifications $\pi$ allow us to introduce generalized orbits. In the case where $K$ is a non-singular invariant set and $(P_t)$ is the sectional Poincar\'e flow on $\cN$, these orbits correspond to the orbits of the flow contained in the maximal invariant set in a neighborhood of $K$.
\begin{Definition}[Generalized orbit] A (piecewise continuous) path $\bar u=(u(t))_{t\in \RR}$ in $\cN$ is a \emph{generalized orbit} if there is a sequence $(t_n)_{n\in\ZZ}$ in $\RR$ such that if $y(t)$ denotes the projection of $u(t)$ to $K$ by the bundle map $\cN\to K$, then for each $n\in \ZZ$: \begin{itemize}
\item[--] $t_{n+1}-t_n\geq 1$,
\item[--] $\|u(t)\|< \beta_0$ and $u(t)=P_{t-t_n}(u_n)$ for $t\in [t_n,t_{n+1})$,
\item[--] $\varphi_{t_{n+1}-t_n}(y_n),y_{n+1}$ belong to $U$, are $r_0$-close and $\pi_{y_{n+1}}(P_{t_{n+1}-t_n}(u_n))=u_{n+1}$.
\end{itemize} \emph{The projection of $u(t)$ from $\cN$ to $K$ defines a pseudo-orbit $(y(t))$ of $\varphi$ in $K$.} \end{Definition}
\begin{Remarks-numbered}\label{rk.generalized-orbit} a) If $(u(t))$ is a generalized orbit, then $(u(t+s))_{t\in \RR}$ is also for any $s\in \RR$.
\noindent b) The generalized orbits satisfying $u(t)=0_{y(t)}$ for any $t$ can be identified to the orbits of $\varphi$ on $K$ which meet $U$ for arbitrarily large positive and negative times $t_n$. \end{Remarks-numbered}
\begin{Definition}[Topology on generalized orbits] Let us fix $\bar u$. For $T>0$ large and $\eta>0$ small, we say that a generalized orbit $\bar u'$ is $(T,\eta)$-close to $\bar u$ if $u(t)$ and $u'(t)$ are $\eta$-close for each $t\in [-T,T]$. \end{Definition}
For the next notion, we fix an open set $V$ containing $K\setminus U$. \begin{Definition}[Neighborhood of K] A generalized orbit \emph{belongs to the $\eta$-neighborhood of $K$} (or of the $0$-section of $\cN$) if the additional conditions hold: \begin{itemize} \item[--] $d(y(t_{n+1}),\varphi_{t_{n+1}-t_n}(y(t_n)))\leq \eta$, for any $n\in \ZZ$, \item[--] $d(y(t_n), K\setminus V)<\eta$, for each $n\in \ZZ$ such that $y(t_{n})\neq \varphi_{t_{n}-t_{n-1}}(y(t_{n-1}))$,
\item[--] $\|u(s)\|< \eta$ for any $s\in\RR$. \end{itemize} \end{Definition}
\begin{Definition}[Generalized flow]\label{d.generalized-flow} We associate, to any generalized orbit $\bar u=(u(t))$ and any $s,t\in \RR$, a diffeomorphism $\bar P_t$ from a neighborhood of $u(s)$ in $\cN_{y(s)}$ to a neighborhood of $u(s+t)$ in $\cN_{y(s+t)}$ which for any $t,t'$ satisfies $\bar P_{t'}\circ \bar P_{t}=\bar P_{t+t'}$. It is defined by: \begin{itemize} \item[--] by $\bar P_t=P_t$ when $t_n\leq s\leq t+s<t_{n+1}$, \item[--] by $\bar P_t= P_{t+s-t_{n+1}}\circ \pi_{y(t_{n+1})}\circ P_{t_{n+1}-s}$ when $t_n\leq s< t_{n+1}\leq t+s<t_{n+2}$, \item[--] and by applying inductively the flow relation in the other cases. \end{itemize} \end{Definition}
The generalized flow acts on generalized orbits: $\bar P_t(\bar u)$ coincides at time $s$ with $\bar u(s+t)$. When $\bar u$ can be identified to an orbit of $\varphi$ (as in Remark~\ref{rk.generalized-orbit}), $\bar P_t$ coincides with the flow $P_t$.
\paragraph{Half generalized orbits.} The previous definitions may be extended to any (piecewise continuous) path $(u(t))_{t\in I}$ parametrized by an interval $I$ of $\RR$. When $I=[0,+\infty)$ or $(-\infty,0]$ one gets the notion of half generalized orbits. The generalized semi-flow $(\bar P_{t})_{t\geq 0}$ (resp. $(\bar P_{t})_{t\leq 0}$) acts on half generalized orbits parametrized by $[0,+\infty)$ (resp. $(-\infty,0]$).
\subsubsection{Normally expanded irrational tori}\label{ss:tori}
We give a setting of a dominated splitting ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ such that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is not uniformly contracted. \begin{Definition}A \emph{normally expanded irrational torus} is an invariant compact subset $\cT\subset K$ such that \begin{itemize}
\item[--] the dynamics of $\varphi|_{\cT}$ is topologically equivalent to an irrational flow on ${\mathbb T}^2$,
\item[--] there exists a dominated splitting $\cN|_{\cT}={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ has one-dimensional fibers, \item[--] for some $x\in U\cap \cT$ and $r>0$, $\pi_x(\{z\in K, d(x,z)<r\})$ is a $C^1$-curve tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$. \end{itemize} \end{Definition}
The name is justified as follows. \begin{Lemma}\label{l.torus} For any normally expanded irrational torus $\cT$, the Lyapunov exponent along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ of the (unique) invariant measure of $\varphi$ on $\cT$ is equal to zero; in particular ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is uniformly expanded (i.e uniformly contracted by backward iterations). \end{Lemma} \begin{Remark-numbered}\label{r.torus} With the technics of Section~\ref{s.topological-hyperbolicity}, one can also prove that the $\alpha$-limit set of any point $z$ in a neighborhood coincides with $\cT$. \end{Remark-numbered} \begin{proof} Let us choose a global transversal $\Sigma\simeq {\mathbb T}^1$ containing $x$
for the restriction of $\varphi$ to $\cT$. The dynamics is conjugated to a suspension of an irrational rotation of $\Sigma$. We consider the sequence $(t_k)$ of positive returns of the orbit of $x$ inside a neighborhood of $x$ in $\Sigma$. Note that $|t_{k+1}-t_k|$ is uniformly bounded. For every $y\in \cT$ close to $x$
there exists a sequence $(t'_k)$ such that $|t_{k+1}-t_k|$ and $|t'_{k+1}-t'_k|$ are close and $\varphi_{t'_k}(y)$ is close to $\varphi_{t_k}(x)$ and belongs to $\Sigma$. In particular by choosing $y$ close enough to $x$, there exist $\varepsilon_1,\varepsilon_2>0$ arbitrarily small such that $$\varepsilon_1\leq d(\varphi_{t'_k}(y), \varphi_{t_k}(x))\leq \varepsilon_2.$$ Let $I\subset \Sigma$ be the interval bounded by $x,y$. The transversal $\Sigma$ is mapped homeomorphically inside an interval of the $C^1$-curve $\gamma=\pi_x(\{z\in K, d(x,z)<r\})$ and by the Global invariance $\pi_x\circ P_{t_k}$ sends $\pi_x(y)$ to $\pi_x(\varphi_{t'_k}(y))$ and similarly $\pi_x(I)\subset \gamma$ inside $\gamma$. Moreover, there exists $\varepsilon'_1,\varepsilon'_2>0$ arbitrarily small such that for each $k$,
$$\varepsilon'_1\leq |\pi_x\circ P_{t_k}\circ\pi_x(I)|\leq \varepsilon'_2.$$
Since $t_{k+1}-t_k$ is bounded, it implies that the Lyapunov exponent along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ vanishes. \end{proof}
\subsubsection{Contraction on periodic orbits and criterion for $2$-domination} When there exists a dominated splitting ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ where ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional, the uniform contraction of the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ above each periodic orbit of $K$, implies that it is $2$-dominated. \begin{Proposition}\label{p.2domination} Let us assume that \begin{itemize} \item[--] there exists a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and the fibers of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ are one-dimensional, \item[--] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on an open set $V$ containing $K\setminus U$. \end{itemize} Then either the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is $2$-dominated, or there exists a periodic orbit $\cO$ in $K$ whose Lyapunov exponents are all positive. \end{Proposition} \begin{proof} If there is no $2$-domination, there exists a sequence $(x_n)$ in $K$, such that
$$\|DP_n{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x_n)}\|^2\geq \frac 1 2 \|DP_n{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x_n)}\|.$$ One can extract a $\varphi$-invariant measure from the sequence $$\mu_n:=\frac 1 n \int_{t=0}^{n}\delta_{\varphi_{t}(x_n)}\; dt$$ and the maximal Lyapunov exponents $\lambda^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ satisfy $2\lambda^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\geq \lambda^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. In particular, $\lambda^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}> \lambda^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\geq \lambda^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}-\lambda^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>0$. Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional, one deduces that $\varphi$ admits an ergodic measure $\mu$ whose Lyapunov exponents are both positive. Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on $V$, the support of $\mu$ has to intersect $U$. For $\mu$-almost every point $x\in K$, there exists a neighborhood
$V_x$ of $0$ in $\cN_x$ such that $\|(DP_{-t})|_{V_x}\|$ decreases exponentially as $t\to +\infty$. In particular, one can take $x\in U$ recurrent and find a large time $T>0$ such that $\widetilde P_{-T}:=\pi_x\circ P_{-T}$ sends $V_x$ into itself as a contraction. {By} Corollary~\ref{c.closing0}, there is a periodic point $y$ in $K$ with some period $T'>0$ and a fixed point $p\in V_x$ for $\widetilde P_{-T}$ such that the tangent map $DP_{-T'}(0_y)$ is conjugate to the tangent map $D\widetilde P_{-T}(p)$ (by Lemma~\ref{l.closing0}). Hence the Lyapunov exponents of $y$ are all positive. \end{proof}
\subsection{Plaque families}\label{s.plaque}
We now introduce center-stable plaques $\cW^{cs}(x)$ that are candidates to be the local stable manifolds of the dynamics tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. A symmetric discussion gives center-unstable plaques $\cW^{cu}$ tangent to ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
\subsubsection{Standing assumptions}\label{ss.assumptions} In this Section~\ref{s.plaque}, we consider: \begin{itemize} \item[--] a bundle $\cN$ with $d$-dimensional fibers, a local fibered flow $(\cN,P)$ over a topological flow $(\varphi,K)$ and an identification $\pi$ on an open set $U$, compatible with $(P_t)$, \item[--] a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ of the bundle $\cN$, \item[--] an open set $V$ containing $K\setminus U$. \end{itemize} Reducing $r_0$ we assume that the distance between $K\setminus V$ and $K\setminus U$ is much larger than $r_0$.
We also fix an integer $\tau_0\geq 1$ satisfying Definition~\ref{d.dominated} of the domination. We choose $\lambda>1$ such that $\lambda^{4\tau_0}<2$.
In particular for any $x\in K$, for any unit $u\in {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ and $v\in {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$, we have: \begin{equation}\label{e.domination}
\forall t\geq\tau_0,~~~\|DP_t(0).u\|\leq \lambda^{-2t} \|DP_t(0).v\|. \end{equation}
In each space $\cN_x={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$, $x\in K$, we introduce the constant cone
$${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x):=\{u=u^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}+u^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\in \cN_x={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x), \|u^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\|> \|u^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\|\},$$ and ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$ in a symmetric way. They vary continuously with $x$. Moreover the dominated splitting implies that for any $t\geq \tau_0$ the cone fields are contracted: $$DP_t(0_x).\overline{{\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)}\subset {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_t(x)) \text{ and } DP_{-t}(0_x).\overline{{\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)}\subset {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{-t}(x)).$$
\subsubsection{Plaque family for fibered flows} \begin{Definition} A \emph{$C^k$-plaque family tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$} is a continous fibered embedding $\psi\in {\rm Emb}^k({\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},N)$, that is a family of $C^k$-diffeomorphisms onto their image $\psi_x\colon {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\to \cN_x$ such that $\psi_x(0_x)=0_x$, the image of $D\psi_x(0_x)$ coincides with ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ and such that $\psi_x$ depends continuously on $x\in K$ for the $C^k$-topology.
\emph{ For $\alpha>0$, we denote by $\cW^{cs}_{\alpha}(x)$ the ball centered at $0_x$ and of radius $\alpha$ inside $\psi_x({\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x))$ with respect to the restriction of the metric on $\cN_x$ and we denote by $\cW^{cs}(x)=\cW^{cs}_1(x)$.}
The plaque family $\psi$ is \emph{locally invariant} by the time-one map of the flow $(P_t)$ if there exist $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>0$ such that for any $x\in K$ we have $$P_1(\cW^{cs}_{\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(x))\subset \cW^{cs}(\varphi_1(x)).$$ \end{Definition}
Hirsch-Pugh-Shub's plaque family theorem~\cite{hirsch-pugh-shub} generalizes to local fibered flows. \begin{Theorem}\label{t.plaque} For any local fibered flow $(\cN,P)$ admitting a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ there exists a $C^1$-plaque family tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ which is locally invariant by $P_1$.
If the flow is $C^2$ and if ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is $2$-dominated, then the plaque family can be chosen $C^2$. \end{Theorem}
\subsubsection{Plaque family for generalized orbits}\label{ss.plaque-genralized}
The previous result extends to generalized orbits. \begin{Theorem}\label{t.generalized-plaques} For any local fibered flow $(\cN,P)$ admitting a compatible identification and a domination $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, there exists $\eta,\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>0$ with the following property.
For any $x\in K$ and any half generalized orbit $\bar u=(u(t))_{t\in [0,+\infty)}$ in the $\eta$-neighborhood of the zero-section with $u(0)\in \cN_x$, there exists a $C^1$-diffeomorphism onto its image $\psi_{\bar u}\colon {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\to \cN_x$ such that $\psi_{\bar u}(0_x)=u(0)$ and the image of $D\psi_{\bar u}(0_x)$ is contained in the cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$. Moreover $\psi_{\bar u}$ depends continuously for the $C^1$-topology on $\bar u$. {Denote by $\cW^{cs}(\bar u)=\psi_{\bar u}({\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x))$.}
The family of plaques is locally invariant by $\bar P_1$: $$\bar P_1(\cW^{cs}_{\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\bar u))\subset \cW^{cs}(\bar P_1(\bar u)),$$ where $\cW^{cs}_{\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\bar u)$ denotes as before the ball centered at $u(0)$ and of radius $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
When the identification and the flow are $C^2$ and when ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is $2$-dominated, then the plaques can be chosen $C^2$ and the family $\psi_{\bar u}$ depends continuously on $\bar u$ for the $C^2$-topology. \end{Theorem}
The tangent space to $\cW^{cs}(\bar u)$ at $\bar u$ in $\cN_x$ is denoted by ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\bar u)$. By construction it varies continuously with $\bar u$ and coincides with ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ when $\bar u$ is the half orbit $(0_{\varphi_{t}(x)})_{t\geq 0}$.
\begin{Remark} We have $D\bar P_t({\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\bar u))={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\bar P_t(\bar u))$ for any $t\in \RR$. \emph{Indeed, this holds for $t\in \NN$ by local invariance of the plaque family. The forward invariance of ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ implies that $\bar P_t({\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\bar u))$ is tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ for any $t\geq 0$. The dominated splitting implies that any vector $v$ tangent to $\cN_x$ at $\bar u$ whose forward iterates are all tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ belongs to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\bar u)$ (this can be also seen from the sequence of diffeomorphisms introduced in the next section). This characterization implies the invariance of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ for any time.} \end{Remark}
\subsubsection{Plaque family for sequences of diffeomorphisms} The proofs of Theorems~\ref{t.plaque} and~\ref{t.generalized-plaques} are very similar to~\cite[Theorem 5.5]{hirsch-pugh-shub}. It is a consequence of a more general result that we state now. We denote by $d=d^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}+d^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ the dimensions of the fibers of $\cN,{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and endow $\RR^d$ with the standard euclidean metric. For $\chi>0$ let us define the horizontal cone
$${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_\chi=\{(x,y)\in\RR^{d^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\times \RR^{d^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}, \chi\|x\|\geq \|y\|\}.$$ \begin{Definition}\label{d.sequence} A \emph{sequence of $C^k$-diffeomorphisms of $\RR^d$ bounded by constants $\beta,C>0$} is a sequence $\underbar F$ of diffeomorphisms $f_n\colon U_n\to V_n$, $n\in \NN$, where $U_n,V_n\subset \RR^d$ contain $B(0,\beta)$, such that $f_n(0)=0$ and such that the $C^k$-norms of $f_n, f_n^{-1}$ are bounded by $C$.
We denote by $\sigma(\underbar F)$ the shifted sequence $(f_{n+1})_{n\geq 0}$ associated to $\underbar F=(f_n)_{n\geq 0}$.
The sequence has a \emph{dominated splitting} if there exists $\tau_0\in \NN$ such that for any $n\in \NN$ and for any {$z\in B(0,\beta)\cap f_n^{-1}(B(0,\beta))\cap \dots \cap f_{n+\tau_0-1}^{-\tau_0}(B(0,\beta))$,} the cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_1$ is mapped by $D(f_{n+\tau_0-1}\circ \dots \circ f_n)^{-1}(z)$ inside the smaller cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_{1/2}$.
The center stable direction of the dominated splitting is \emph{$2$-dominated} if there exists $\tau_0\in \NN$ such that for any {$z\in B(0,\beta)\cap f_n^{-1}(B(0,\beta))\cap \dots \cap f_{n+\tau_0-1}^{-\tau_0}(B(0,\beta))$} and for any unit vectors $u,v$ satisfying $D(f_{n+\tau_0-1}\circ \dots \circ f_n)(z).u\in{\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_1$ and $v\in \RR^d\setminus {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_1$, then
$$ \|D(f_{n+\tau_0-1}\circ \dots \circ f_n)(z).u\|^2\leq \frac 1 2 \|D(f_{n+\tau_0-1}\circ \dots \circ f_n)(z).v\|. $$ \end{Definition}
\begin{Theorem}\label{t.generalizedplaque} For any $C,\beta,\tau_0$, there exists $\alpha\in (0,\beta)$, and for any sequence of $C^1$-diffeomor\-phisms $\underbar F=(f_n)$ of $\RR^d$ bounded by $\beta,C$ with a dominated splitting, associated to the constant $\tau_0$, there exists a $C^1$-map $\psi=\psi(\underbar F)\colon \RR\to \RR$ such that: \begin{itemize} \item[--] For any $z,z'$ in the graph $\{(x,\psi(\underbar F)(x)), x\in \RR^{d^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\}$, the difference $z'-z$ is contained in ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_{\frac 1 2}$.
\item[--] (Local invariance.) $f_0\left( \{(x,\psi(\underbar F)(x)), |x|<\alpha\}\right) \subset \{(x,\psi(\sigma(\underbar F))(x)), x\in \RR^{d^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\}$. \item[--] The function $\psi$ depends continuously on $\underbar F$ for the $C^1$-topology: for any $R,\varepsilon>0$, there exists $N\geq 1$ and $\delta>0$ such that if the two sequences $\underbar F$ and $\underbar F'$ satisfy
$$\|(f_n-f_n')|_{B(0,\beta)}\|_{C^1}\leq \delta \text{ for } 0\leq n \leq N$$
then $\|(\psi(\underbar F)-\psi(\underbar F'))|_{B(0,R)}\|_{C^1}$ is smaller than $\varepsilon$. \item[--] For sequences of $C^2$-diffeomorphisms $\underbar F$ such that the center stable direction of the dominated splitting is $2$-dominated (still for the constant $\tau_0$), the function $\psi(\underbar F)$ is $C^2$ and depends continuously on $\underbar F$ for the $C^2$-topology. \end{itemize} \end{Theorem} \noindent The proof of this theorem is standard. It is obtained by \begin{itemize} \item[--] introducing a sequence of diffeomorphisms $(\widehat f_n)$ defined on the whole plane $\RR^d$ which coincide with the diffeomorphisms $f_n$ on a uniform neighborhood of $0$ and with the linear diffeomorphism $Df_n(0)$ outside a uniform neighborhood of $0$, \item[--] applying a graph transform argument. \end{itemize}
Theorem~\ref{t.plaque} is a direct consequence of Theorem~\ref{t.generalizedplaque}: for each $x\in K$, we consider the sequence of local diffeomorphisms $P_1\colon \cN_{\varphi_n(x)}\to \cN_{\varphi_{n+1}(x)}$. There are bounded linear isomorphisms which identify $\cN_{\varphi_n(x)}$ with $\RR^d$ and send the spaces ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_n(x))$ and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_n(x))$ to $\RR^{d^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\times \{0\}$ and $\{0\}\times \RR^{d^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$. Since the isomorphisms are bounded we get a sequence of diffeomorphisms as in Definition~\ref{d.sequence}. Theorem~\ref{t.generalizedplaque} provides a plaque in $\cN_x$ and which depends continuously on $x$.
Theorem~\ref{t.generalized-plaques} is proved similarly: let $\bar u$ be a half generalized orbit and $(y(t))_{t\in [0,\infty)}$ be its projection to $K$ and $\bar P$ the generalized flow; we consider the local diffeomorphisms $\bar P_1\colon \cN_{y(n)}\to \cN_{y(n+1)}$ in a neighborhood of the points $u(n)$ and $u(n+1)$ respectively, for each $n\in \NN$.
\subsubsection{Uniqueness} There is no uniqueness in Theorem~\ref{t.generalizedplaque}, but once we have fixed the way of choosing $\widehat f_n$, the invariant graph becomes unique (and this is used to prove the continuity in Theorem~\ref{t.generalizedplaque}). Also the following classical lemma holds.
\begin{Proposition}\label{p.uniqueness} In the setting of Theorem~\ref{t.generalizedplaque}, up to reduce $\alpha$, the following property holds. If there exists $z'$ in the graph of $\psi(\underbar F)$ and $z\in \RR^d$ such that for any $n\geq 0$ \begin{itemize} \item[--] the iterates of $z$ and $z'$ by $f_n\circ\dots\circ f_0$ are defined and belong to $B(0,\alpha)$, \item[--] $(f_n\circ\dots\circ f_0(z))-(f_n\circ\dots\circ f_0(z'))\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_1$, \end{itemize} then $z$ is also contained in the graph of $\psi(\underbar F)$. \end{Proposition} \begin{proof} Let us assume by contradiction that $z=(x,y)$ is not contained in the graph of $\psi$ and let us denote $\widehat z=(x,\psi(x))$. The line containing the iterates of $z$ and $\widehat z$ by the sequence $f_{n-1}\circ\dots\circ f_0$, $n\geq 1$, remains tangent to the cone $\RR^d\setminus {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_1$ (by the dominated splitting). The line containing the iterates of $z$ and $z'$ and the line containing the iterates of $\widehat z$ and $z'$ are tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}_1$ by our assumption, and by the two items of Theorem~\ref{t.generalizedplaque}.
The domination implies that the distances between the $n^\text{th}$ iterates of $z,\widehat z$ gets exponentially larger than their distance to the $n^\text{th}$ iterate of $z'$. This contradicts the triangular inequality. \end{proof}
\begin{Remark-numbered}\label{r.plaque-invariance} The plaque family $\cW^{cs}$ given by Theorem~\ref{t.plaque} is a priori only invariant by the time-$1$ map $P_1$ of the flow but the previous proposition shows that if $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>0$ is small enough, for any $x\in K$ and $z\in \cW^{cs}(x)$ such that $P_{n}(z)\in \cW^{cs}_{\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\varphi_n(x))$ for each $n\in\NN$, then we have $P_{t}(z)\in \cW^{cs}(\varphi_t(x))$ for any $t>0$. Indeed, by invariance of the cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, the point $P_{t}(z)$ belongs to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_t(x))$ for any $t\geq 0$.
The same property holds for half generalized orbits parametrized by $[0,+\infty)$. \end{Remark-numbered}
\subsubsection{Coherence} The uniqueness allows us to deduce that when plaques intersect then they have to \emph{match}, i.e. to be contained in a larger sub-manifold.
\begin{Proposition}\label{p.coherence} Fix a plaque family $\cW^{cs}$ as given by Theorem~\ref{t.generalized-plaques}. Up to reduce the constants $\eta,\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>0$ the following property holds. Let us consider any half generalized orbits $\bar u,\bar u'$ (parametrized by $[0,+\infty)$), and any sets $X\subset \cW^{cs}_{\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\bar u)$, $X'\subset \cW^{cs}_{\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\bar u')$ such that: \begin{enumerate} \item $\bar u$, $\bar u'$ belong to the $\eta$-neighborhood of $K$ and satisfy $\bar u\in X$, $\bar u'\in X'$, \item the points $y,y'$ satisfying $u(0)\in \cN_y$, $u'(0)\in \cN_{y'}$ are $r_0$ close and $y$ belongs to the $2r_0$-neighborhood of $K\setminus V$, \item the projection $(y(t))_{t\in [0,+\infty)}$ of $\bar u$ to $K$ has arbitrarily large iterates in the $r_0$-neighborhood of $K\setminus V$, \item $\pi_{y}(X')\cap X\neq \emptyset$ and $\operatorname{Diam} (\bar P_t(X)), \operatorname{Diam} (\bar P_t(X'))\leq \alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ for all $t\geq 0$, \end{enumerate} then $\pi_{y}(X')$ is contained in $\cW^{cs}(\bar u)$. \end{Proposition} \begin{proof} Let us denote by $(y(t))$ and $(y'(t))$ the projections of $\bar u$ and $\bar u'$ to $K$. Proposition~\ref{p.uniqueness} gives $\alpha>0$. Provided $\eta,\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ are small enough, the proof consists in checking that some Global invariance extends to generalized orbits:
\begin{Claim} \label{c.coherence} There exist $T,T'>10$ such that \begin{itemize} \item[(a)] the points $y(T),y'(T')$ are $r_0$ close and $y(T)$ belongs to the $2r_0$-neighborhood of $K\setminus V$, \item[(b)] $\bar P_{T}(\pi_{y}(X'))=\pi_{y(T)}(\bar P_{T'}(X'))$, \item[(c)] $\operatorname{Diam} (\bar P_t(\pi_{y}(X')))\leq \alpha/2$ for any $t\in [0,T]$. \end{itemize} \end{Claim} \begin{proof}[Proof of the Claim] Let $C$ be a large constant which bounds: \begin{itemize} \item[--] the Lipschitz constant of the projections $\pi_z$, for $z$ in the $2r_0$-neighborhood of $K\setminus V$,
\item[--] the norms $\|DP_s\|$ for $|s|\leq 1$. \end{itemize} Since $\varphi$ is continuous, there exists $\tilde \eta>\eta$ such that for each $z,z'\in K$ satisfying $d(z,z')<\eta$ we have $\sup_{s\in[-1,1]}d(\varphi_s(z),\varphi_s(z'))<\tilde \eta$. Taking $\eta$ small allows to choose $\tilde \eta$ small as well.
We will apply a first time the Global invariance, with the constants $\rho=2$ and $\delta_1:=\min(r_0/4,\alpha/4)$: it gives us constants $\beta_1, r_1$. Then we will apply a second time the Global invariance (the version of Remark~\ref{r.identification}.(e)), with the constant $\rho=2$ and $\delta_2:=r_1/4$: it gives us constant $\beta_2$. Take $\eta$ and $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ small so that $(1+C)^2(\tilde \eta+\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})<\min(\beta_1,\beta_2)$ and $\eta<r_0/4$, $\tilde \eta<r_1/2$.
For proving the claim, it is enough to prove the existence of $T,T'>0$ satisfying the properties (a), (b), (c) above and such that one of the following properties occurs: \begin{itemize} \item[--] $T,T'>10$, \item[--] $[0,T]$ contains a discontinuity of $(y(t))$, \item[--] $[0,T']$ contains a discontinuity of $(y'(t))$. \end{itemize} Indeed by definition the discontinuities of generalized orbits are separated in time by at least $1$. It is thus enough to apply the argument below 20 times in order to get a pair of times $(T,T')$ such that $T,T'>10$.
We now explain how to obtain the pair $(T,T')$. The diameters of $\{0_{y}\}\cup X$ and of $\{0_{y'}\}\cup X'$
are smaller than $\eta+\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. Moreover $\pi_y(X')$ meets $X$. Hence $\|\pi_y(y')\|<(1+C)(\eta+\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$. With our choice of $\eta,\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, this gives $\|\pi_y(y')\|<C^{-1}\min(\beta_1,\beta_2)$
and then $\|P_s\circ \pi_y(y')\|<\beta_2$ for any $s\in [-1, 1]$. The Global invariance (Remark~\ref{r.identification}.(e)) gives $\theta_2\in \operatorname{Lip}_{1+\rho}$ such that
$|\theta_2(0)|\leq 1/4$, and $d(\varphi_{s}(y),\varphi_{\theta_2(s)}(y))<\delta_2$ for any $s\in [-1, 1]$.
\noindent \paragraph{\it First case: $\theta_2(0)\geq 0$.} We estimate $$d(y, y'(\theta_2(0))\leq d(y, \varphi_{\theta_2(0)}(y'))+d(\varphi_{\theta_2(0)}(y'), y'(\theta_2(0))).$$ We have $d(y, \varphi_{\theta_2(0)}(y'))= d(\varphi_0(y), \varphi_{\theta_2(0)}(y'))<\delta_2$. Note that either $\varphi_{\theta_2(0)}(y')=y'(\theta_2(0))$, or, the generalized orbit $\bar u'$ has one discontinuity at some time $s$ which belongs to $[0, \theta_2(0)]\subset [-1,1]$. Since $\bar u'$ is in the $\eta$-neighborhood of $K$, we have $d(\varphi_s(y'), y'(s))<\eta$. One thus gets that $d(\varphi_{\theta_2(0)}(y'), y'(\theta_2(0)))<\tilde \eta$ and $d(y, y'(\theta_2(0))<\delta_2+\tilde \eta<r_1$.
We can thus apply the Global invariance to the points $y\in U$ and $y'(\theta_2(0))$ and to points $v$ in $X$, $v'\in X'$ such that $\pi_y(v')=v$. Using the local invariance and a previous estimate we get:
$\|\pi_y(y'(\theta_2(0))\|=\|\pi_y(y')\|<\beta_1$. Consequently, there exists $\theta_1\in \operatorname{Lip}_{1+\rho}$ and an interval $[0,a)$ such that $d(\varphi_t(y),\varphi_{\theta_1(t)}(y'(\theta_2(0)))<\delta_1=r_0$ for any $t\in [0,a)$. Here $[0,a)$ is any interval such that
$\|P_t(v)\|<\beta_1$ and $\|P_{\theta_1(t)+\theta_2(0)}(v')\|<\beta_1$. Since $\bar u$, $\bar u'$ are in the $\eta$-neighborhood of $K$ and by the assumption (4) above, this is ensured if $[0,a)$ is the maximal interval of time $t$ such that $\varphi_t(y)=y(t)$ and $\varphi_{\theta_1(t)}(y'(\theta_2(0)))=y'(\theta_2(0)+\theta_1(t))$.
If $a<+\infty$, we set $T=a$ and $T'=\theta_2(0)+\theta_1(a)$. By definition of generalized orbits, either $y(T)$ or $y'(T')$ is in the $\eta$-neighborhood of $K\setminus V$ and they are at distance smaller than $\delta_1+2\eta<r_0$. In particular both $y(T)$ and $y'(T')$ belong to the $2r_0$-neighborhood of $K\setminus V$. Using again the condition $\|u(t)\|+\operatorname{Diam}(\bar P_t(X'))<\eta+\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}<\min (\beta_1,\beta_2)$, the Global invariance gives the condition (b) and $\|\bar P_t(w)\|\leq \delta_1=\alpha/4$ for each $t\in [0,T]$ and each $w\in \pi_y(X')$. The conditions on $T,T'$ are thus satisfied.
If $a=\infty$, we use the assumption (3) to find a large time $T$ such that $y(T)$ belongs to the $r_0$-neighborhood of $K\setminus V$ and we set $T'=\theta_2(0)+\theta_1(a)$. The conditions on $(T,T')$ are checked similarly (this case is simpler).
\noindent \paragraph{\it Second case: $\theta_2(0)< 0$.} We follow the argument of the first case. As above, $d(y',y(\theta_2^{-1}(0))<r_1$ and we apply the Global invariance to the points $y'\in U$ and $y(\theta_2^{-1}(0)$. This gives $\theta_1\in \operatorname{Lip}_{1+\rho}$. We choose $T'>0$ and set $T=\theta_2^{-1}(0)+\theta_1^{-1}(T)$ such that either $y(T)$ or $y'(T')$ is a discontinuity of the family $(y(t))$ or $(y'(t'))$, or $y(T)\in K\setminus V$. \end{proof}
Applying the Claim inductively, we find two increasing sequences of times $T_n,T'_n\to +\infty$. Indeed having defined $T_n,T'_n$, the generalized orbits $\bar P_{T_n}(\bar u)$ and $\bar P_{T'_n}(\bar u')$ satisfy the assumptions of Proposition~\ref{p.coherence} and the Claim associate a pair $T,T'$; we then set $T_{n+1}=T_n+T$ and $T'_{n+1}=T'_n+T'$.
In order to conclude Proposition~\ref{p.coherence}, one considers $z'\in \pi_y(X')\cap X$ and any $z\in \pi_y(X')$ and use Proposition~\ref{p.uniqueness}. Let us check that its assumptions are satisfied: \begin{itemize} \item[--] By our assumptions, and requiring $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}<\alpha$ we have
$\|\bar P_t(z')-u(t)\|<\alpha/2$ for any $t\geq 0$. \item[--] Since $\bar P_t(z'), \bar P_t(z)\in \bar P_t(\pi_{y}(X'))$, the first item of the lemma implies that we also have
$\|\bar P_t(z)-u(t)\|\leq \alpha$ for any $t\geq 0$. \item[--] Since the complement of the cone field ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is invariant by forward iterations, it only remains to check that $\bar P_{t}(z)-\bar P_{t}(z')\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y(t))$ for a sequence of arbitrarily large times $t$.
From the second item of the lemma, the projections of the points $\bar P_{T_n}(z)$, $\bar P_{T_n}(z')$ by $\pi_{y'(T'_n)}$ belong to $\bar P_{T'_n}(\cW^{cs}(\bar u'))$, and hence to $\cW^{cs}(\bar P_{T'_n}(\bar u'))$ by Remark~\ref{r.plaque-invariance}. So their difference belongs to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}_{1/2}(y'(T'_n))$ (by Theorem~\ref{t.generalizedplaque}). The continuity of the cone field and the fact $d(y(T_n),y'(T'_n))<\delta_0$ gives $\bar P_{T_n}(z)-\bar P_{T_n}(z')\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y(T_n))$. \end{itemize} Hence Proposition~\ref{p.uniqueness} applies and concludes the proof of Proposition~\ref{p.coherence}. \end{proof}
\subsubsection{Limit dynamics in periodic fibers}
We state a consequence of the existence of plaque families. It will be used for the center-unstable plaques $\cW^{cu}$. Note that any $u\in \cN$ such that $\|P_{-t}(u)\|$ is small for any $t\geq 0$ is a half generalized orbit, hence has a plaque $\cW^{cu}(u)$.
\begin{Proposition}\label{p.fixed-point} For any local fibered flow $(P_t)$ on a bundle $\cN$ admitting a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ where ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional, there exists $\delta>0$ with the following property.
For any periodic point $z\in K$ with period $T$ and any $u\in \cN_z$ satisfying $\|P_{-t}(u)\|\leq \delta$ for all $t>0$ and $0_z\not\in \cW^{cu}(u)$, there exists $p\in \cN_z$ such that $P_{2T}(p)=p$ and $P_{-t}(u)$ converges to the orbit of $p$ when $t$ goes to $+\infty$. \end{Proposition} \begin{proof}
Let $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ be the constants associated to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ as in Theorem~\ref{t.generalized-plaques}. The plaques $\cW^{cu}(u)$ and $\cW^{cs}(0_z)$ intersect at a (unique) point $y$.
Since $\|u\|$ is small, by the local invariance of the plaque families, $y$ is also the intersection between $P_{1}(\cW^{cs}_{\alpha_E}(\varphi_{-1}(z)))$ and $\cW^{cu}_{\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}(u)$. One deduces that $P_{-1}(y)$ is the (unique) intersection point between the plaques $\cW^{cu}(P_{-1}(u))$ and $\cW^{cs}(\varphi_{-1}(z))$. Repeating this argument inductively, one deduces that the backward orbit of $y$ by $P_{-1}$ remain in the plaques $\cW^{cu}(P_{-k}(u))$ and $\cW^{cs}(\varphi_{-k}(z))$. From Remark~\ref{r.plaque-invariance}, any backward iterate $P_{-t}(y)$ belongs to $\cW^{cu}(P_{-t}(u))$.
Since $y\neq 0_z$, the domination implies that $d(P_{-k}(u),P_{-k}(y))$ is exponentially smaller than $d(P_{-k}(y), 0_{\varphi_{-k}(z)})$ as $k\to +\infty$. So $d(P_{-k}(u),P_{-k}(y))$ goes to $0$. The same argument applied to $P_{-s}(u)$, $s\in [0,1]$ shows that the distance of $P_{-t}(u)$ to $\cW^{cs}(\varphi_{-t}(z))$ converges to $0$ as $t\to +\infty$. In particular, the limit set of the orbit of $u$ under $P_{-2T}$ is a closed subset $L$ of $\cW^{cs}(z)$. In the case $L$ is a single point $p$, the conclusion of the proposition follows.
We assume now by contradiction that $L$ is not a single point. There exists $q\neq 0_z$ invariant by $P_{2T}$ in $L$ such that $L$ intersects the open arc $\gamma$ in $\cW^{cs}(z)$ bounded by $0_z$ and $q$. Note that the forward iterates $P_k(\gamma)$ by $P_1$ remain small, hence in $\cW^{cs}(\varphi_k(z))$ by the local invariance of $\cW^{cs}$. From Remark~\ref{r.plaque-invariance}, any iterate $P_t(\gamma)$ is contained in $\cW^{cs}(\varphi_t(z))$, $t\in \RR$.
Up to replace $u$ by a backward iterate, one can assume that $y$ belongs to $\gamma$. This shows that $P_{-t}(y)$ is the intersection between $\cW^{cu}(P_{-t}(u))$ and $\cW^{cs}(P_{-t}(z))$ for any $t\geq 0$. The set $L$ is thus the limit set of the orbit of $y$ under $P_{-2T}$. This reduces to a one-dimensional dynamics for an orientation preserving diffeomorphism, and $L$ has to be a single point, a contradiction. \end{proof}
\subsubsection{Distortion control}
The following lemma restates the classical Denjoy-Schwartz argument in our setting.
\begin{Lemma}\label{Lem:schwartz} Let us assume that $(P_t)$ is $C^2$, that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional and that $\cW^{cs}$ is a $C^2$ locally invariant plaque family. Then, there is $\beta_S>0$ and for any $C_{Sum}>0$, there are $C_S,\eta_S>0$ with the following property.
For any $x\in K$, for any interval $I\subset \cW^{cs}(x)$ and any $n\in \NN$ satisfying $$ \forall m\in \{0,\dots,n\},\; P_m(I)\subset B(0,\beta_S) \text{ and }
\sum_{m=0}^{n} |P_{m}(I)|\leq C_{Sum},$$ then (1) for any $u,v\in I$ we have \begin{equation}\label{e.distortion}
C_S^{-1}\leq \frac {\|DP_n(u)|_{I}\|}{\|DP_n(v)|_{I}\|}\leq C_S; \end{equation}
in particular $\|DP_n(u)|_{I}\|\leq C_S \frac{|P_n(I)|}{|I|}$;
\noindent (2) any interval $\widehat I\subset \cW^{cs}(x)$ containing $I$
with $|\widehat I |\leq (1+\eta_S)|I|$ satisfies
$|P_{n}(\widehat I)|\leq 2|P_{n}(I)|$;
\noindent (3) any interval $\widehat I\subset \cW^{cs}(x)$ containing $I$
with $|P_{n}(\widehat I)|\leq (1+\eta_S)|P_{n}(I)|$ satisfies
$|\widehat I |\leq 2|I|$. \end{Lemma}
The proof is similar to~\cite[Chapter I.2]{dMvS}.
\subsection{Hyperbolic iterates}\label{ss:hyperbolicreturns} We continue with the setting of Section~\ref{s.plaque} and we fix two locally invariant plaques families $\cW^{cs}$ and $\cW^{cu}$ tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ respectively, and two constants $\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ controlling the geometry and the dynamics inside these plaques as in the previous sections. The plaques are defined at points of $K$ but also at half generalized orbits $\bar u$ contained in a $\eta$-neighborhood of $K$ and parametrized by $[0,+\infty)$ and $(-\infty, 0]$ respectively. The quantities $\eta,\alpha_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>0$ may be reduced in order to satisfy further properties below.
In case ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is $2$-dominated, $\cW^{cs}$ will be a $C^2$-plaque family. Remember that $\tau_0,\lambda$ are the constants associated to the domination, as introduced in Section~\ref{ss.assumptions}.
\subsubsection{Hyperbolic points} We introduce a first notion of hyperbolicity. \begin{Definition} Let us fix $C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>1$. A piece of orbit $(x,\varphi_{t}(x))$ in $K$ is \emph{$(C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$} if for any $s\in (0,t)$, we have
$$\|DP_{s}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)}\| \leq C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-s}.$$ A point $x$ is \emph{$(C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$} if $(x,\varphi_{t}(x))$ is $(C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ for any $t>0$. We have similar definitions for the bundle ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ (considering the flow $t\mapsto P_{-t}$). \end{Definition}
By the continuity of the fibered flow for the $C^1$-topology, the hyperbolicity extends to orbits close (the proof is easy and omitted). \begin{Lemma}\label{l.shadowingandhyperbolicity} Let us assume that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional. For any $\lambda'>1$, there exist $C',\delta,\rho>0$ such that for any $x,y\in K$, $t>0$ and $\theta\in \operatorname{Lip}_{1+\rho}$ satisfying
$\theta(0)=0$ and $$d(\varphi_s(x),\varphi_{\theta(s)}(y))<\delta \text{ for each } s\in [0,t],$$ then
$$\|DP_{\theta(t)}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)\|\leq C'{\lambda'}^{t}\|DP_t|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\|.$$ \end{Lemma}
Hyperbolicity implies summability for the iterations inside one-dimensional plaques.
\begin{Lemma}[Summability]\label{l.summability-hyperbolicity} Let us assume that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional and consider $\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>1$. Then, there exists $C'_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>1$ and $\delta_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>0$ with the following property.
For any piece of orbit $(x,\varphi_t(x))$ which is $(C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ for any interval $I\subset \cW^{cs}(x)$ containing $0$
whose length $|I|$ is smaller than $\delta_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, and for any interval $J\subset I$ one has
$$|P_t(J)|\leq C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-t/2}\;|J| \text{ and }
\sum_{0\leq m\leq [t]} |P_m(J)|\leq C'_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\; |J|.$$ \end{Lemma} \begin{proof} Let $\eta>0$ be small such that $1+\eta<\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{1/2}$. Then, there exists $\delta_0$ such that for any $y\in K$ and any interval $I_0\subset \cW^{cs}(y)$ containing $0$ whose length is smaller than $\delta_0$, one has
$$\forall s\in[0,1],~~~|P_{s}(I_0)|\leq (1+\eta)\|DP_{s}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)}\|\; |I_0| .$$ Let us choose $\delta_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ satisfying $\delta_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W} C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{1/2}<\delta_0$. One checks inductively that the length of $P_k(I)$ is smaller than $\delta_0$ for each $k\in [0,t]$. The conclusion of the lemma follows. \end{proof}
\subsubsection{Pliss points} We introduce a more combinatorial notion of hyperbolicity (only used for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$).
\begin{Definition}\label{d.pliss} For $T\geq 0$ and $\gamma>1$, we say that a piece of orbit $(\varphi_{-t}(x),x)$ is a \emph{$(T,\gamma)$-Pliss string} (for the bundle ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$) if there exists an integer $s\in[0,T]$ such that
$$\text{for any integer } m\in \bigg[0,\frac {t-s}{\tau_0}\bigg],\quad \prod_{n=0}^{m-1}\|DP_{-\tau_0}|{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{-(n\tau_0+s)}(x))}\|\leq \gamma^{-m\tau_0}.$$ A point $x$ is $(T,\gamma)$-Pliss (for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$) if $(\varphi_{-t}(x),x)$ is a $(T,\gamma)$-Pliss string for any $t>0$.
For simplicity, a piece of orbit $(\varphi_{-t}(x),x)$ is a \emph{$T$-Pliss string} if it is a $(T,\lambda)$-Pliss string and $x$ is \emph{$T$-Pliss} if it is $(T,\lambda)$-Pliss, where $\lambda$ is the constant for the domination. \end{Definition}
For any $T>0$, there exists $C>1$ such that any piece of orbit which is a $T$-Pliss string is also $(C,\lambda)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Pliss lemma gives a kind of converse result.
\begin{Lemma}\label{Lem:hyperbolic-Pliss} Assume $\operatorname{dim}{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}=1$. For any $\lambda_1>\lambda_2>1$ and $C>1$, there is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>0$ such that any piece of orbit $(\varphi_{-t}(x),x)$ which is $(C,\lambda_1)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is also a $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_2)$-Pliss string. \end{Lemma} \begin{proof}
Let us define $a_i=\log\|DP_{-\tau_0}|{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{-i\tau_0}(x))}\|$ for $0\le i\le \frac t {\tau_0}-1$. Then by applying a usual Pliss lemma, one can get the conclusion (see for instance~\cite[Lemma 2.3]{MCY}). \end{proof}
Pliss strings extend to orbits close.
\begin{Lemma}\label{l.continuity-Pliss} If ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is one-dimensional, there exist $\beta>0$ and $T\geq 1$ with the following property: if $(x,\varphi_{t}(x))$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string with $x\in U$ and $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq T$ and if $y$ satisfies: $$d(y,x)<r_0 \text{ and for any } 0\leq s \leq t \text{ one has }
\|P_s\pi_{x}(y)\|<\beta,$$ then there exists a homeomorphism $\theta\in{\rm Lip}_2$ such that \begin{itemize}
\item[--] $|\theta(0)|\leq 1/4$ and $d(\varphi_s(x),\varphi_{\theta(s)}(y))<r_0/2$ for each $s\in [-1,t+1]$, \item[--] $\pi_{\varphi_s(x)}\circ \varphi_{\theta(s)}(y)=P_s\circ \pi_x(y)$ when $s\in[0,t]$ and $\varphi_s(x)\in U$, \item[--] the piece of orbit $(y,\varphi_{\theta(t)+a}(y))$ is a $(2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda^{1/2})$-Pliss string for any $a\in [-1,1]$. \end{itemize} \end{Lemma} \begin{proof} The continuity of the flow for the $C^1$-topology implies that there are $\delta>0$ and $\rho\in(0,1/20)$ such that for any $z,z'\in K$ and any $\theta\in{\rm Lip}_{1+\rho}$, if $d(\varphi_s(z),\varphi_{\theta(s)}(z'))<\delta$ for any $0\le s\le \tau_0$, then
$$\|DP_{\theta(0)-\theta(\tau_0)}|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{\theta(\tau_0)}(z'))\|\le \lambda^{\tau_0/5}\|DP_{-\tau_0}|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(z')\|.$$ The Global invariance associates to $\delta,\rho>0$ some constants $r,\beta>0$.
Consider $z,z'\in K$, $n\ge 0$ and $\theta\in {\rm Lip}_{1+\rho}$ such that $(z,\varphi_{n\tau_0}(z))$ is $0$-Pliss and $$d(\varphi_s(z),\varphi_{\theta(s)}(z'))<\delta,~~~\forall 0\le s\le n\tau_0.$$ By the choice of $\delta,\rho$, we have for any $0\le k\le n-1$,
$$\prod_{j=k+1}^{n}\|DP_{\theta((j-1)\tau_0)-\theta(j\tau_0)}|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{\theta(j\tau_0)}(z'))\|< \lambda^{-4/5(n-k)\tau_0}<\lambda^{-3/4 [\theta(n\tau_0)-\theta(k\tau_0)]}.$$
Thus there is $C>0$ depending on $\lambda$ and $\sup_{s\in [0,2\tau_0]} \|DP_{-s}\|$ such that
$$\|DP_{-s}|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{\theta(n\tau_0)-d}(z'))\|\le C\lambda^{-3/4 s},~~~\forall s\in[0,\theta(n\tau_0)-d+\tau_0], \; d\in [0,1].$$
Now, by our assumptions and the Global invariance, there is $\theta\in{\rm Lip}_{1+\rho}$ such that $$d(\varphi_s(x),\varphi_{\theta(s)}(y))<\delta,~~~\forall s\in[-1, t+1].$$ Consider $T$ associated to $\lambda^{3/4},\lambda^{1/2},C$ by Lemma~\ref{Lem:hyperbolic-Pliss}. For any $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>2T$, the fact that $(x,\varphi_{t}(x))$ is $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda)$-Pliss implies that there is an integer $b\in[0,T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}]$ such that $(x,\varphi_{t-b}(x))$ is $0$-Pliss. One chooses $d\in [0,1]$ such that $\theta(t)+a-\theta(t-b)+d$ is a nonnegative integer.
One deduces that for any $s\in [0,\theta(t-b)-d]$,
$$\|DP_{-s}|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{\theta(t-b)-d}(z))\|\le C \lambda^{-3/4 s}.$$ By Lemma~\ref{Lem:hyperbolic-Pliss}, there is an integer $e\in [0,T]$ such that $(z,\varphi_{\theta(t-b)-d-e}(z))$ is $(0,\lambda^{1/2})$-Pliss. Now $\theta(t-b)-d-e$ differs from $\theta(t)+a$ by an integer smaller than $(1+\rho)T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X} +2 +T$, which is smaller than $2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Hence $(z,\varphi_{\theta(t)}(z))$ is $(2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda^{1/2})$-Pliss for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. \end{proof}
The next proposition will allow us to find iterates that are Pliss points and belong to $U$.
\begin{Proposition}\label{l.summability} Let us assume that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is one-dimensional and let $W$ be a set such that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on $\bigcup_{s\in [0,1]}\varphi_s(W)$. Then, there exist $C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>1$ such that any $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq 0$ large enough has the following property.
If there are $x\in K$ and integers $0\leq k\leq \ell$ such that \begin{itemize} \item[--] $(\varphi_{-\ell}(x),\varphi_{-k}(x))$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string, \item[--] for any $j\in \{1,\dots,k-2\}$ either $\varphi_{-j}(x)\in W$ or the piece of orbit $(\varphi_{-\ell}(x),\varphi_{-j}(x))$ is not a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string, \end{itemize} then $(\varphi_{-k}(x),x)$ is $(C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
Similarly if there are $x\in K$ and $k\geq 0$ such that \begin{itemize} \item[--] $\varphi_{-k}(x)$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss, \item[--] for any $j\in \{1,\dots,k-2\}$ either the point $\varphi_{-j}(x)$ is in $W$ or $\varphi_{-j}(x)$ is not $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss, \end{itemize} then $(\varphi_{-k}(x),x)$ is $(C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. \end{Proposition}
\begin{proof} The proof is essentially contained in \cite[Lemma 9.20]{CP}. Recall that $\lambda>1$ is the constant for the domination. There exist $C_0,\lambda_0>1$ such that for any piece of orbit $(y, \varphi_t(y))$ in $\bigcup_{s\in [0,1]}\varphi_s(W)$, one has
$\|DP_t{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)}\|\leq C_0\lambda_0^{-t}$. Let $C_1>1$ such that \begin{equation}\label{C1}
\forall y\in K,~\forall s\in [-\tau_0,\tau_0],~~~\|DP_{s}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)}\|\leq C_1\lambda^{-s}. \end{equation} One takes $C_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}=C_0^2C_1^3$ and $\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}>1$ smaller than $\min(\lambda,\lambda_0)$. One then chooses $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq 0$ large such that $C_0C_1\lambda^{-T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}<\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$.
Let $x\in K$, $0\leq k\leq \ell$ be as in the statement of the lemma. We introduce the set $$\cP=\bigg\{j\in\{1,\dots,k-2\},\; (\varphi_{-\ell}(x),\varphi_{-j}(x)) \text{ is a } T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\text{-Pliss string}\bigg\}.$$
The set $\cP$ decomposes into intervals $\{a_i,1+a_i,\dots,b_i\}\subset \{1,\dots,k-2\}$, with $i=1,\dots,i_0$, such that $b_i+1<a_{i+1}$. By convention we set $b_0=0$.
\begin{Claim-numbered} $b_i-a_i\geq T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ unless $\{a_i,\dots,b_i\}$ contains $1$ or $k-2$. \end{Claim-numbered} \begin{proof} By maximality of each interval, $(\varphi_{-\ell}(x),\varphi_{-b_i}(x))$ has to be a $0$-Pliss string. \end{proof}
\begin{Claim-numbered}\label{c.pliss} Consider $n_1,n_2\in \{0,\dots,k\}$ with $n_1<n_2$ and such that $(\varphi_{-\ell}(x),\varphi_{-n_2}(x))$ is a $0$-Pliss string and $(\varphi_{-\ell}(x),\varphi_{-j}(x))$ is not a $0$-Pliss string for $n_1<j<n_2$. Then for any $0\leq m < (n_2-n_1)/\tau_0$,
$$\|DP_{m\tau_0}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{-n_2}(x))}\|\leq \lambda^{-m\tau_0}.$$ \end{Claim-numbered} \begin{proof} One checks inductively that \begin{equation}\label{e.pliss}
\prod_{n=0}^{m-1}\|DP_{\tau_0}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{-n_2+n\tau_0}(x))}\|\leq \lambda^{m\tau_0}. \end{equation} Indeed if this inequality holds up to an integer $m-1$ and fails for $m$, the piece of orbit $(\varphi_{-n_2}(x),\varphi_{-n_2+m\tau_0}(x))$ is a $0$-Pliss string. It may be concatenate with $(\varphi_{-\ell}(x),\varphi_{-n_2}(x))$, implying that $(\varphi_{-\ell}(x),\varphi_{-n_2+m\tau_0}(x))$ is a $0$-Pliss string. This is a contradiction since $n_2-m\tau_0> n_1$.
The estimate of the claim follows from~\eqref{e.pliss} by domination. \end{proof}
The proposition will be a consequence of the following properties. \begin{Claim-numbered}\label{c.return0} \begin{enumerate}
\item The piece of orbit $(\varphi_{-b_i}(x), \varphi_{-b_{i-1}}(x))$ is $(C_0C_1,\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, for any $i\in \{1,\dots,i_0\}$, unless $i=i_0$ and $b_{i_0}=k-2$.
Moreover if $i\neq 1$, then $\|DP_{b_i-b_{i-1}}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{-b_i}(x))}\|\leq \lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-(b_{i}-b_{i-1})}$.
\item If $b_{i_0}=k-2$, then $(\varphi_{-k}(x), \varphi_{-b_{i_{0}-1}}(x))$ is $(C_0C_1,\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
\item If $b_{i_0}<k-2$, then $(\varphi_{-k}(x), \varphi_{-b_{i_0}}(x))$ is $(C_0C_1,\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. \end{enumerate} \end{Claim-numbered} \begin{proof} In order to check the first item, one introduces the smallest $j\in \{a_i,\dots,b_i\}$ such that $(\varphi_{-\ell}(x),\varphi_{-j}(x))$ is a $0$-Pliss string: it exists unless $i=i_0$ and $b_{i_0}=k-2$. By our assumptions, the piece of orbit $(\varphi_{-b_i}(x), \varphi_{-j}(x))$ is contained in $\bigcup_{s\in [0,1]}\varphi_s(W)$, hence is $(C_0,\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. Then Claim~\ref{c.pliss} gives
$\|DP_{m\tau_0}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{-j}(x))}\|\leq \lambda^{-m\tau_0}$ for any $m\in \{0,\dots, (j-b_{i-1})/\tau_0\}$. One concludes the first part of item 1 by combining these estimates with~\eqref{C1}.
Note that one also gets the estimate:
$$\|DP_{b_i-b_{i-1}}|{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\varphi_{-b_i}(x))\|\leq C_0C_1\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-(b_{i}-b_{i-1})}\bigg(\frac \lambda {\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\bigg)^{-(j-b_{i-1})}.$$ If $i\geq 2$, one gets $j-b_{i-1}\geq j-a_i= T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, hence by our choice of $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$:
$$\|DP_{b_i-b_{i-1}}|{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\varphi_{-b_i}(x))\|\leq C_0C_1\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-(b_{i}-b_{i-1})}\bigg(\frac \lambda {\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\bigg)^{-T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}} \leq \lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{-(b_{i}-b_{i-1})}.$$ This gives the second part of item 1.
The proofs of items 2 and 3 are similar to the proof of item 1: for item 2, one introduces the smallest $j\geq a_{i_0}$ such that $(\varphi_{-\ell}(x),\varphi_{-j}(x))$ is a $0$-Pliss string; for item 3, one introduces the smallest $j\geq k$ such that $(\varphi_{-\ell}(x),\varphi_{-j}(x))$ is a $0$-Pliss string and use the fact that $(\varphi_{-\ell}(x),\varphi_{-k}(x))$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string. \end{proof}
From Claim~\ref{c.return0}, one first checks that for each $i\in \{1,\dots,i_0\}$
$$\|DP_{k-b_i}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\varphi_{-k}(x))\|\leq C_0C_1\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{k-b_i}.$$ For each $m\in \{0,\dots,k\}$, either there exists $i\in \{1,\dots,i_0\}$ such that $b_{i-1}\leq m\leq b_i$ or $b_{i_0}\leq m \leq k$. Using items 1 or 3, one concludes
$$\|DP_{k-m}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\varphi_{-k}(x))\|\leq C_0^2C_1^2\lambda_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^{k-m}.$$
Combining with~\eqref{C1}, one gets the required bound on $\|DP_{k-t}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(\varphi_{-k}(x)\|$ for any $t\in [0,k]$.
This prooves the lemma for pieces of orbits $(\varphi_{-\ell}(x),x)$ such that $(\varphi_{-\ell}(x),\varphi_{-k}(x))$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string. The proof for half orbits $\{\varphi_{-t}(x),t>0\}$ such that $\varphi_{-k}(x)$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss is similar. The proposition is proved. \end{proof}
\subsubsection{Hyperbolic generalized orbits} The hyperbolicity extends to half generalized orbits. (Recall that if $\bar u$ is parametrized by $(-\infty,0]$, one can define the space ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\bar u)$, see Subsection~\ref{ss.plaque-genralized}.)
\begin{Definition} Let us fix $C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>1$, $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq 0$ and consider a half generalized orbit $\bar u$ parametrized by $(-\infty,0]$.
\noindent
$\bar u$ is \emph{$(C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$} if for any $t\geq 0$, we have $\|D\bar P_{t}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\bar u)}\| \leq C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}^{-t}$.
\noindent $\bar u$ is \emph{$(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-Pliss (for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$)} if there exists an integer $s\in [0,T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}]$ such that for any $m\in \NN$,
$$ \prod_{n=0}^{m-1}\|D\bar P_{-\tau_0}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\bar P_{-(n\tau_0+s)}(\bar u))}\| \leq \lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}^{-m\tau_0}.$$
\end{Definition}
One can define similarly hyperbolicity and Pliss property for pieces of generalized orbits. By continuity and invariance of the spaces ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\bar u)$, one gets
\begin{Lemma}\label{l.cont3} For any $\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\in (1,\lambda)$ and $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq 0$, there exists $\eta>0$ such that for any $y\in K$ which is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss and for any half generalized orbit $\bar u$ parametrized by $(-\infty,0]$, if \begin{description} \item \quad \quad $\bar u$ is in the $\eta$-neighborhood of $K$ and its projection on $K$ is $(\varphi_{-t}(y))_{t\geq 0}$, \end{description} then $\bar u$ is $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-Pliss. \end{Lemma}
\begin{Lemma}\label{l.cont4} For any $C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}, \lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>1$, there exists $\eta>0$ such that for any piece of orbit $(\varphi_{-t}(y),y)$ which is $(C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}/2,\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}^2)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and for any half generalized orbit $\bar u$ parametrized by $(-\infty,0]$, if \begin{description} \item \quad \quad $\bar u$ is in the $\eta$-neighborhood of $K$ and $u(-s)\in \cN_{\varphi_{-s}(y)}$ for each $s\in (0,t)$, \end{description} then $(\bar P_{-t}(\bar u),\bar u)$ is $(C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. \end{Lemma} {The proofs of Lemmas~\ref{l.cont3} and~\ref{l.cont4} are standard by continuity, hence are omitted.}
\subsubsection{Unstable manifolds}\label{ss.unstable} Pliss points have uniform unstable manifolds in the plaques (see e.g.~\cite[Section 8.2]{abc-measure}).
\begin{Proposition}\label{p.unstable} Consider $\eta>0$ and a center-unstable plaque family $\cW^{cu}$ as given by Theorem~\ref{t.generalized-plaques}. For any $C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>1$, $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>0$ and $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq 0$, there exists $\alpha>0$ such that for any half generalized orbit $\bar u$ parametrized by $(-\infty, 0]$, in the $\eta$-neighborhood of $K$, if $\bar u$ is $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-Pliss, or if ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is one-dimensional and $\bar u$ is $(C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, then: $$\forall t\geq 0,\quad \operatorname{Diam}(\bar P_{-t}(\cW^{cu}_{\alpha}(\bar u)))\leq \beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}^{-t/2}.$$ In particular, from Remark~\ref{r.plaque-invariance}, the image $\bar P_{-t}(\cW^{cu}_{\alpha}(\bar u))$ is contained in $\cW^{cu}(\bar P_{-t}(\bar u))$. \end{Proposition}
\subsubsection{Lipschitz holonomy and rectangle distortion}\label{ss.rectangle} It is well-known that for a one-codimensional invariant foliation whose leaves are uniformly contracted, the holonomies between transversals are Lipschitz. In order to state a similar property in our setting we define the notion of rectangle.
\begin{Definition}\label{d.rectangle} A \emph{rectangle} $R\subset \cN_x$ is a subset which is homeomorphic to $[0,1]\times B_{d-1}(0,1)$ by a homeomorphism $\psi$, where $B_{d-1}(0,1)$ is the $\operatorname{dim}(\cN_x)-1$-dimensional unit ball such that: \begin{itemize} \item[--] the set $\psi(\{0,1\}\times B_{d-1}(0,1))$ is a union of two $C^1$-discs tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, and is called the \emph{${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-boundary $\partial^{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}R$}, \item[--] the curve $\psi([0,1]\times \{0\})$ is $C^1$ and tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. \end{itemize} A rectangle $R$ has \emph{distortion bounded by $\Delta>1$} if for any two $C^1$-curves $\gamma,\gamma'\subset R$ tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ with endpoints in the two connected components of $\partial^{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}R$, then
$$\Delta^{-1} |\gamma|\leq |\gamma'| \leq \Delta |\gamma|.$$ \end{Definition}
\begin{Proposition}\label{p.distortion} Assume that the local fibered flow is $C^2$. For any $C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>1$, there exist $\Delta>0$ and $\beta>0$ with the following property. For any $y,\varphi_{-t}(y)\in K$ and $R\subset \cN_y$ such that: \begin{itemize} \item[--] $(\varphi_{-t}(y),y)$ is $(C_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, \item[--] $P_{-s}(R)$ is a rectangle and has diameter smaller than $\beta$ for each $s\in [0,t]$, \item[--] if $D_1,D_2$ are the two components of $\partial^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(P_{-t}(R))$, then $$d(D_1,D_2)>10.\max(\operatorname{Diam}(D_1),\operatorname{Diam}(D_2)),$$ \end{itemize} then the rectangle $R$ has distortion bounded by $\Delta$. \end{Proposition} \begin{proof} It is enough to prove the version of this result stated for a sequence of $C^2$-diffeomorphisms with a dominated splitting. Then the argument is the same as~\cite[Lemma 3.4.1]{PS1}. \end{proof}
\begin{Remark-numbered}\label{r.distortion} In the previous statement, it is enough to replace the second condition by the weaker one: \emph{$R$ is contained in $B(0_{y},\beta)$}.
Indeed, the proof considers a backward iterate $P_{-s}(R)$ such that $d(P_{-s}(D_1),P_{-s}(D_2))$ is comparable to $\max(\operatorname{Diam}(P_{-s}(D_1),\operatorname{Diam}(P_{-s}(D_2))$. The second condition ensures that the backward iterates of $R$ exist and remain small until such a time.
If we know that $R\subset B(0_{y},\beta)$ for $\beta$ small, this can be verified as follow: by the hyperbolicity for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, the diameter of the ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-boundary of $P_{-s}(R)$ decreases exponentially as $s$ increases; by the domination, the ratio between the diameter of the ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-boundary and the distance between the two ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-boundaries also increases exponentially. One deduces that the diameter of $P_{-s}(R)$ remains small until the first time $s$ such that the ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-boundary of $P_{-s}(R)$ becomes much smaller than $d(P_{-s}(D_1),P_{-s}(D_2))$ \end{Remark-numbered}
\section{Topological hyperbolicity}\label{s.topological-hyperbolicity}
\noindent {\bf Standing assumptions.} In the whole section, $(\cN,P)$ is a $C^2$ local fibered flow over a topological flow $(K,\varphi)$ and $\pi$ is an identification compatible with $(P_t)$ on an open set $U$ such that: \begin{enumerate} \item[(A1)] there exists a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and the fibers of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ are one-dimensional, \item[(A2)] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on an open set $V$ containing $K\setminus U$, \item[(A3)] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over any periodic orbit $\cO\subset K$. \end{enumerate} From the last item and Proposition~\ref{p.2domination}, the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is $2$-dominated. By Theorem~\ref{t.plaque}, one can fix a $C^2$-plaque family $\cW^{cs}$ tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and by Theorem~\ref{t.generalized-plaques}, there exists a $C^1$-plaque family $\cW^{cu}$ for half generalized orbits parametrized by $(-\infty,0]$ that are in a small neighborhood of $K$. Both are locally invariant by the time-one maps $P_1$ and $\bar P_{-1}$ respectively.
The goal of this section is to prove the following theorem (see Subsection~\ref{ss.conclusion-topological}):
\begin{Theorem}\label{Thm:topologicalcontracting} Under the assumptions above, one of the following properties occur: \begin{enumerate}
\item[--] There exists a non empty proper invariant compact subset $K'\subset K$ such that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}|_{K'}$ is not uniformly contracted. \item[--] $K$ is a normally expanded irrational torus. \item[--] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is \emph{topologically contracted}: there is $\varepsilon_0>0$ such that the image $P_t({\cal W}^{cs}_{\varepsilon_0}(x))$ is well-defined for any $t\ge 0$, $x\in K$, and
$$\lim_{t\to+\infty}\sup_{x\in K} |(P_t({\cal W}^{cs}_{\varepsilon_0}(x))|=0.$$ \end{enumerate} \end{Theorem}
\noindent {\bf Choice of constants.} Let us remark that by assumption (A2), the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is also uniformly contracted on a neighborhood of ${\rm Closure}(V)$. Hence, on the set $\bigcup_{s\in [0,\varepsilon]}\varphi_s(V)$, for some $\varepsilon>0$ small. By Remark~\ref{r.identification}.(a), one can rescale the time so that $\varepsilon=1$ and assume: \begin{enumerate} \item[(A2')] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on $\bigcup_{s\in [0,1]}\varphi_s(V)$, where $V$ is an open set containing $K\setminus U$. \end{enumerate}
As introduced in Subsection~\ref{ss.assumptions} we denote by $\tau_0\in \NN$ and $\lambda>1$ the constants associated to the $2$-domination ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Proposition~\ref{l.summability} associates to the set $W:=V$ the constants $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq 0$ defining Pliss points for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ defining the hyperbolicity for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. We also choose arbitrarily $\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}\in (1,\lambda)$. Sections~\ref{s.plaque} gives some constants $\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ controlling size of the unstable manifold at $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss points of $K$, and also at half generalized orbits parametrized by $(-\infty,0]$ in the $\eta$-neighborhood of $K$, provided $\eta$ is small enough. There exists $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}>0$ such that any $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss generalized orbit $\bar u$ is also $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Proposition~\ref{p.unstable} gives $\alpha>0$ such that for any generalized orbit in the $\eta$-neighborhood of $K$ which is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, the backward iterates $P_{-t}(\cW_\alpha^{cu})$ have diameter smaller than $\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X} \lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{-t/2}$.
We also consider small constants $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},r,\delta_0,\alpha'>0$ which will be chosen in this order during this section: they control distances inside the spaces $\cN_x$, $K$, or $\cW^{cs}_x$.
\subsection{Topological stability and $\delta$-intervals}
\subsubsection{Dynamics of $\delta$-intervals} We introduce a crucial notion for this section.
\begin{Definition}\label{Def:deltainterval} Consider $\delta\in (0,\delta_0]$. A curve $I\subset {\cal W}^{cs}_x$ (not reduced to a single point), for $x\in K$, is called a \emph{$\delta$-interval} if $0_x\in I$ and for any $t\ge 0$, one has
$$|P_{-t}(I)|\le\delta.$$ \end{Definition}
One example of $\delta$-interval is given by a periodic point $z\in K$ together with a non-trivial interval in $\cW^{cs}(z)$ that is periodic for $(P_t)$ and contains $0_z$.
\begin{Definition} A $\delta$-interval $I$ is \emph{periodic} if it coincides with $P_{-T}(I)$ for some $T>0$.
\noindent We say that a $\delta$-interval $I$ at $x$ is \emph{contained in the unstable set of some periodic $\delta$-interval} if: \begin{enumerate} \item the $\alpha$-limit set $\alpha(x)\subset K$ of $x$ is the orbit of a periodic point $y$, \item $y$ admits a periodic $\delta$-interval $\widehat I_y$, \item $P_{-t}(I)$ accumulates as $t\to +\infty$ on the orbit of a (maybe trivial) interval $I_y\subset \widehat I_y$. \end{enumerate} \end{Definition}
The next property will be proved in Section~\ref{ss.Lyapunov}. \begin{Lemma}\label{l.periodic} {There is $\delta_0>0$ such that for any $\delta\in(0,\delta_0]$,} for any periodic $\delta$-interval $I\subset \cN_q$, there exists $\chi>0$ with the following property.
Let $z$ be close to $q$, let $L\subset \cN_z$ be an arc which is close to $I$ in the Hausdorff topology and contains $0_z$ and let $T>0$ such that $|P_{-t}(L)|\leq \delta$ for any $t\in [0,T]$. Then $|P_{-T}(L)|>\chi$. \end{Lemma} Proposition~\ref{Prop:dynamicsofinterval} describes dynamics of $\delta$-intervals. It is an analogue to ~\cite[Theorem 3.2]{PS2}.
\begin{Proposition}\label{Prop:dynamicsofinterval} {There is $\delta_0>0$ such that if there is a $\delta$-interval $I\subset \cW^{cs}_x$ for $\delta\in(0,\delta_0]$} then \begin{itemize} \item[--] either $K$ contains a normally expanded irrational torus, \item[--] or $I$ is contained in the unstable set of some periodic $\delta$-interval. \end{itemize} \end{Proposition}
\begin{Remark} In the first case one can even show that $\alpha(x)$ is a normally expanded irrational torus. We will not use it. \end{Remark}
\noindent {\it Strategy of the proof of Proposition~\ref{Prop:dynamicsofinterval}.} The next five subsections are devoted to the proof: \begin{itemize} \item[--] One introduces a \emph{limit} $\delta$-interval $I_\infty$ from the backward orbit of $I$ (Section~\ref{ss.limit}). \item[--] $I_\infty$ has returns close to itself (Section~\ref{ss.return-I-infty}). Under some ``non-shifting" condition one gets a periodic $\delta$-interval (Section~\ref{ss.criterion-periodic}) and the last case of the proposition holds. \item[--] If the ``non-shifting" condition does not hold, there exists a normally expanded irrational torus which attracts $x$, $I$ and $I_\infty$ by backward iterations (Sections~\ref{ss.aperiodic} and~\ref{ss.topological}). \end{itemize} The conclusion of the proof is given in Section~\ref{ss.Lyapunov}.
\subsubsection{Topological stability}\label{sss.lyapunov-stable} Before proving Lemma~\ref{l.periodic} and Proposition~\ref{Prop:dynamicsofinterval}, we derive a consequence.
\begin{Proposition}\label{Pro:lyapunovstablity} If there is no normally expanded irrational torus, then ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is \emph{topologically stable}: there is $\varepsilon_0>0$ and for any $\varepsilon_1\in(0,\varepsilon_0)$, there is $\varepsilon_2>0$ such that $$\forall x\in K~\textrm{and}~\forall t>0,~~~P_t({\cal W}^{cs}_{\varepsilon_2}(x))\subset {\cal W}^{cs}_{\varepsilon_1}(\varphi_t(x)).$$ \end{Proposition} \begin{proof} By Remark~\ref{r.plaque-invariance} it is enough to check that
$|P_t({\cal W}^{cs}_{\varepsilon_2}(x))|$ is bounded by $\varepsilon_1$. {One can choose $\delta_0$ small so that Proposition~\ref{Prop:dynamicsofinterval} holds.} We argue by contradiction. If the topological stability does not hold, then there exist {$\delta\in(0,\delta_0]$} a sequence $(x_n)$ in $K$ and, for each $n$, an interval $I_n\subset {\cal W}^{cs}(x_n)$ containing $0$ and a time $T_n>0$ such that: \begin{itemize}
\item[--] $|I_n|\to 0$ as $n\to +\infty$.
\item[--] $|P_{T_n}(I_n)|=\delta$
and $|P_{t}(I_n)|<\delta$ for all $0<t<T_n$. \end{itemize} Taking a subsequence, one can assume that $(\varphi_{T_n}(x_n))$ converges to a point $x\in K$ and $(P_{T_n}(I_n))$
to an interval $I$. We have $|I|=\delta$ and
$|P_{-t}(I)|\le\delta$ for all $t>0$, so that $I$ is an $\delta$-interval.
The second case of the proposition is satisfied by $I$ and $L_n=P_{T_n}(I_n)$, $n$ large. Lemma~\ref{l.periodic} implies that $P_{-T_n}(L_n)$ has length uniformly bounded away from $0$. This contradicts the fact that the length of $I_n=P_{-T_n}(L_n)$ goes to $0$ when $n\to \infty$. \end{proof}
\subsection{Limit $\delta$-interval $I_\infty$}\label{ss.limit} One can obtain infinitely many $\delta$-intervals with length uniformly bounded away from zero at points of the backward orbit of a $\delta$-interval. The goal of this section is to prove this property.
\begin{Proposition}\label{p.limit} If $\delta>0$ is small enough, for any $x\in K$ and any $\delta$-interval $I$, there exists an increasing sequence $(n_k)$ in $\NN$ and $\delta$-intervals $\widehat I_k$ at $\varphi_{-n_k}(x)$ such that: \begin{itemize} \item[--] $P_{-n_k}(I)$ and $P_{n_\ell-n_k}(\widehat I_{\ell})$ are contained in $\widehat I_{k}$ for any $\ell\leq k$, \item[--] $\varphi_{-n_k}(x)$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss and belongs to $K\setminus V$, for any $k\geq 0$, \item[--] $(\widehat I_k)$ converges to some $\delta$-interval $I_\infty$ at some point $x_\infty\in K\setminus V$. \end{itemize} \end{Proposition}
\subsubsection{Existence of hyperbolic returns} \begin{Lemma}\label{Lem:hyperbolicreturns} If $\delta>0$ is small enough, any point $x\in K$ which admits a $\delta$-interval has infinitely many backward iterates $\varphi_{-n}(x)$, $n\in \NN$, in $K\setminus V$ that are $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss. \end{Lemma}
\begin{proof} Since the backward iterates $P_{-n}(I)$, $n\in \NN$, of a $\delta$-interval $I$ are still $\delta$-intervals, it is enough to show that any point $x$ has at least one backward iterate by $\varphi_{1}$ in $K\setminus V$ that is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss. The proof is done by contradiction.
{Let $\chi>0$ such that $1+\chi<\min(\lambda,\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}).$} If $\delta_1>0$ is small enough, then for any $\delta$-interval $I$ with $\delta<\delta_1$, at any point $x$, we have
$$\|DP_{-t}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)}\|\leq (1+\chi)^t\;\frac{|P_{-t}(I)|}{|I|}\leq\frac{(1+\chi)^t\;\delta}{|I|},~~~\forall t\ge0.$$ With the domination estimate~\eqref{e.domination} of Section~\ref{ss.assumptions}, one gets for any $k\geq 0$:
$$\prod_{j=0}^{k-1}\|DP_{-\tau_0}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(\varphi_{-j\tau_0}(x))}\| \leq \frac{(1+\chi)^{k\tau_0}\;\lambda^{-2k\tau_0}\;\delta}{|I|}.$$ Using $(1+\chi)<\lambda$ and Pliss lemma (see~\cite[Lemma 11.8]{Man87}), there exists an arbitrarily large integer $i$ such that for any $k\geq 0$ one has:
$$\prod_{j=0}^{k-1}\|DP_{-\tau_0}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}({\varphi_{-(j+i)\tau_0}(x)})}\| \leq \lambda^{-k\tau_0}.$$ This proves that $x$ has arbitrarily large $0$-Pliss backward iterates by $\varphi_{\tau_0}$.
Let us fix any of these $0$-Pliss backward iterates $\varphi_{-k\tau_0}(x)$. By contradiction, we assume that there is no iterate $\varphi_{-n}(x)$ in $K\setminus V$ that is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss with $1\leq n\leq k\tau_0$. We can thus apply Proposition~\ref{l.summability} with $W=V$; for any such $n$, one gets
$$\|DP_{n}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{-k\tau_0}(x))}\|\leq C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-n}.$$ As before, this implies that
$$|I|\leq |P_{-k\tau_0}(I)|C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-k\tau_0}(1+\chi)^{k\tau_0}\leq \delta\; C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\; (1+\chi)^{k\tau_0}\; \lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-k\tau_0}.$$
Since $(1+\chi)/\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}<1$ and $k$ is arbitrarily large, one gets $|I|=0$ which is a contradiction. \end{proof}
\subsubsection{Rectangles associated to $\delta$-intervals of Pliss points}
When $\delta>0$ is smaller than $\eta$, any point $u$ in a $\delta$ interval $I$ has a well defined backward orbit under $(P_t)$, which satisfies the definition of generalized orbit in the $\eta$-neighborhood of $K$. Consequently, it has a space ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(u)$ and a center-unstable plaque $\cW^{cu}(u)$. Moreover by Lemma~\ref{l.cont3}, if the point $x\in K$ such that $u\in \cN_x$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss, then $u$ is $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss.
We will need to build a rectangle $R(I)$ foliated by unstable plaques for each $\delta$-interval $I$ above a Pliss point $x\in K$. As before $d$ denotes the dimension of the fibers $\cN_x$.
\begin{Proposition}\label{p.rectangle} Fix $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\in (0,\beta_0/4)$. There exist $\alpha_{min},\delta_0,C_{R}>0$ such that for any $\delta\in (0,\delta_0]$, any $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point $x\in K\setminus V$ and any $\delta$-interval $I$ at $x$, we associate a rectangle $R(I)$ which is the image of $[0,1]\times B_{d-1}(0,1)$ by a homeomorphism $\psi$ such that: \begin{enumerate} \item $\psi\colon [0,1]\times \{0\}\to \cN_x$ is a $C^1$-parametrization of $I$, \item each $u\in R(I)$ belongs to a (unique) leaf $\psi(\{z\}\times B_{d-1}(0,1))$ denoted by $W^u_{R(I)}(u)$; it is contained in $\cW^{cu}(u)$ (and $\cW^{cu}(\psi(z,0))$), and it contains a disc with radius $\alpha_{min}$,
\item ${\rm Volume}(R(I))\geq C_{R}.|I|$, \item for any $t>0$, ${\rm Diam}(P_{-t}(W^u_{R(I)}(u)))<\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{-t/2}$; hence $P_{-t}(R(I))\subset B(0,2\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})\subset \cN_{\varphi_{-t}(x)}$. \end{enumerate} Moreover, if $x,x'$ are $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss points with $\delta$-intervals $I,I'$ and if $t,t'>0$ satisfy: \begin{itemize} \item[--] $\varphi_{-t}(x),\varphi_{-t'}(x')$ belong to $K\setminus V$ and are $r_0$-close, \item[--] $P_{-t}(R(I))$ and the projection of $P_{-t'}(R(I'))$ by $\pi_{\varphi_{-t}(x)}$ intersect, \end{itemize} then, the foliations of $P_{-t}(R(I))$ and $\pi_{\varphi_{-t}(x)}\circ P_{-t'}(R(I'))$ coincide: if $P_{-t}(W^u_{R(I)}(u))$ and $\pi_{\varphi_{-t}(x)}P_{-t'}(W^u_{R(I')}(u'))$ intersect, they are contained in a same $C^1$-disc tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. \end{Proposition} \begin{Remark-numbered}\label{r.intersection} Assume {that} $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $r>0$ {are} small enough. By the Global invariance, under the assumptions of the last part of Proposition~\ref{p.rectangle}, and assuming $d(\varphi_{-t}(x),\varphi_{-t'}(x'))<r$, there exists $\theta\in \operatorname{Lip}$ such that $\theta(0)=0$ and \begin{itemize} \item[--] the distance $d(\varphi_{-\theta(s)-t'}(x'),\varphi_{-s-t}(x))$ for $s>0$ remains bounded (and arbitrarily small if $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}>0$ and $r$ have been chosen small enough), \item[--] $P_{-s}\circ \pi_{\varphi_{-t}(x)}=\pi_{\varphi_{-s-t}(x)}\circ P_{-\theta(s)}$ on $P_{-t'}(R(I'))$ when $\varphi_{-s-t}(x)$ and $\varphi_{-\theta(s)-t'}(x')$ are $r_0$-close and belong to $U$. \end{itemize} \end{Remark-numbered}
\begin{proof}[Proof of Proposition~\ref{p.rectangle}] By Proposition~\ref{p.distortion}, one associates to $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$ some constants $\Delta,\beta$. One can reduce $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ to ensure $2\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}<\beta$.
By Proposition~\ref{p.unstable} one associates to $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, some quantities $\alpha',\eta'$. We will assume that $\delta$ is smaller than $\eta'$ so that for any $u$ in the $\delta$-interval $I$ and any $t\geq 0$ we have an exponential contraction of the unstable plaque of size $\alpha'$: $$\forall t\geq 0,\quad \operatorname{Diam}(\bar P_{-t}(\cW^{cu}_{\alpha'}(\bar u)))\leq \beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{-t/2}.$$
One parametrizes the curve $I$ by $[0,1]$. From the plaque-family Theorem~\ref{t.generalized-plaques}, there exists a continuous map $\psi\colon [0,1]\times \RR^{d-1}\to \cN_x$ such that $\psi([0,1]\times \{0\})=I$. Up to rescale $\RR^{d-1}$, the image $\psi(\{s\}\times B_{d-1}(0,1))$ is small, hence is contained in $\cW^{cu}_{\alpha'}(\psi(s,0))$ for each $s\in [0,1]$.
Note that two plaques $\psi(\{s\}\times B_{d-1}(0,1)),\psi(\{s'\}\times B_{d-1}(0,1))$, for $s\neq s'$, can not be contained in a same $C^1$-disc tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ since they intersect the transverse curve $I$ at two different points. By coherence, they are disjoint: indeed since the backward orbit of $x$ has arbitrarily large backward iterates in $K\setminus V$ (see Lemma~\ref{Lem:hyperbolicreturns}), Proposition~\ref{p.coherence} applies.
Hence $\psi$ is injective on $[0,1]\times B_{d-1}(0,1)$ and is a homeomorphism on its image $R(I)$ by the invariance of domain theorem. In particular $R(I)$ satisfies Definition~\ref{d.rectangle} and is a rectangle. The coherence again implies that for any $u\in \psi(\{s\}\times B_{d-1}(0,1))$, the plaque $\cW^{cu}(u)$ contains $\psi(\{s\}\times B_{d-1}(0,1))$. By compactness, there exists $\alpha_{min}$ (which does not depend on $x$ and $I$) such that $\psi(\{s\}\times B_{d-1}(0,1))$ contains $\cW^{cu}_{\alpha_{min}}(\psi(s,0))$ for any $s\in [0,1]$.
The rectangle $R(I)$ has distortion bounded by $\Delta$ (from Proposition~\ref{p.distortion}), hence one bounds ${\rm Volume}(R(I))$ from below by using Fubini's theorem and integrating along curves tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. This gives the four items of the lemma.
The last part is a direct consequence of the coherence (Proposition~\ref{p.coherence}). \end{proof}
\subsubsection{Maximal $\delta$-intervals} In the setting of Proposition~\ref{p.limit}, let us consider all the integers $n_0=0< n_1<n_2<\cdots<n_k<\cdots$ such that the backward iterate $x_k:=\varphi_{-n_k}(x)$ belongs to $K\setminus V$ and is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss. We introduce inductively some maximal $\delta$-intervals $\widehat{I}_{k}$ at these iterates such that $I\subset \widehat I_0$ and $P_{n_k-n_{k+1}}(\widehat I_k)\subset \widehat I_{k+1}$. We denote $R_k=R(\widehat{I}_{k})$.
Lemma~\ref{l.summability-hyperbolicity} associates to $C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ some constants $C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\delta_E$. Let us assume $\delta<\delta_E$. From the definition of the sequence $(n_k)$, Proposition~\ref{l.summability} implies that $(\varphi_{-n_{k+1}}(x),\varphi_{-n_{k}}(x))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and for each $k$. Lemma~\ref{l.summability-hyperbolicity} then gives \begin{equation}\label{e.bounded}
\sum_{m=n_k}^{n_{k+1}}|P_{n_k-m}(\widehat I_{k})|
\leq C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\; |\widehat I_{k+1}|. \end{equation}
\subsubsection{Non-disjointness}
\begin{Lemma}[Existence of intersections]\label{Sub:disjointcase} If $\delta_0>0$ is small enough, {then} for any $\delta\in (0,\delta_0]$, for any $r>0$ there exist $k<\ell$ arbitrarily large such that $d(x_k,x_\ell)<r$ and the interior of $\pi_{x_k}({R}_\ell)$ and the interior of $R_k$ intersect. \end{Lemma} \begin{proof} Since $(\pi_{x,y})$ is a continuous family of diffeomorphisms on the open set $U$ containing $K\setminus V$, the following property holds provided $r$ is small enough:
For any $x,y\in K\setminus V$ with $d(x,y)<r$, the projection $\pi_{y}\colon \cN_x\to\cN_y$ satisfies: $$\pi_{y}(B(0,\beta_0/2))\subset B(0,\beta_0),$$ $${\rm det}(D\pi_{x,y})(u)\leq 2 \text{ for any } u\in B(0,\beta_0/2).$$ By compactness, one can find a finite set $Z\subset U$ such that any $x\in K\setminus V$ satisfies $d(x,z)<r/2$ for some $z\in Z$. For each point $x_k$ we associate some $z_k\in Z$ such that $d(x_k,z_k)<r/2$. Since $R_k\subset B(0,2\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})\subset \cN_{x_k}$ with $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}<\beta_0/4$ and from Proposition~\ref{p.rectangle}, we have $$\pi_{z_{k}}(R_k)\subset B(0,\beta_0)\subset \cN_{z_k},$$
$${\rm Volume}(\operatorname{Interior}(\pi_{z_{k}}(R_k)))\geq \frac 1 2 {\rm Volume}(\operatorname{Interior}(R_k)) \geq \frac {C_R} {2} |\widehat I_k|.$$
Let us assume by contradiction that the statement of the lemma does not hold. One deduces that there exists $s\geq 0$ such that for any $z\in Z$ and any $k,\ell\geq s$ such that $z_k=z_\ell=z$ we have $$\pi_{z}(\operatorname{Interior}(R_k))\cap \pi_{z}(\operatorname{Interior}( R_\ell))=\emptyset.$$ In particular if $C_{Vol}$ denotes the supremum of $\text{Volume}(B(0_x,\beta_0))$ over $x\in K$,
$$\sum_{k=1}^\infty|\widehat{I}_k|\le 2C_R^{-1}\;C_{Vol}\text{Card}(Z).$$
With~\eqref{e.bounded} we get for any $k$: \begin{equation}\label{e.sum}
\sum_{m=0}^{+\infty}|P_{-m}(\widehat I_k)|\leq C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\sum_{\ell=k}^\infty|\widehat{I}_\ell|\leq C_{Sum}:=2C_R^{-1}\;C_{Vol}\text{Card}(Z)\; C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}. \end{equation}
By Denjoy-Schwartz Lemma~\ref{Lem:schwartz} one gets $\eta_S>0$ and for $k$ large one can introduce an interval $J\subset \cW^{cs}(x_k)$ containing $\widehat I_k$ and of length equal to $(1+\eta_S)|\widehat I_k|$. One gets
$$|P_{-m}(J)|\leq 2|P_{-m}(\widehat I_k)|,~~~\forall m\ge 0.$$
From~\eqref{e.sum}, $\sup_{m\geq 0} |P_{-m}(\widehat I_k)|$ is arbitrarily small for $k$ large, hence $|P_{-t}(J)|$ is smaller than $\delta$ for any $t>0$. This proves that $J$ is a $\delta$-interval, contradicting the maximality of $\widehat I_k$. \end{proof}
\subsubsection{Non-shrinking property} \begin{Lemma} If $\delta_0$ is small enough,
the length $|\widehat I_k|$ does not go to zero as $k\to \infty$. \end{Lemma} \begin{proof} We first introduce an integer $N\geq 1$ such that $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{N/2}$ is much smaller than $\alpha_{min}$.
We argue by contradiction. Assume that $|\widehat I_k|$ is arbitrarily small as $k$ is large. From~\eqref{e.bounded} we deduce that for any $\delta'\in (0,\delta)$, if $k$ is large enough, then $\widehat I_k$ is a $\delta'$-interval.
By Lemma~\ref{Sub:disjointcase}, there exist $k\neq \ell$ large such that $x_k,x_\ell$ are arbitrarily close and we have that $\pi_{x_k}(\operatorname{Interior}({R}_\ell))\cap \operatorname{Interior}({R}_k)\neq\emptyset$. By Remark~\ref{r.intersection}, the unstable foliations of $\pi_{x_k}({R}_\ell)$ and ${R}_k$ coincide on the intersection, hence one of the following cases occurs (see Figure~\ref{f.shrinking}). \begin{enumerate}
\item There exists an endpoint $u$ of $\widehat I_k$ such that $W^{u}_{R_k}(u)$ intersects $\pi_{x_k}(\widehat I_\ell)$ at a point which is not endpoint.
\item The endpoints of $\widehat I_k$ and $\pi_{x_k}(\widehat I_\ell)$ have the same unstable manifolds.
\end{enumerate}
\begin{figure}\label{f.shrinking}
\end{figure}
In the first case, by Remark~\ref{r.intersection}, there exists a homeomorphism $\theta$ of $[0,+\infty)$ such that $\varphi_{-t}(x_k)$ and $\varphi_{-\theta(t)}(x_\ell)$ are close for any $t>0$, and $P_{-t}(\pi_{x_k}(R( I_\ell)))$ remains in a neighborhood of $\varphi_{-t}(x_k)$ which is arbitrarily small if $\delta'$ and $d(x_k,x_\ell)$ are small enough. The rectangle $\pi_{x_k}(R(\widehat I_\ell))$ intersects $\cW^{cs}(x_k)$ along an interval $J$, which meets $\widehat I_k$. This proves that the union of $J$ with $\widehat I_k$ is a $\delta$-interval and contradicts the maximality since $J$ is not contained in $\widehat I_k$ in this first case.
In the second case, without loss of generality, we assume $n_\ell>n_k$ and set $T:=n_\ell-n_k$. We introduce the map $\widetilde P_{-T}:=\pi_{x_k}\circ P_{n_k-n_\ell}$. Since $r$ has been chosen small enough and since the endpoints of $\widehat I_k$ and $\pi_{x_k}(\widehat I_\ell)$ have the same unstable manifolds, the iterates $\widetilde P_{-T}^i(\widehat I_k)$, $i\geq 0$, are all contained in $R_k$. Hence by the Global invariance, there exists a sequence of times $0<t_1<t_2<\dots$ going to $+\infty$ such that \begin{itemize} \item[--] $\varphi_{-t_1}(x_k)=x_\ell$, \item[--] $(\varphi_{-t}(x_k))_{t_i\leq t\leq t_{i+1}}$ shadows $(\varphi_{-t}(x_k))_{0\leq t\leq t_{1}}$, \item[--] $\varphi_{-t_i}(x_k)$ is close to $x_k$ and projects by $\pi_{x_k}$ in $R_k$. \end{itemize} Note that the differences $t_{i+1}-t_i$ are uniformly bounded in $i$ by some constant $T_0$. Since $\delta'$ can be chosen arbitrarily small (provided $k$ is large), for $t=t_i$ arbitrarily large, the interval $J=\pi_{\varphi_{-t_i}(x_k)}(R_k)\cap \cW^{cs}(\varphi_{-t_i}(x_k))$ contains $0$ and is a $\delta/2$-interval. One can choose $t_i$ and a backward iterate $x_j$ such that $t_i\leq n_j\leq t_i+T_0$. Since $P_{t_i-n_j}(J)$ is a $\delta/2$-interval and $\widehat I_j$ is a $\delta'$-interval,
$\widehat I_j\cup P_{t_i-n_j}(J)$ is a $\delta$-interval. As $j$ can be chosen arbitrarily large, $|\widehat I_j|$
is arbitrarily small, whereas $|P_{t_i-n_j}(J)|$ is uniformly bounded away from zero (since $n_j-t_i$ is bounded). Consequently $\widehat I_j\cup P_{t_i-n_j}(J)$ is strictly larger than $\widehat I_j$, contradicting the maximality. \end{proof}
\subsubsection{Existence of limit intervals} The Proposition~\ref{p.limit} now follows easily from the previous lemmas, up to extract a subsequence from the sequence of hyperbolic times $(n_k)$. \qed
\subsection{Returns of $\delta$-intervals}\label{ss.return-delta}
\subsubsection{Definition of returns and of shifting returns}
We now introduce the times which will allow to induce the dynamics near a $\delta$-interval. \begin{Definition}\label{d.return} Let $x\in K\setminus V$ be a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point and $I$ be a $\delta$-interval at $x$. A time $t>0$ is a \emph{return} of $I$ if \begin{itemize} \item[--] $\varphi_{-t}(x)$ and $x$ are $r_0$-close, and $\operatorname{Interior}(R(I))\cap \operatorname{Interior}(\pi_x\circ P_{-t}(R(I)))\neq \emptyset$, \item[--] for any $z,z'\in I$ such that $\pi_x\circ P_{-t}(W^{u}_{R(I)}(z))\cap W^{u}_{R(I)}(z')\neq \emptyset$, we have $$\pi_x\circ P_{-t}(W^{u}_{R(I)}(z))\subset W^{u}_{R(I)}(z').$$ \end{itemize} We then denote by $\widetilde P_{-t}$ the map $\pi_x\circ P_{-t}\colon R(I)\to \cN_x$.
\noindent A sequence of returns $(t_n)$ is \emph{deep} if $t_n\to +\infty$ and if one can find one sequence $(x_n)$ in $K$ with $\pi_x(x_n)\in R(I)$ such that { $\widetilde P_{-t_n}\circ \pi_x(x_n)\to 0_x$ as $n\to +\infty$}.
\end{Definition}
\begin{Remark}\label{r.deep} When $(t_n)$ is a sequence of deep returns, $\widetilde P_{-t_n}\circ \pi_x(R(I))$ gets arbitrarily close to $\cW^{cs}(x)$ as $n\to +\infty$. \emph{ Indeed, since $t$ is large, $\widetilde P_{-t_n}\circ \pi_x(R(I))$ is thin and contained in a small neighborhood of $\pi_x(\cW^{cs}(\varphi_{-t'_n}(x_n)))$, where $\widetilde P_{-t_n}\circ \pi_x(x_n)=\pi_x(\varphi_{-t'_n}(x_n))$. Moreover as $\widetilde P_{-t_n}\circ \pi_x(x_n)\to 0_x$, the plaque $\pi_x(\cW^{cs}(\varphi_{-t'_n}(x_n)))$ gets close to $\cW^{cs}(x)$.} \end{Remark}
\begin{Lemma}\label{l.return-existence} If $\delta_0$ is small enough, a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point $x\in K\setminus V$ with a $\delta$-interval $I$ {for $\delta\in (0,\delta_0]$}, some $t>2\log(\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}/\delta_0)/\log(\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$ satisfying $d(x,\varphi_{-t}(x))<r_0$ and $z\in I$ satisfying $\pi_{x}\circ P_{-t}(z)\in \operatorname{Interior}(R(I))$ and $d(\pi_{x}\circ P_{-t}(z),I)<\delta_0$, {then the time $t$ is a return.} \end{Lemma} \begin{proof} The leaves $W^u_{R(I)}(z)$ are tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and have uniform size $\alpha_{min}$. The interval $I$ is tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and has size less than $\delta_0$, chosen much smaller than $\alpha_{min}$. Assuming that $t>0$ is large enough, the images $P_{-t}(W^u_{R(I)}(z))$ have diameter smaller than $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{t/2}<\delta_0$, so $P_{-t}(R(I))$ has diameter smaller than $2\delta_0$. If $d(\pi_{x}\circ P_{-t}(z),I)$ is smaller than $\delta_0$, the images $P_{-t}(W^u_{R(I)}(z))$ can not intersect the boundary of the leaves $W^u_{R(I)}(z')$. From the last part of Proposition~\ref{p.rectangle}, we get that $P_{-t}(W^u_{R(I)}(z))$ is disjoint or contained in $W^u_{R(I)}(z')$ for each $z,z'\in I$. \end{proof}
To each return, one associates a one-dimensional map $S_{-t}:I\to \cW^{cs}_x$ as follows. \begin{Proposition}\label{p.one-dimensional} Assume that $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},r,\delta_0$ are small enough and that $\delta$ is much smaller than $\delta_0$. Consider a return $t$ of a $\delta$-interval $I\subset \cN_x$ and $d(\varphi_{-t}(x),x)<r$.
Then there exists a $\delta_0$-interval $J\subset \cW^{cs}(x)$ containing $I$ and a continuous injective map $S_{-t}\colon I\to J$ such that for each $u\in I$, the point $\widetilde P_{-t}(u)$ belongs to $W^{u}_{R(J)}(S_{-t}(u))$. \end{Proposition} Notice that from Definition~\ref{d.return}, $S_{-t}(I)$ intersects $I$ along a non-trivial interval. \begin{proof} Assuming $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},r$ small enough, By Remark~\ref{r.intersection}, the backward orbits of $\varphi_{-t}(x)$ and $x$ stay at an arbitrarily small distance. The $\delta$-interval $P_{-t}(I)$ projects by $\pi_x$ on a set $X$ whose backward iterates by $P_{-s}$ are contained in $B(0_{\varphi_{-s}(x)},\delta_0/2)$ (using the Global invariance). Since $x$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point, any $u\in X$ is $(T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss (see Lemma~\ref{l.cont3}) and has an unstable plaque $\cW^{cu}_{\alpha}(u)$ whose backward iterates under $P_{-t}$ have diameter smaller than $\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{-t/2}$. One can thus project {the arc} $X$ along the plaques $\cW^{cu}_\alpha(u)$ of points $u\in X$ to a connected set $I'\subset \cW^{cs}(x)$ which intersects $I$. If $\delta_0$ is small enough, and $P_{-s}(I\cup I')$ has diameter smaller than $2\operatorname{Diam}(P_{-s}(R(I)\cup \widetilde P_{-t}(I)))<\delta_0$. Thus $J=I\cup I'$ is a $\delta_0$-interval.
By the coherence (Proposition~\ref{p.rectangle}, item 2), the plaques $\cW^{cu}_\alpha(u)$ of $u\in X$ intersect $R(J)$ along a leaf $W^u_{R(J)}(u)$. Note that each plaque intersect $I'\subset J$ by construction. Moreover since $d(x, \varphi_{-t}(x))$ is small, $X$ does not intersect the boundary of the leaves $W^u_{R(J)}(u)$. Thus $X\subset R(J)$.
Any point in $X=\widetilde P_{-t}(I)$ can thus be projected to $J$ along the leaves of $R(J)$. The map $S_{-t}$ is the composition of this projection with $\widetilde P_{-t}$. \end{proof}
Deepness has been introduced for the following statement. \begin{Lemma}\label{l.return} Let $(t_n)$ be a deep sequence of returns of a $\delta$-interval $I$ and let $J$ be a $\delta_0$-interval containing $I$ such that $S_{-t_n}(I)\subset J$ for each $n$. Then, there exists $n_0\geq 1$ with the following property. If $n(1),\dots,n(\ell)$ is a sequence of integers with $n(i)\geq n_0$ and if there exists $u$ in the interior of $J$ satisfying for each $1\leq i\leq \ell$ $$S_{-t_{n(i)}}\circ\dots\circ S_{-t_{n(0)}}(u)\in \operatorname{Interior}(J),$$ then there exists a return $t>0$ such that $S_{-t}=S_{-t_{n(\ell)}}\circ\dots\circ S_{-t_{n(0)}}$. \end{Lemma} The return $t$ will be called \emph{composition} of the returns $t_{n(0)},\dots,t_{n(\ell)}$. \begin{proof} Note first that the image $\widetilde P_{-t}(R(I))$ associated to a large return $t$ has diameter smaller than $2\delta$. So if this set contains a point $\widetilde P_{-t}(u)$ $\delta'$-close to $I$, then $\widetilde P_{-t}(R(I))$ is contained in the $2\delta+\delta'$-neighborhood of $I$.
The lemma is proved by induction. The composition $S_{-t_{n(\ell-1)}}\circ\dots\circ S_{-t_{n(0)}}$ is associated to a return $t'>0$. The point $\widetilde P_{-t'}(u)$ belongs to the image $\widetilde P_{-t_{n(\ell-1)}}(R(I))$, hence (since $t_{n(\ell-1)}$ is large by deepness and Remark~\ref{r.deep}), is $2\delta$-close to $I$. By the Local injectivity and Remark~\ref{r.intersection}, there exists an increasing homeomorphism
$\theta$ such that $|\theta(0)|\leq 1/4$ and $\varphi_{\theta(-s)-t'}(x)$ shadows $\varphi_{-s}(x)$. Hence for $t=t'+\theta(t_n(\ell))$ the point $\varphi_{-t}(x)$ is close to $x$ and $\widetilde P_{-t}=\widetilde P_{-t_{n(\ell)}}\circ \widetilde P_{-t'}$ by the Global invariance. The first item of Definition~\ref{d.return} is satisfied.
The second one is implied by Proposition~\ref{p.rectangle}: if $\widetilde P_{-t}(W^u_{R(I)}(z))$ intersects $W^u_{R(I)}(z')$, then these two discs match. The set $\widetilde P_{-t}(R(I))$ has diameter smaller than $2\delta$; it contains the point $\widetilde P_{-t_{n(\ell)}}\circ \widetilde P_{-t'}(u)$; this last point also belongs to $\widetilde P_{-t_{n(\ell)}}(R(I))$ which is included in the $2\delta$-neighborhood of $I$. Hence $\widetilde P_{-t}(R(I))$ is contained in the $4\delta$-neighborhood of $I$ and can not intersect the boundary of the disc $W^u_{R(I)}(z')$. This gives $\widetilde P_{-t}(W^u_{R(I)}(z))\subset W^u_{R(I)}(z')$.
This proves that $t$ is a return such that $\widetilde P_{-t}=\widetilde P_{-t_{n(\ell)}} \circ \widetilde P_{-t'}$: the one-dimensional map associated to $t$ coincides with the composition of the one-dimensional maps of the returns $t_{n(\ell)}$ and $t'$ as required. \end{proof}
\begin{Definition} A return $t$ is \emph{shifting} if the one-dimensional map $S_{-t}$ has no fixed point.
\emph{Let us fix an orientation on $\cW^{cs}(x)$. It is preserved by $S_{-t}$ when $t$ is shifting.}
\noindent A return \emph{shifts to the right} (resp. \emph{to the left}) if it is a shifting return and if there exists $u\in I$ that can be joined to $S_{-t}(u)$ by a positive arc (resp. negative arc) of $\cW^{cs}(x)$. \end{Definition}
\subsubsection{Criterion for the existence of periodic $\delta$-intervals}\label{ss.criterion-periodic}
The following proposition shows that under the setting of Proposition~\ref{p.limit}, if the interval $I$ has a large non-shifting return, then $I$ is contained in the unstable set of some periodic $\delta$-interval.
\begin{Proposition}\label{p.periodic-return} Let $I$ be a $\delta$-interval at a point $x\in K$ and let $J$ be a $3\delta$-interval at a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point $y\in K\setminus V$ having large non-shifting returns. If $\pi_y\circ P_{-s}(I)$ intersects $R(J)$ for some $s\geq 0$, then $I$ is contained in the unstable set of some periodic $\delta$-interval. \end{Proposition} \begin{proof} By assumption there exist $t>0$ large and $u\in J$ such that $W^{u}_{R(J)}(u)$ is mapped into itself by $\widetilde P_{-t}$. This implies that $\widetilde P_{-t}$ has a fixed point $v$ in $R(J)$.
Since $t$ is a large return, assuming the $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is small enough, there exists (by the local injectivity) $t_1\in [t-1/4,t+1/4]$ such that $d(\varphi_{-t_1}(y),y)<r/2$. This allows (up to modify $t$) to assume that $d(\varphi_{-t}(y),y)<r/2$.
\begin{Claim} There is a periodic point $q\in K$ that is $r_0$ close to $y$ such that $\pi_y(q)$ is a fixed point of $\widetilde P_{-t}$ in $R(J)$. \end{Claim}
\begin{proof}[Proof of the claim.] We build inductively a homeomorphism $\theta$ of $[0,+\infty)$ such that: \begin{itemize} \item[--] for each $k\geq 0$ and each $s\in [0,t]$, we have $d(\varphi_{-\theta(s)}(y),\varphi_{-\theta(kt+s)}(y))<r_0/2$, \item[--] $d(\varphi_{-t_k}(y),y)<r$ where $t_k:=\theta(kt)$ and $r$ is the constant in Remark~\ref{r.intersection}, \item[--] we have $\pi_{y}\circ P_{-t_k}=(\widetilde P_{-t})^k$. \end{itemize} Since $d(\varphi_{-t}(y),y)<r/2$, one defines $t_1=t$ and $\theta(s)=s$ for $s\in [0,t]$.
Let us then assume that $\theta$ has been built on $[0,kt]$. Since $P_{-s}(v)\in P_{-s}(R(J))$ is $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-close to $0_{\varphi_{-s}(y)}$ for any $s\geq 0$ and since $\pi_{\varphi_{-t_k}(y)}(v)=P_{-t_k}(v)$, Remark~\ref{r.intersection} applies and defines the homeomorphism $\theta$ on $[kt,(k+1)t]$ such that $d(\varphi_{-s}(y),\varphi_{-\theta(kt+s)}(y))<r_0/4$ for $s\in [0,t]$. By the local injectivity, one can choose $t_{k+1}$ with
$|t_{k+1}-\theta(kt)|\leq 1/4$, $d(y,\varphi_{t_{k+1}}(y))<r$ and one can modify $\theta$ near $(k+1)t$ so that $\theta((k+1)t)=t_{k+1}$.
Since $t_1=t$ is large, and since $d(\varphi_{-\theta(kt+s)}(y),\varphi_{-\theta((k+1)t+s)}(y))<r_0$, the No shear property (Proposition~\ref{p.no-shear}) implies inductively $t_k-t_{k-1}\ge 2$ for each $k\ge 1$. In particular $t_k\to+\infty$.
By the dominated splitting, the limit set $\Lambda$ of the curves $\pi_{y}\circ P_{-t_k}(J)=\widetilde P_{-t}^k(J)$ is a union of (uniformly Lipschitz) curves in $\cN_{y}$, containing $v$ and tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)$.
One can apply Proposition~\ref{p.uniqueness} to a periodic sequence of diffeomorphisms $f_0,\dots,f_{[t]}$ where $f_i$ coincides with $P_{-1}$ on $\cN_{\varphi_{-m}(y)}$ for $0\leq i <t-1$ and $f_{[t]}$ coincides with $\pi_{y}\circ P_{t-[t]+1}$ on $\cN_{\varphi_{-[t]+1}(y)}$. We have $\widetilde P_{-t}=f_{[t]}\circ \dots\circ f_0$ and any curve in the limit set $\Lambda$ remain bounded and tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)$ under iterations by $(\widetilde P_{-t})^{-1}$. Consequently, they are all contained in a same Lipschitz arc, invariant by $\widetilde P_{-t}$. One deduces that a subsequence of $(\widetilde P_{-t})^k(0_y)$ converges in $\cN_{y}$ to a fixed point $p$ of $R(J)$.
By Lemma~\ref{l.closing0}, there exists a periodic point $q\in K$ such that $\pi_{y}(q)=p\in R(J)$ is the fixed point of $\widetilde P_{-t}$. \end{proof}
Now we prove that $I$ is contained in the unstable set of some periodic $\delta$-interval. Without loss of generality, we take $s=0$. Since $\pi_y(q)$ and $\pi_y(x)$ belong to $R(J)$, by choosing $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ small enough and by the local invariance, one can assume that the distances $d(y,q)$ and $d(y,x)$ are smaller than any given constant. In particular, the projection by $\pi_y$ of center-unstable plaques $\cW^{cu}(u)$ of point $u\in \cN_q$ is tangent to the cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
\begin{Claim} There exists a periodic point $q_0\in K$ such that $\alpha(x)$ is the orbit of $q_0$. \end{Claim} \begin{proof}[Proof of the Claim.] By the assumptions, there is $u_0\in I$ such that $\pi_y(u_0)$ is contained in $R(J)$.
Since $\beta_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $\delta$ are small, by the Global invariance, there is $\theta_y\in\operatorname{Lip}$ such that $|\theta_y(0)|\leq 1/4$ and $d(\varphi_{-s}(x),\varphi_{-\theta_y(s)}(y))<r_0/2$ for any $s>0$.
In a similar way, there exists $\theta_q\in\operatorname{Lip}$ such that $|\theta_q(0)|\leq 1/4$ and $d(\varphi_{-t}(y),\varphi_{-\theta_q(t)}(q))$ remains small for any $t>0$. By defining $\theta=\theta_q\circ \theta_y$, one deduces that $d(\varphi_{-\theta(t)}(q),\varphi_{-t}(x))$ is small too for any $t>0$.
By {using the} Global invariance twice,
$\|P_{-t}(\pi_{y}(x))\|$ remains small in the fiber $\cN_{\varphi_{-t}(y)}$ and
$\|P_{-t}(\pi_{q}(x))\|$ remains small in the fiber $\cN_{\varphi_{-t}(q)}$. In particular $(P_{-t}(\pi_q(x)))_{t>0}$ is a half generalized orbit and has a center unstable plaque $\cW^{cu}(\pi_q(x))$.
Let us first assume that $0_q\in\cW^{cu}(\pi_q(x))$. Since $\|P_{-t}(\pi_{q}(x))\|$ remains small when $t\to +\infty$, the local invariance of the plaque families implies that $0_{\varphi_{-t}(q)}\in P_{-t}(\cW^{cu}(\pi_q(x)))$ for any $t>0$. Projecting to the fiber of $\varphi_{-\theta_q^{-1}(t)}(y)$, and using the Global invariance, one deduces that $P_{-t}(\pi_y(x))$ and $P_{-t}(\pi_y(q))$ is connected by a small arc tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ for any $t>0$. By Proposition~\ref{p.uniqueness}, this shows that $\pi_y(x)$ and $\pi_y(q)$ belong to a same leaf $W^u_{R(J)}(u)$ of $R(J)$. Consequently, $d(P_{-t}(\pi_y(x)),P_{-t}(\pi_y(q)))\to 0$ as $t\to +\infty$. The Global invariance shows that $d(P_{-t}(\pi_q(x)),0)\to 0$ as $t\to +\infty$. The Local injectivity then implies that $\varphi_{-t}(x)$ converges to the orbit of $q$.
If $\cW^{cu}(\pi_q(x))$ does not contains $0_q$, by Proposition~\ref{p.fixed-point}, there exists a point $p\in \cN_q$ which is fixed by $P_{-2T}$ (where $T$ is the period of $q$), such that $P_{-t}(\pi_{q}(x))$ converges to the orbit of $p$ as $t\to +\infty$. By the Global invariance, $P_{-\ell T}(\pi_{q}(x))$ coincides with $\pi_q(\varphi_{-\theta^{-1}(\ell T)}(x))$ for $\ell\in\NN$. Lemma~\ref{l.closing0} implies that there exists a periodic point $q_0\in K$ such that $\varphi_{-\theta^{-1}(\ell T)}(x)$ converges to $q_0$ as $\ell\to +\infty$. Since $\theta$ is a bi-Lipschitz homeomorphism, one deduces that the backward orbit of $x$ converges to the orbit of the periodic point $q_0$. \end{proof}
Up to replace $x$ and $I$ by large backward iterates, there is $\theta\in \operatorname{Lip}$ s.t. $d(\varphi_{-t}(x),\varphi_{-\theta(t)}(q_0))$ is small for any $t>0$ and by the Global invariance, the intervals $P_{-t}\circ \pi_{q_0}(I)$ are curves tangent to the cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{-t}(q_0))$ which remain small for any $t>0$. When $t=\ell T'$ where $T'$ is the period of $q_0$, they converge to a limit set which is a union of curves tangent to the cone ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(q_0)$ and contain $0_{q_0}$. By Proposition~\ref{p.uniqueness}, this limit set is an interval $ I_{q_0}\subset \cW^{cs}(q_0)$ that is fixed by $P_{T'}$. By the Global invariance, $I_{q_0}$ is the limit of $\pi_y(P_{-\theta^{-1}(\ell T')}(I))$ as $\ell\to +\infty$. This implies that the backward orbit of $I$ converges to the orbit of the $\delta$-interval $I_{q_0}$. \end{proof}
\subsubsection{Returns of limit intervals}\label{ss.return-I-infty} We now prove that the interval $I_\infty$ has always returns. Moreover, in the case $I_{\infty}$ is not contained in a $3\delta$-interval having arbitrarily large non-shifting return (so that Proposition~\ref{p.periodic-return} can not be applied to some $\widehat I_k$ and an interval $J$ containing $I_{\infty}$), we also prove that there exist shifting returns for $I_\infty$, both to the right and to the left.
\begin{Proposition}\label{p.shifting-return} Under the setting of Proposition~\ref{p.limit}, \begin{itemize} \item[--] either $I_\infty$ is contained in a $3\delta$-interval having arbitrarily large non-shifting return, \item[--] or $I_{\infty}$ has {two deep sequences of shifting returns { one shifting to the left and the other one to the right.}} \end{itemize} \end{Proposition} \begin{proof} We assume that the first case does not hold and one chooses an orientation, hence an order on $\cW^{cs}_{x_\infty}$. Denote by $a_\infty<b_\infty$ the endpoints of $I_\infty$. There exists a $\frac 3 2\delta$-interval $L$ in $\cW^{cs}_{x_\infty}$ such that $R(L)$ contains all the $\pi_{x_\infty}(\widehat I_k)$, $k$ large. One can thus project $\pi_{x_\infty}(\widehat I_k)$ to $\cW^{cs}_{x_\infty}$ by the unstable holonomy and denotes $a_k<b_k$ the endpoints of this projection. In the same way one denotes $c_k\in [a_k,b_k]$ the projection of $\pi_{x_\infty}(x_k)$. By Lemma~\ref{l.return-existence}, for any $k<\ell$ such that $k$ and $\ell-k$ are large, there exists a (large) return $t>0$ such that $\widetilde P_{-t}(\pi_{x_\infty}(x_k))=\pi_{x_\infty}(x_\ell)$. Since all large returns of $L$ are shifting, such iterates $x_k,x_\ell$ of $x$ do not project by $\pi_{x_\infty}$ to a same unstable manifold. Hence one can assume without loss of generality that for each $k<\ell$ we have $c_k>c_\ell$.
\begin{Claim} $I_\infty$ admits a deep sequence of returns $t>0$ shifting to the left. \end{Claim} \begin{proof} For each $k<\ell$, there exists a return $t>0$ of $L$ such that $\widetilde P_{-t}(\pi_{x_\infty}(x_k))=\pi_{x_\infty}(x_\ell)$. This return shifts to the left. We claim that it is a return for $I_\infty$. Denote $a'_k=S_{-t}(a_k)$. The return $t$ has been chosen so that $\pi_{x_\infty}\circ P_{-n_\ell+n_k}(\widehat I_k)= \widetilde P_{-t}\circ\pi_{x_\infty}(\widehat I_k)$. Since $\widehat I_\ell$ contains $P_{-n_\ell+n_k}(\widehat I_k)$, this gives $a_\ell\leq a'_k$ and in particular $a_\ell<a_k$. Repeating this argument, one gets a decreasing subsequence $(a_i)$ containing $a_k$ and $a_\ell$. It converges to $a_\infty$ so that $a_\infty< a'_k<a_k$. This implies that both $a_k$ and $a'_k$ belong to $I_\infty$ and that $t>0$ is also a return for $I_\infty$. Note that when $k$ is large and $\ell$ much larger, the time $t>0$ is large and the intervals $\pi_{x_\infty}(\widehat I_k), \pi_{x_\infty}(\widehat I_\ell)$ are close to $\cW^{cs}_{x_\infty}$. In particular the sequence of returns $t>0$ one obtains by this construction is deep. \end{proof}
Let us fix a return $t$ shifting left as given by the previous claim. We then choose $k\geq 1$ large and $\ell$ much larger and build a return which shifts to the right. As explained above we have $a_\infty<a_\ell<a_k$. Let us denote by $\bar a_\infty<\bar a_\ell<\bar a_k$ their images by $S_{-t}$. Since $S_{-t}$ shifts left, for $k$ large $\bar a_k$ which is close to $\bar a_\infty$ satisfies $\bar a_k<a_\infty$. See Figure~\ref{f.return}. Let us denote $t'>0$ a time such that the pieces of orbits $\varphi_{[-t',0]}(x_k)$ and $\varphi_{[-t,0]}(x_\infty)$ remain close (up to reparametrization), so that $\pi_{x_\infty}\circ \varphi_{-t'}(x_k)=\widetilde P_{-t}\circ \pi_{x_\infty}(x_k)$.
\begin{figure}\label{f.return}
\end{figure}
The rectangle associated to the $3\delta$-interval $L\cup S_{-t}(L)$ contains $\pi_{x_\infty}\circ \varphi_{-t'}(x_k)$ and $\pi_{x_\infty}(x_\ell)$. In particular there exists a return $s>0$ such that $\widetilde P_{-s}(\pi_{x_\infty}\circ \varphi_{-t'}(x_k))=\pi_{x_\infty}(x_\ell)$. Since $\ell-k$ is large, $s$ is large and is a shifting return by our assumptions. We denote $a_\infty'<a_k'$ the images of $\bar a_\infty<\bar a_k$ by $S_{-s}$. Note that there exists a time $s'>0$ such that $\pi_{x_\infty}\circ \varphi_{-s'}(\varphi_{-t'}(x_k))=\widetilde P_{-s}\circ\pi_{x_\infty}(P_{-t'}(x_k))=\pi_{x_\infty}(x_\ell)$. By the local injectivity, one can choose $s'$ such that $t'+s'=n_\ell-n_k$. In particular $S_{-s}\circ S_{-t}$ coincides with the one-dimensional map $S_{-\widetilde t}$ associated to the return $\widetilde t>0$ sending $\pi_{x_\infty}(x_k)$ to $\pi_{x_\infty}(x_\ell)$. Note that $t'$ is a return as considered in the proof of the previous claim; in particular we have proved that $a_\infty< S_{-\widetilde t}(a_k)<a_k$. Hence $a_\infty< a'_k<a_k$.
We have obtained $\bar a_k<a_\infty<a'_k$, so that $S_{-s}$ shifts to the right and $a_\infty<a'_\infty$. On the other hand $a'_\infty<a'_k<a_k<b_\infty$. So this gives $a'_\infty\in (a_\infty, b_\infty)$ which implies that $S_{-s}$ is a return of $I_\infty$ which shifts to the right as required. { Since $\pi_{x_\infty}\circ \varphi_{-t'}(x_k)\in R(I)$ and $\pi_{x_\infty}(x_\ell)\to 0_{x_\infty}$, the sequence of returns $s$ one may build by this construction is deep.} \end{proof}
\subsection{Aperiodic $\delta$-intervals}\label{ss.aperiodic} We introduce the $\delta$-intervals which will produce normally expanded irrational tori.
\begin{Definition}\label{d.aperiodic} A $\delta$-interval $J$ at $x\in K\setminus V$ is \emph{aperiodic} if there exist returns $t_1,t_2>0$ and intervals $J_1,J_2\subset J$ such that: \begin{enumerate} \item[--] $J_1,J_2$ have disjoint interior and $J=J_1\cup J_2$, \item[--] $\widetilde P_{-t_1}(J_1), \widetilde P_{-t_2}(J_2)$ have disjoint interior and $J=\widetilde P_{-t_1}(J_1)\cup \widetilde P_{-t_2}(J_2)$, \item[--] any non-empty compact set $\Lambda\subset J$ such that $\widetilde P_{-t_1}\left(\Lambda\cap J_1\right)\subset \Lambda \text{ and } \widetilde P_{-t_2}\left(\Lambda\cap J_2\right)\subset \Lambda$ coincides with $J$. \end{enumerate} \end{Definition}
We prove here that the second case of the Proposition~\ref{p.limit} gives aperiodic $\delta$-intervals. \begin{Proposition}\label{p.aperiodic0} Let $x\in K\setminus V$ be a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point and let $I$ be a $\delta$-interval whose large returns are all shifting and which admit a deep sequence of returns $(t_n^l)$ shifting to the left and another one $(t^r_n)$ shifting to the right.
Then $\alpha(x)$ contains a point $x'\in K\setminus V$ having an aperiodic $\delta$-interval. \end{Proposition}
We first select good returns for $I$. \begin{Lemma}\label{l.aperiodic0} Under the setting of Proposition~\ref{p.aperiodic0} there exists a $\delta$-interval $L\subset I$, returns $t_1,t_2>0$ and intervals $L_1,L_2\subset L$ such that \begin{itemize} \item[--] $L_1,L_2$ have disjoint interiors and $L=L_1\cup L_2$, \item[--] $S_{-t_1}(L_1), S_{-t_2}(L_2)$ have disjoint interior and $L=S_{-t_1}(L_1)\cup S_{-t_2}(L_2)$, \item[--] any non-empty compact set $\Lambda\subset L$ such that $S_{-t_1}\left(\Lambda\cap L_1\right)\subset \Lambda \text{ and } S_{-t_2}\left(\Lambda\cap L_2\right)\subset \Lambda$ coincides with $L$. \end{itemize} Moreover $t_1,t_2$ are composition of large returns inside the sequences $(t_n^l)$ and $(t^r_n)$. \end{Lemma} \begin{proof} Let us consider two large returns $s,t>0$ shifting to the right and to the left respectively. They can be chosen inside the sequences $(t_n^r)$ and $(t^l_n)$, hence by Lemma~\ref{l.return} the compositions of the maps $S_{-s}, S_{-t}$, when they are defined, have no fixed point in $I$. Let us denote $D_s=I\cap S_{-s}^{-1}(I)$ and $I_s=S_{-s}(D_s)$ and similarly let us denote $D_t, I_t$ the domain and the image of $S_{-t}$ in $I$.
\emph{Step 1.} One will reduce the interval $I$ so that the assumptions of the Proposition~\ref{p.aperiodic0} still hold but moreover either $D_s\cup D_t$ or $I_s\cup I_t$ coincides with $I$. See Figure~\ref{f.reduced-interval}.
\begin{figure}\label{f.reduced-interval}
\end{figure}
Note that both of these two sets contain the endpoints of $I$. If both are not connected one reduces $I$ in the following way. Without loss of generality $0_x$ does not belong to $D_t$. One can thus moves the right point of $I$ (and $D_t)$ inside $D_t$ to the left and reduce $I$ which still contains $0_x$. This implies that the right endpoints of $D_s,D_t,I_s,I_t,I$ move to the left whereas the left endpoints remain unchanged. At some moment one of the two intervals $D_s,D_t$ becomes trivial. We obtain this way new intervals $I',D'_s,D'_t,I'_s,I'_t$. Note that both can not be trivial simultaneously since otherwise $S_{-s}\circ S_{-t}$ preserves the right endpoint of the new interval: this one dimensional map is associated to a large return of $I$ which has a fixed point - a contradiction.
Let us assume for instance that $D'_t$ (and $I'_t$) has become a trivial interval (the case $D'_s$ is trivial is similar): the interval $I'$ is bounded by the left endpoint of $I$ and the left endpoint of $D_t$. Moreover the map $S_{-t}$ sends the right endpoint of $I'$ to its left endpoint. The map $S_{-t}\circ S_{-s}$ is associated to a return $t'$ of $I'$ which sends the right endpoint of $D'_s$ to the left endpoint of $I'$, hence which shifts left and whose domain coincides with $I'\setminus D'_s$. We keep the return $s$. We have shown that $D'_s\cup D'_{t'}$ coincides with $I'$.
\emph{Step 2.} One again reduces $I$ so that the assumptions of Proposition~\ref{p.aperiodic0} still hold, still $D_s\cup D_t$ or $I_s\cup I_t$ coincide with $I$ and furthermore $D_s,D_t$ (resp. $I_s,I_t$) have disjoint interior.
One follows the same argument as in step 1. For instance one moves the right endpoint of $I$ to the left. Different cases can occur: \begin{itemize} \item[--] the point $x$ becomes the right endpoint of $I$, we then exchange the orientation of $I$ and reduce $I$ again; this case will not appear anymore; \item[--] the new domains or the new images have disjoint interior; when this occurs either $D_s\cup D_t$ or $I_s\cup I_t$ coincide with $I$. \end{itemize} Note that the domains $D_s,D_t$ can not become trivial as in step 1. Indeed if for instance $D_t$ (and $I_t$) becomes trivial, since $I$ still coincides with $D_s\cup D_t$ or $I_s\cup I_t$, one gets that $I$ coincides with $D_s$ or $I_s$, proving that $R(I)$ contains a fixed unstable manifold - a contradiction.
\emph{Step 3.} We have now obtained a $\delta$-interval $L\subset I$ and two returns $t_1,t_2$, shifting to the left and the right respectively, whose domains $L_1,L_2$ and images $S_1(L_1)$, $S_2(L_2)$ by $S_1:=S_{-t_1}$ and $S_2:=S_{-t_2}$ have disjoint interior and which satisfy one of the two cases: \begin{description} \item[Case 1.] $L_1\cup L_2=L$, \item[Case 2.] $S_1(L_1)\cup S_2(L_2)=L$. \end{description} After identifying the two endpoints of $L$, one can define an increasing map $f$ on the circle which coincides with $S_1$ (resp. $S_2$) on the interior of $I_1$ (resp. $I_2$): in the first case it is injective and has one discontinuity (and $f^{-1}$ is continuous) whereas in the second case $f$ is continuous. By our assumptions on $s$ and $t$, $f$ has no periodic point. We will prove that $f$ is a homeomorphism conjugated to a minimal rotation. This will conclude the proof of the lemma.
We discuss the case 2 (the first case is very similar, arguing with $f^{-1}$ instead of $f$). Poincar\'e theory of orientation preserving circle homeomorphisms extends to continuous increasing maps. Since $f$ has no periodic point, there exists a unique minimal set $K$. Let us assume by contradiction that $K$ is not the whole circle. Let $J$ be a component of its complement. It is disjoint from its preimages $J_{-n}=f^{-n}(J)$ and (up to replace $J$ by one of its backward iterate), $f\colon J_{-n}\to J_{-n+1}$ is a homeomorphism for each $n\geq 0$. We now use the Denjoy-Schwartz argument to find a contradiction.
Let us fix $n\geq 0$. For each $0\leq k\leq n$, the interval $f^k(J_{-n})$ is contained in one of the domains
$L_1$ or $L_2$. Hence $f^k|_{J_{-n}}$ coincides with a composition of the maps $S_1$ and $S_2$ and is associated to a return $s_k>0$ of $L$. Note that $\widetilde P_{-s_k}(J_{-n})=J_{k-n}$ is a $C^1$-curve in $R(L)$ tangent to the cone field ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. By Proposition~\ref{p.distortion}, there exists $\Delta>0$ such that any sub-rectangle of $R(L)$, bounded by two leaves $W^u_{R(L)}(u), W^u_{R(L)}(u')$ has distortion bounded by $\Delta$. Hence
$\widetilde P_{-s_k}(J_{-n})$ has length bounded by $\Delta.|J_{k-n}|$. Consequently the sum $\sum_{k=0}^n |P_{-s_k}(J_{-n})|$ is uniformly bounded as $n$ increases. The difference $s_{k+1}-s_k$ is uniformly bounded also, so that there exists a uniform bound $C_{Sum}$ satisfying
$$\sum_{0\leq m <s_n} |P_{-m}(J_{-n})|<C_{Sum}.$$ From Lemma~\ref{Lem:schwartz}, there exists an interval $\widehat J_{-n}\subset \cW^{cs}(x)$ containing $J_{-n}$
satisfying $|\widehat J_{-n}|\leq 2|J_{-n}|$ such that each component of $P_{-s_n}(\widehat J_{-n}\setminus J_n)$ has length
$\eta_S|P_{-s_n}(J_{-n})|$, where $\eta_S>0$ is a small uniform constant. Note that the projection through unstable holonomy of $\widetilde P_{-s_n}(\widehat J_{-n})$ in $L$ contains a uniform neighborhood $\widehat J=S_{-s_n}(\widehat J_{-n})$ of $J$.
The large integer $n$ can be chosen so that the small interval $J_{-n}$ is arbitrarily close to $J$. Consequently, $\widehat J_{-n}$ is contained in $\widehat J$. This means $S_{-s_n}(\widehat J_{-n})\supset \widehat J_{-n}$ implying that $\widehat J_{-n}$ contains a fixed point of the map $S_{-s_n}$. This is a contradiction since we have assumed that all the returns are shifting. As a consequence $f$ is a minimal homeomorphism, which implies the lemma. \end{proof}
\begin{proof}[Proof of Proposition~\ref{p.aperiodic0}] Let us consider some intervals $L_1,L_2,L$ and some returns $t_1,t_2$ as in Lemma~\ref{l.aperiodic0}. We denote $\widetilde P_i=\widetilde P_{-t_i}$ and $S_i=S_{-t_i}$, $i=1,2$. Since $t_1,t_2$ are large inside deep sequences of returns, the compositions $S_{i_k}\circ\dots\circ S_{i_1}$, when they are defined, are associated to returns of $L$ (see Lemma~\ref{l.return}).
\begin{Claim} $\widetilde P_1$ and $\widetilde P_2$ commute. \end{Claim} \begin{proof} Both compositions $\widetilde P_1\circ \widetilde P_2$ and $\widetilde P_2\circ \widetilde P_1$ are associated to returns $s,s'>0$. Note that the common endpoint of $L_1$ and $L_2$ has the same image by $S_1\circ S_2$ and $S_2\circ S_1$. If the two compositions do not coincide, the two returns are different, for instance $s'>s$, but there exists two points $u,u'$ in $R(L)$ in a same unstable manifold such that $u=\pi_x\circ P_{-s'+s}\circ \pi_{\varphi_{-s}(x)}(u')$. Note that since $t_1,t_2$ can be obtained from deep sequences of returns, $u,u'$ are arbitrarily close to $L$. The time $s'-s$ can be assumed to be arbitrarily large: otherwise, we let $t_1,t_2$ go to $+\infty$ in the deep sequences of returns; if $s'-s$ remain bounded, the points $u,u'$ converge to a point of $L$ a point which is fixed by some limit of the $\pi_x\circ P_{-s'+s}$. This is a contradiction since the returns of $I$ are non-shifting. Since $s'-s$ is large, {Lemma~\ref{l.return-existence}} implies that there is a large return sending $u$ on $u'$, which is a contradiction since large returns are shifting. Consequently the two returns are the same and the compositions coincide. \end{proof}
By the properties on $S_1,S_2$, there is $(i_k)\in \{1,2\}^\NN$ such that for each $k$ $$\widetilde P_{i_k}\circ \dots\circ\widetilde P_{i_1}(x)\in R(L).$$ From Lemma~\ref{l.return}, for each $k$ there exists a return $s_k$ such that $$\widetilde P_{-s_k}=\widetilde P_{i_k}\circ \dots\circ\widetilde P_{i_1}.$$ The dynamics of $S_1,S_2$ is minimal in $L$, hence the iterates $x_k:=\varphi_{-s_k}(x)$ have a subsequence $x_{k(j)}$ converging to a point $x'\in K\setminus V$ such that $\pi_x(x')$ belongs to the unstable manifold of $x$. {We define the intervals $J,J_1,J_2$} as limits of the iterates of $L,L_1,L_2$ by the maps $P_{-s_{k(j)}}$. In particular $J$ is a $\delta$-interval and the first item of Definition~\ref{d.aperiodic} holds.
By Remark~\ref{r.intersection}, there exist times $t'_1,t'_2$ such that backward orbit of $x$ during time $[-t_i,0]$ is shadowed by the backward orbit of $x'$ during the time $[-t'_i,0]$. By the Global invariance, the maps $\pi_{x'}\circ P_{-t'_i}$, $i=1,2$, from a neighborhood of $0_{x'}$ to $\cN_{x'}$ are conjugated to $\widetilde P_i$ by the projection $\pi_x$ and will be still denoted by $\widetilde P_i$. In particular, the map {$P_{-s_{k(j)}}$} from a neighborhood of $0_x\in \cN_x$ to $\cN_{x'}$ coincides with $\widetilde P_{i_{k(j)}}\circ \dots\circ\widetilde P_{i_1}\circ \pi_{x'}$ and $\pi_{x'}\circ \widetilde P_{i_{k(j)}}\circ \dots\circ\widetilde P_{i_1}$.
\begin{Claim} $J$ is aperiodic. \end{Claim} \begin{proof} The projections by $\pi_{x'}$ of $L_1,L_2$ and $S_1(L_1), S_2(L_2)$ have images under the maps {$P_{-s_{k(j)}}$} which converge to subintervals of $J$: {by definition,} the first ones are $J_1,J_2$ whereas {by Global invariance} the last ones denoted by $J'_1,J'_2$ have disjoint interior and satisfy $J=J'_1\cup J'_2$. {Since $S_1(L_1)$ and $\widetilde P_1(L_1)$ have the same projection by unstable holonomy,} Note that $J'_1,J'_2$ are also the limits of $\widetilde P_1(L_1)$ and $\widetilde P_2(L_2)$ under the maps {$P_{-s_{k(j)}}$}. Since $\widetilde P_1$ and $\widetilde P_2$ commute this implies that $J'_1=\widetilde P_1(J_1)$ and $J'_2=\widetilde P_2(J_2)$. Consequently the second item of the Definition~\ref{d.aperiodic} holds.
Let us consider the projection $\varphi$ from $L$ to $J$ obtained as composition of $\pi_{x'}$ with the unstable holonomy. Note that it conjugates the orbit of $0_x$ in $L$ by $S_1,S_2$ with the orbit of $0_{x'}$ in $J$ by $\widetilde P_1,\widetilde P_2$. Passing to the limit $\varphi$ induces a conjugacy between the dynamics of $S_1,S_2$ on $L$ and $\widetilde P_1,\widetilde P_2$ on $J$. The third item is thus a consequence of Lemma~\ref{l.aperiodic0} \end{proof}
The proof of Proposition~\ref{p.aperiodic0} is now complete. \end{proof}
\subsection{Construction of a normally expanded irrational torus}\label{ss.topological} The whole section is devoted to the proof of the next proposition. \begin{Proposition}\label{p.aperiodic} If $x\in K\setminus V$ has an aperiodic $\delta$-interval $J$, then the orbit of $x$ is contained in a minimal set $\cT\subset K$ which is a normally expanded irrational torus, and $\pi_{x}(\cT\cap B(x,r_0/2))\supset J$. \end{Proposition}
\begin{proof} The definition of {aperiodic $\delta$-interval} is associated to two returns $t_1,t_2>0$. As before we denote $\widetilde P_1=\widetilde P_{-t_1}$ and $\widetilde P_2=\widetilde P_{-t_2}$.
Let us define ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ to be the compact set of points $z\in K$ such that $$d(z, x)\leq r_0/2 \text{ and } \pi_{x}(z)\in J.$$ The map $z\mapsto \pi_{x}(z)$ is continuous from ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ to $J$. We also define the invariant set $\cT$ of points whose orbit meets ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$.
For any $u\in J$, we define the sets $$\Lambda_u^+=\{v\in J,~\exists n\in\NN,~k(1),\dots,k(n)\in\{1,2\},~{\rm s.t.}~\widetilde P_{k(n)}\circ \widetilde P_{k(n-1)}\circ \widetilde P_{k(1)}u=v\},$$ $$\Lambda_u^-=\{v\in J,~\exists n\in\NN,~k(1),\dots,k(n)\in\{1,2\},~{\rm s.t.}~\widetilde P^{-1}_{k(n)}\circ \widetilde P^{-1}_{k(n-1)}\circ \widetilde P^{-1}_{k(1)}u=v\}.$$
By the definition of aperiodic interval and of $\Lambda_u^+$ and $\Lambda_u^-$, the set of accumulation points of $\Lambda_u^+$ and $\Lambda_u^-$ are $J$.
\begin{Claim-numbered}\label{c.surjective} The map $z\mapsto \pi_{x}(z)$ from ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ to $J$ is surjective. \end{Claim-numbered} \begin{proof} This follows from the fact that the closure of $\Lambda^+_{0_x}$ is $J$, and that $\widetilde P_{k(n)}\circ \widetilde P_{k(n-1)}\circ \widetilde P_{k(1)}.0_x$ belongs to the projection of the orbit of $x$ by $\pi_x$. \end{proof}
\begin{Claim-numbered}\label{c.return} For any non-trivial interval $J'\subset J$ there is $T>0$ such that any $z\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ has iterates $\varphi_{t^+}(z)$, $\varphi_{-t^-}(z)$ with $t^+,t^-\in (1, T)$ which belong to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ and project by $\pi_{x}$ in $J'$. \end{Claim-numbered} \begin{proof} For $z\in \cal C$, since $\Lambda_z^+$ is dense in $J$, there {are} $u_0=\pi_{x}(z)$, $u_1$,\dots, $u_{\ell(z)}$ in $J$ such that \begin{itemize}
\item[--] $u_{\ell(z)}$ belongs to the interior of $J'$,
\item[--] For $0\le i\le \ell(z)-1$, there is $k(i)\in\{1,2\}$ such that $\widetilde P_{k(i)}(u_i)=u_{i+1}$. \end{itemize}
Thus by compactness, there is a uniform $\ell$ such that $\ell(z)\le \ell$ for any $z\in{\cal C}$. Moreover by the Global invariance, there is $t(i)>0$ such that $\pi_x\varphi_{-t(i)}(z_i)$ is close to $u_i$ and $t(i+1)-t(i)$ is smaller than $2\max(t_1,t_2)$.
Since $\pi_{x}(z)\in J$ there exists a $2$-Lipschitz homeomorphism $\theta$ of $[0,+\infty)$ such that $\varphi_{-\theta(t)}(z)$ is close to $\varphi_{-t}(x)$ for each $t\geq 0$. This proves that $\varphi_{-\theta(t(\ell))}(z)$ projects by $\pi_{x}$ on $u_\ell$, hence belongs to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$. Since $\ell$ and the differences $t(i+1)-t(i)$ are bounded and since $\theta$ is $2$-Lipschitz, the time $t^-:=\theta(t(\ell))$ is bounded by a constant which only depends on $J'$, as required.
The time $t^+$ is obtained in a similar way since $\Lambda_z^-$ is dense in $J$ for any $z\in J$. \end{proof}
Set $\cT=\cup_{t\in\RR}\varphi_t({\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U})$. From the previous claim, there exists $T>0$ such that $\cT=\varphi_{[0,T]}({\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U})$. Since ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ is compact, $\cT$ is also compact. Note that (using Claim~\ref{c.return}) any point in $\cT$ has arbitrarily large forward iterates in ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$, whose projection by $\pi_x$ belongs to $J\subset \cN_x$. Since $x\in K\setminus V$, by choosing $\delta>0$ small enough, the local injectivity implies: \begin{Claim-numbered}\label{c.large-returns} Any point in $\cT$ has arbitrarily large forward iterates in the $r_0$-neighborhood of $K\setminus V$. \end{Claim-numbered}
\begin{Claim-numbered} There exists a curve $\overline \gamma\subset {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}\cap B(x,r_0/2)$ which projects homeomorphically by $z\mapsto \pi_{x}(z)$ on a non trivial compact interval of $\operatorname{Interior}(J)$ . \end{Claim-numbered} \begin{proof} We note that: \begin{enumerate} \item\label{i.delta} By the ``No small period" assumption, for $k\in\NN$, there exists $\varkappa_k>0$ such that for any $y,y'\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ and $t\in [0,1]$ satisfying $d(y,y')<\varkappa_k$ and $d(y,\varphi_t(y'))<\varkappa_k$, then for any $s\in [0,t]$ we have $d(y,\varphi_s(y'))<2^{-k-1}r_0$.
\item\label{i.epsilon} For each $\varkappa_k>0$ as in Item~\ref{i.delta}, there exists $\varepsilon_k>0$ with the following property. For any $y\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ satisfying $B(y,\varkappa_k)\subset B(x,r_0)$ and for any $u\in J$ such that $d(u,\pi_{x}(y))<\varepsilon_k$, then there is $y'\in B(y,\varkappa_k/2)\cap{\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ such that $\pi_x(y')=u$. Indeed, by the Local injectivity property, there is $\beta_k>0$ such that for any points $w\in B(y,r_0)$, if $\|\pi_y w\|<\beta_k$, then there is $t\in[-1/2,1/2]$ such that $d(\varphi_t(w),y)<\varkappa_k/2$. By the uniform continuity of the identification $\pi$, for $\varepsilon_k>0$ such that for any $v_1,v_2\in {\cal N}_x$, if $\|v_1-v_2\|<\varepsilon_k$, then $\|\pi_y(v_1)-\pi_y(v_2)\|<\beta_k$. Now for any $u\in J$ such that $d(u,\pi_{x}(y))<\varepsilon_k$, by Claim~\ref{c.surjective} there is $y_0\in B(x,r_0/2)\cap{\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ such that $\pi_x(y_0)=u$ and $d(\pi_x(y_0),\pi_x(y))<\varepsilon_k$. Hence $d(\pi_y(y_0),0_y)=d(\pi_y\circ\pi_x(y_0),\pi_y\circ\pi_x(y))<\beta_k$. By using the Local injectivity property, $d(\varphi_t(y_0),y)<\varkappa_k/2$ for some $t\in[-1/2,1/2]$. Set $y'=\varphi_t(y_0)$. By Local invariance, we have that $\pi_x(y')=u$.
\item\label{i.lift} By letting $k\to\infty$ and $\beta_k\to 0$, one deduces that for any $u\in J$ and any $y\in {\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ such that $\pi_{x}(y)$ is close to $u$, there exists $y'$ close to $y$ such that $\pi_{x}(y')=u$. \end{enumerate}
We build inductively an increasing sequence of finite sets $Y_k$ in ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ satisfying: \begin{itemize} \item[--] $\pi_{x}$ is injective on $Y_k$, \item[--] for any $y,y'\in Y_k$ such that $\pi_{x}(y)$, $\pi_{x}(y')$ are consecutive points of $\pi_{x}(Y_k)$ in $J$, $$d(\pi_{x}(y),\pi_{x}(y'))<\varepsilon_k.$$ \item[--] if moreover $y''\in Y_{k+1}$ satisfies that $\pi_{x}(y'')$ is between $\pi_{x}(y)$ and $\pi_{x}(y')$ in $J$, then $y''$ is $2^{-k}r_0$-close to $y$ and $y'$. \end{itemize} Let us explain how to obtain $Y_{k+1}$ from $Y_k$. We fix $y,y'\in Y_k$ such that $\pi_{x}(y)$, $\pi_{x}(y')$ are consecutive points of $\pi_{x}(Y_k)$ in $J$ and define the points in $Y_{k+1}$ which project between $\pi_{x}(y)$, $\pi_{x}(y')$. By items~\ref{i.epsilon}, we introduce a finite set $y''_1,y''_2,\dots,y''_j$ of points in $B(y, \varkappa_k/2)$ such that
\begin{itemize}
\item[--] $y_1''=y$ and $y_j''=y'$, \item[--] $\pi_{x}\{y_1'',y_2'',\dots,y_j''\}$ is $\varepsilon_{k+1}$-dense in the arc of $J$ bounded by $\pi_{x}(y)$ and $\pi_{x}(y')$, \item[--] $\pi_{x}(y''_i),\pi_{x}(y''_{i+1})$ are consecutive points of the projections in $J$ for $i\in \{1,\dots,j-1\}$,
\end{itemize}
The distance of $y_i''$ and $y_{i+1}''$ may be larger than $r_0 2^{-k-2}$. We need to modify the construction. By Item~\ref{i.epsilon}, there is $t_i\in[-1,1]$ such that $z_i=\varphi_{t_i}(y_i'')$ is $\varkappa_{k+1}/2$-close to $y_{i+1}''$. Choose $n\in\NN$ large enough. Consider the points $\{\varphi_{mt_i/n}(y_i'')\}_{m=0}^n$. By using the item~\ref{i.lift} there exists a finite collection $X_i$ of points that are arbitrarily close to the set $\{\varphi_{mt_i/n}(y_i'')\}_{m=0}^n$ such that they have distinct projection by $\pi_{x}$ and such that any two such point with consecutive projections are $2^{-k-2}r_0$-close. By the item~\ref{i.delta}, the set $X_i$ is contained in $B(y,2^{-k-1}r_0)$. Since $d(y,y')<2^{-k-1}r_0$, it is also contained in $B(y',2^{-k}r_0)$. The set of points of $Y_{k+1}$which project between $\pi_{x}(y)$, $\pi_{x}(y')$ is the union of the $\{y_i'',y_{i+1}''\}\cup X_i$ for any $i$.
Let us define $Y=\cup Y_k$. The restriction of $\pi_{x}$ to $Y$ is injective and has a dense image in a non-trivial interval $J'$ contained in $\operatorname{Interior}(J)$. Its inverse $\chi$ is uniformly continuous: indeed for any $k$, any $y,y'\in Y_k$ with consecutive projection, and any $y''\in Y$ such that $\pi_{x}(y'')$ is between $\pi_{x}(y)$, $\pi_{x}(y')$, the distance $d(y'',y)$ is smaller than $2^{-k+1}r_0$. As a consequence $\chi$ extends continuously to $J'$ as a homeomorphism such that $\pi_{x}\circ \chi=\operatorname{Id}_{J'}$. The curve $\bar \gamma$ is the image of $\chi$. \end{proof}
Let $\gamma$ be the open curve obtained by removing the endpoints of $\overline \gamma$. By the Local invariance, for $\varepsilon>0$ small, $\varphi$ is injective on $[-\varepsilon,\varepsilon]\times \overline\gamma$; its image is contained in ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ and is homeomorphic to the ball $[0,1]^2$. This is because a continuous bijective map from a compact space to a Hausdorff space is a homeomorphism. Thus $\varphi$ is a homeomorphism from $(-\varepsilon,\varepsilon)\times \gamma$ to its image.
\begin{Claim-numbered}\label{c.open} The set $\varphi_{(-\varepsilon,\varepsilon)}(\gamma)$ is open in $\cT$. \end{Claim-numbered} \begin{proof} Let us fix $z_0\in \varphi_{t}(\gamma)$ for some $t\in (-\varepsilon,\varepsilon)$ and let us consider any point $z\in \cT$ close to $z_0$. We have to prove that $z$ belongs to the open set $\varphi_{(-\varepsilon,\varepsilon)}(\gamma)$. Note that $d(z, x)<r_0$ and $\pi_{x}(z)$ belongs to $R(J)$. By Claim~\ref{c.return} and the definition of $\cT$, the point $z$ has a negative iterate $z'=\varphi_{-s}(z)$ in ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$, with $s$ bounded, so that $\pi_x(z')\in J$.
One deduces that $\pi_{x}(z)$ belongs to $J$. Otherwise, $\pi_x(z)\in R(J)\setminus J$. Since $z$ and $x$ have arbitrarily large returns in the $r_0$-neighborhood of $K\setminus V$ (Claim~\ref{c.large-returns}), the Proposition~\ref{p.coherence} can be applied and $\pi_x\circ P_{s}\circ \pi_{z'}(J)$ is disjoint from $J$. Since $s$ is bounded, the distance $d(\pi_x\circ P_{s}\circ \pi_{z'}(J),J)$ is bounded away from zero. We have $\pi_x(z_0)\in J$ whereas $\pi_x(z)\in P_{s}\circ \pi_{z'}(J)$ which contradicts the fact that $z$ is arbitrarily close to $z_0$.
Since $z$ is close to $z_0$ and the curve $\gamma$ is open, we have $\pi_{x}(z)\in \pi_{x}(\gamma)$
and $z\in \varphi_{t'}(\gamma)$ for some $|t'|<1/2$ by the Local injectivity.
By the Local invariance $\varphi_{-t}(z)$ and $\varphi_{-t'}(z)$
have the same projection by $\pi_{x}$. If $z$ is close enough to $z_0$, both are arbitrarily close to $\gamma$. Since $\pi_{x}$ is injective on $\gamma$ one deduces that $d(\varphi_{-t}(z),\varphi_{-t'}(z))$ is arbitrarily small. By the ``No small period" assumption, this implies that $|t'-t|$ is arbitrarily small. Hence $|t'|<\varepsilon$ proving that $z\in \varphi_{(-\varepsilon,\varepsilon)}(\gamma)$, as required. \end{proof}
By Claim~\ref{c.return}, any point $z\in \cT$ has a backward iterate in ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}$ which project by $\pi_{x}$ in $\pi_{x}(\gamma)$, hence by Local injectivity has an iterate $\varphi_{-t}(z)$ in $\gamma$. Since $\varphi_{-t}$ is a homeomorphism, one deduces that $z$ has an open neighborhood of the form $\varphi_{t}(\varphi_{(-\varepsilon,\varepsilon)}(\gamma))$ which is homeomorphic to the ball $(0,1)^2$. As a consequence, $\cT$ is a compact topological surface.
Since any forward and backward orbit of $\cT$ meets the small open set $\varphi_{(-\varepsilon',\varepsilon')}(\gamma')$, for any $\varepsilon'\in (0,\varepsilon)$ and $\gamma'\subset \gamma$, the dynamics induced by $(\varphi_{t})$ on $\cT$ is minimal.
From classical results on foliations on surfaces (see~\cite[Theorem 4.1.10, chapter I]{hector-hirsch-foliation}), we get: \begin{Claim-numbered} $\cT$ is homeomorphic to the torus ${\mathbb T}^2$ and the induced dynamics of $(\varphi_t)_{t\in \RR}$ on $\cT$ is topologically conjugated to the suspension of an irrational rotation of the circle. \end{Claim-numbered}
By Claim~\ref{c.open}, $\varphi_{(-\varepsilon,\varepsilon)}(\gamma)$ is a neighborhood of $x$ in $\cT$ and $\pi_{x}(\gamma)$ is contained in $\cW^{cs}(x)$, hence is a $C^1$-curve tangent to ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ at $0_{x}$. Hence $\cT$ is a normally expanded irrational torus. Moreover $\pi_{x}(\cT\cap B(x,r_0/2))$ contains $J$ by Claim~\ref{c.surjective}. This ends the proof of Proposition~\ref{p.aperiodic}. \end{proof}
\subsection{Proof of the topological stability}\label{ss.Lyapunov} We prove Proposition~\ref{Prop:dynamicsofinterval} and Lemma~\ref{l.periodic} that imply the topological stability in Section~\ref{sss.lyapunov-stable}.
\begin{proof}[Proof of Proposition~\ref{Prop:dynamicsofinterval}] By Proposition~\ref{p.limit}, there is a sequence of $\delta$-intervals $(\widehat I_k)$ at $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss backward iterates $\varphi_{-n_k}(x)$ in $K\setminus V$ and converging to a $\delta$-interval $I_\infty$ at a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss point $x_\infty$.
Let us assume first that all large returns of $I_\infty$ are shifting. By Proposition~\ref{p.shifting-return}, $I_\infty$ has two deep sequences of returns, one shifting to the right and one shifting to the left. Proposition~\ref{p.aperiodic0} then implies that $\alpha(x_\infty)$ contains a point $x'\in K\setminus V$ with an aperiodic $\delta$-interval. By Proposition~\ref{p.aperiodic}, the orbit of $x'$ is contained in a normally expanded irrational torus $\cT$. We have proved that the first case of Proposition~\ref{Prop:dynamicsofinterval} holds. \hspace{-1cm}\mbox{}
Let us assume then that $I_\infty$ admits arbitrarily large non-shifting returns. Proposition~\ref{p.periodic-return} implies that $I$ is contained in the unstable set of some periodic $\delta$-interval $J$. \end{proof}
\begin{proof}[Proof of Lemma~\ref{l.periodic}] Let us consider the rectangle $R(I)$: it is mapped into itself by $P_{-2s}$, where $s$ is the period of $q$. By assumption (A3), ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is contracted over the orbit of $q$. One deduces that there exists a neighborhood $B$ of $0_q$ in $R(I)$ such that for any $u\in B\setminus \cW^{cu}(0_q)$, the backward orbit of $u$ by $(P_t)$ converges towards a periodic point of $I$ which is not contracting along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. So Lemma~\ref{l.closing0} implies that $\pi_q(K)$ is disjoint from $B\setminus \cW^{cu}(0_q)$.
For any interval $L\subset \cN_z$ as in Lemma~\ref{l.periodic}, the point $\pi_q(z)$ does not belong to $B\setminus \cW^{cs}(0_q)$. Hence $\pi_q(L)$ contains an interval $J$ which is contained in $B$, meets $\cW^{cu}(0_q)$ and has a length larger than some constant $\chi_0>0$. The backward iterate $P_{-2s}(J)$ still contain an interval $J'$ having this property since $q$ is attracting along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
Since $\pi_q(J)$ intersects $R(I)$, there exists $\theta\in \operatorname{Lip}$ such that $d(\varphi_{-t}(z),\varphi_{-\theta(t)}(q))$ remain small for any $t\in [0,T]$. By the Global invariance, if $k$ is the largest integer such that $\theta(T)>ks$, then $\varphi_{-\theta^{-1}(ks)}(L))$ contains an interval of length larger than $\chi_0/2$. Since $T_\theta^{-1}(T)$ is bounded, this implies that $\varphi_{-T}(L)$ has length larger than some constant $\chi>0$. \end{proof}
\subsection{Proof of the topological contraction} We prove a proposition that will allow us to conclude the topological contraction.
\begin{Definition} $K$ admits \emph{arbitrarily small periodic intervals} if for any $\delta>0$, there is a periodic point $p\in K$, whose orbit supports a periodic $\delta$-interval. \end{Definition}
\begin{Proposition}\label{Pro:smallperiodic-interval}
Under the assumptions of Theorem~\ref{Thm:topologicalcontracting}, if $K$ does not contain a normally expanded irrational torus, if it admits arbitrarily small periodic intervals and is transitive, then there are $C_0>0$, $\varepsilon_0>0$, and a non-empty open set $U_0\subset K$ such that for any $z\in U_0$, we have
$$\sum_{k\in\NN}|P_k(\cW^{cs}_{\varepsilon_0}(z))|<C_0.$$
\end{Proposition}
In the next 3 sections, we assume that the setting of Proposition~\ref{Pro:smallperiodic-interval} holds. We also assume that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is not uniformly contracted (since otherwise Proposition~\ref{Pro:smallperiodic-interval} holds by Lemma~\ref{l.summability-hyperbolicity}). Proposition~\ref{Pro:smallperiodic-interval} is proved in Section~\ref{ss.sum}, and then Theorem~\ref{Thm:topologicalcontracting} is proved in Section~\ref{ss.conclusion-topological}.
\subsubsection{The unstable set of periodic points}
\begin{Lemma}\label{l.unstable-set} For any $\beta>0$, there exist: \begin{itemize} \item[--] a periodic point $p\in K\setminus V$ (with period $T$), \item[--] a point $x\in K\setminus \{p\}$ which is $r_0/2$-close to $p$ and whose $\alpha$-limit set is the orbit of $p$, \item[--] $r_x>0$ and a connected component $Q$ of $B(0_x,r_x)\setminus \cW^{cu}(x)$ in $\cN_x$ {such that $Q\cap \pi_x(K)=\emptyset$ and the diameter of $P_{-t}(Q)$ is smaller than $\beta$ for each $t>0$.} \end{itemize} \end{Lemma}
\begin{proof} Since $K$ admits arbitrarily small periodic intervals, there is a periodic point $p\in K$ with period $T>0$ and a periodic $\delta$-interval $I\subset {\cal N}_p$ for $\delta$ small. Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted in $V\supset K\setminus U$ (assumption (A2)), one can replace $p$ by one of its iterates so that $p\in K\setminus V$. By Lemma~\ref{Lem:hyperbolicreturns} (and the continuity of $(P_t)$), \begin{itemize} \item[--] the restriction of ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ to the orbit of $I$ by ($P_t)$ is an expanded bundle. \end{itemize} Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over the orbit of $0_p$ (by (A3)), one can assume that:
\begin{itemize} \item[--] Only the endpoints of $I$ are fixed by $P_{T}$. One is $0_p$, the other one attracts any point of $I\setminus \{0_p\}$ by negative iterations of $P_T$.
\item[--] There is $\beta_p>0$ such that for any $u\in \cN_p$ satisfying $\|P_{-t}(u)\|<\beta_p$ for each $t>0$, then $u$ is in the unstable manifold of $0_p$ for $P_t$. \end{itemize} Let $r_p>0$ such that $B(p,r_p)\subset U$ and $P_{-t}\circ \pi_p(B(p,r_p))\subset B(0_{\varphi_{-t}(p)},\beta_p)$ for each $t\in [0,T]$.\hspace{-1cm}\mbox{}
Since $K$ is transitive, there are sequences $(x_n)$ in $K$ and $(t_n)$ in $(0,+\infty)$ such that \begin{itemize} \item[--] $\lim_{n\to\infty}x_n=p$ and $\lim_{n\to\infty}t_n=+\infty$, \item[--] $d(\varphi_{t}(x_n), \operatorname{Orb}(p))<r_p$ for $t\in (0,t_n)$ and $d(\varphi_{t_n}(x_n), \operatorname{Orb}(p))=r_p$ for each $n$. \end{itemize} Taking a subsequence if necessary, we let $x:=\lim \varphi_{t_n}(x_n)$. By the Global invariance, $P_{-t}\circ \pi_p(x)\subset B(0_{\varphi_{-t}(p)},\beta_p)$ for each $t>0$, hence $\pi_p(x)$ lies in the unstable manifold of $0_p$. Combined with the Local injectivity, there exists $\theta\in \operatorname{Lip}$ such that $\theta(0)=0$ and $d(\varphi_{\theta(t)}(x),\varphi_t(p))\to 0$ as $t\to -\infty$. Hence the $\alpha$-limit set of $x$ is $\operatorname{Orb}(p)$.
We now consider the dynamics of $P_{T}$ in restriction to $\cN_p$. The periodic interval $I$ is normally expanded. Consequently, for $r_x$ small, one of the components $Q$ of $B(0_x,r_x)\setminus \cW^{cu}(x)$ in $\cN_x$ has an image by $\pi_x$ contained in the unstable set of $I\setminus \{0_p\}$.
Let us assume by contradiction that there exists $y\in U$ such that $\pi_x(y)\in Q$. The backward orbit of $\pi_p(y)$ by $P_T$ converges to the endpoint $v$ of $I$ which is not attracting along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. Lemma~\ref{l.closing0} and the Global invariance imply that the backward orbit of $y$ converges to a periodic orbit in $K$ whose eigenvalues at the period for the fibered flow coincide with the eigenvalues at the period of the fixed point $v$ for $P_T$. This is a contradiction since all the eigenvalues of $v$ are non-negative whereas ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over the periodic orbits of $K$. So $Q$ is disjoint from $\pi_x(K)$.
One can choose $p$ and $I$ such that any iterates $P_{-t}(I)$ has diameter smaller than $\beta/2$. Reducing $r_x$, one can ensure that the iterates of $P_{-t}(Q)$ have diameter smaller than $\beta$ until some time, where it stays close to the orbit of $I$ and has diameter smaller than $2\sup_t\operatorname{Diam}(P_{-t}(I))$. This concludes the proof of the lemma. \end{proof}
\subsubsection{Wandering rectangles}\label{ss.wandering-rectangles} One chooses $\delta\in (0,r_0/2)$, $\beta,\varepsilon>0$, and a component $Q$ as in Lemma~\ref{l.unstable-set} such that \begin{itemize} \item[--] if $\varphi_{-t}(x)$, $t>1$, is $2\delta$-close to $x$, then $Q\cap \pi_x(P_{-t}(Q))=\emptyset$; \item[--] if $y,z\in K$ are close to $x$ and $\theta\in \operatorname{Lip}$ satisfies $d(\varphi_{\theta(t)}(y),\varphi_t(z))<\delta$ for $t\in [-2,0]$, then $\theta(0)-\theta(-2)>3/2$; \item[--] $\beta>0$ associated to a shadowing at scale $\delta$ as in the Global invariance (Remarks~\ref{r.identification}.e); \item[--] for any point $z$, the forward iterates of $\cW^{cs}_\varepsilon(z)$ have length smaller than $\beta/3$ and than
the constant $\delta_E$ given by Lemma~\ref{l.summability-hyperbolicity}
(this is possible since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically stable); \item[--] the backward iterates of $Q$ have diameter smaller than $\beta/2$. \end{itemize}
For $z$ close to $x$, one considers the closed curve $J(z)\subset \cW^{cs}(z)$ of length $\varepsilon$ bounded by $0_z$, such that $\pi_x(J(z))$ intersects $Q$. Since $\pi_x(z)$ does not belong to $Q$ (by Lemma~\ref{l.unstable-set}), the unstable manifold of $0_p$ intersects $\pi_p(J(z))$, defining two disjoint arcs $J(z)=J^0(z)\cup J^1(z)$ such that $J^1(z)$ is bounded by $0_z$, disjoint from $Q$ and $\pi_x(J^0(z))\subset Q$.
Let $H(z)$ denote the set of integers $n\geq 0$ such that $\varphi_n(z)\in K\setminus V$ and $(z,\varphi_n(z))$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss. For $n\in H(z)$, we set $J_n(z)=P_n(J(z))$ and $J^{0}_n(z)=P_n(J^{0}(z))$.
\begin{Lemma}\label{l.construction-rectangle} There is $C_R>0$ with the following property. For any $z$ close to $x$ and any $n\in H(z)$, there exists a rectangle $R_n(z)\subset \cN_{\varphi_n(z)}$ which is the image of $[0,1]\times B_{d-1}(0,1)$ by a homeomorphism $\psi_n$ such that (see Figure~\ref{f.topological-hyperbolicity}): \begin{enumerate}
\item\label{i.construction-rectangle} $\text{Volume\;}(R_n(z))>C_R.|J^0_n(z)|$, \item the preimages $P_{-t}(R_n(z))$ for $t\in [0,n]$ have diameter smaller than $\beta/2$, \item $\pi_x\circ P_{-n}(R_n(z))$ is contained in $Q$. \end{enumerate} \end{Lemma} \begin{figure}\label{f.topological-hyperbolicity}
\end{figure} \begin{proof} The construction is very similar to the proof of Proposition~\ref{p.rectangle}. Let us fix $\alpha'>0$ and $\alpha_{min}>0$ much smaller. One considers a rectangle in $\cN_z$ whose interior projects by $\pi_x$ in $Q$, given by a parametrization $\psi_0$ such that $\psi_0([0,1]\times \{0\})=J^0(z)$ and each disc $\psi_0(\{u\}\times B_{d-1})$ is tangent to the center-unstable cones, has diameter smaller than $\alpha'$ and contains a center-unstable ball centered at $J^0(z)$ and with radius much larger than $\alpha_{min}$. Moreover $\pi_p\circ \psi_0(\{0_p\}\times B_{d-1})$ is contained in the unstable manifold of $p$.
Since the center-unstable cone-field is invariant, at any iterate $\varphi_n(z)$ such that $(z,\varphi_n(z))$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss, one can build a similar rectangle $R_n$ such that $P_{-t}(R_n)$, $0<t<n$, has center-unstable disc of diameter smaller than $\alpha'$ and $P_{-n}(R_n)\subset R_0$.
Since the forward iterates of $J^0(z)$ have length smaller than $\beta/3$, by choosing $\alpha'$ small enough, one guaranties that the $P_{-t}(R_n)$, $0<t<n$, have diameter smaller than $\beta/2$, and that $P_{-n}(R_n)\subset R_0(z)$ are contained in $\pi_z(Q)$. The estimate on the volume is obtained from Fubini theorem and the distortion estimate given by Proposition~\ref{p.distortion}. \end{proof}
\begin{Lemma}\label{l.rectangle-disjoint} For each $z$ close to $x$ and each $n< m$ in $H(z)$, if $d(\varphi_n(z),\varphi_m(z))<\delta$, then $\pi_{\varphi_n(z)}(R_m(z)) \cap R_n(z)=\emptyset$. \end{Lemma} \begin{proof} Let us assume by contradiction that $\pi_{\varphi_m(z)}(R_m(z))$ and $R_n(z)$ intersect.
Since $P_{-t}(R_n(z)\cup J_n(z))$, $t\in [0,n]$, and $P_{-t}(R_m(z)\cup J_m(z))$, $t\in [0,m]$, have diameter smaller than $\beta$, the Global invariance (Remark~\ref{r.identification}.(e)) applies: there is $\theta\in \operatorname{Lip}$ such that \begin{itemize}
\item[--] $|\theta(n)-m|<1/4$, \item[--] for any $t\in [0,n]$, one has $d(\varphi_{t}(z),\varphi_{\theta(t)}(z))<\delta$, \item[--] $\pi_x\circ P_{\theta(0)-m}(R_m(z))$ intersects $\pi_x\circ P_{-n}(R_n(z))$, hence $Q$. \end{itemize} In particular $\theta(n)>n+1/2$ and Proposition~\ref{p.no-shear} gives $\theta(0)>2$.
Since the backward iterates of $Q$ by $P_t$ have diameter smaller than $\beta$, the Global invariance (item (e) in Remarks~\ref{r.identification})
can be applied to the points $x$ and $\varphi_{\theta(0)}(z)$. It gives $\theta'\in \operatorname{Lip}$ with $|\theta'(\theta(0))|\leq 1/4$ such that $d(\varphi_{\theta'(t)}(x),\varphi_{t}(z))<\delta$ for each $t\in [0,\theta(0)]$. Moreover $\pi_x\circ P_{\theta'(0)}(Q)$ intersects $R_0(z)$, hence $Q$. We have $d(\varphi_{\theta'(0)}(x),x)<2\delta$ and $1/4>\theta'(\theta(0))>\theta'(0)+3/2$ by our choice of $\delta$ at the beginning of Section~\ref{ss.wandering-rectangles}
We proved that $\pi_x\circ P_{-t}(Q)\cap Q\neq \emptyset$ for some $t>1$ such that $d(\varphi_{-t}(x),x)<2\delta$. This contradicts the choice of $Q$. The rectangles $\pi_{\varphi_n(z)}(R_m(z))$ and $R_n(z)$ are thus disjoint. \end{proof}
As a consequence of Lemma~\ref{l.rectangle-disjoint}, one gets
\begin{Corollary}\label{c.volume-bounded} There exits $C_H>0$ such that for any $z$ close to $x$,
$$\sum_{n\in H(z)}|J^0_{n}(z)|< C_H.$$ \end{Corollary} \begin{proof} As in the proof of Lemma~\ref{Sub:disjointcase}, one fixes a finite set $Z\subset U$ such that any point $z\in U$ is $\delta/2$-close to a point of $Z$. Let $C_{Vol}$ be a bound on the volume of the balls $B(0_z,\beta_0)\subset \cN_z$ over $z\in K$ and let $C_H=2C_R^{-1}C_{Vol}\text{Card} (Z)$. Since identifications are $C^1$, up to reduce $r_0$, one can assume that the modulus of their Jacobian is smaller than $2$.
The statement now follows from the item~\ref{i.construction-rectangle} of Lemma~\ref{l.construction-rectangle} and the disjointness of the rectangles (Lemma~\ref{l.rectangle-disjoint}). \end{proof}
\subsubsection{Summability. Proof of Proposition~\ref{Pro:smallperiodic-interval}}\label{ss.sum}
\begin{Lemma}\label{l.summability-topological-hyperbolicity} There exists $C_{sum}>0$ such that for any point $z$ close to $x$, we have
$$\forall n\geq 0,~~~\sum_{k=0}^n |P_k(J^0(z))|< C_{sum}.$$ \end{Lemma}
\begin{proof}
Let us denote by $n_0=0<n_1<n_2<\dots$ the integers in $H(z)$. By Proposition~\ref{l.summability}, for any $i$, the piece of orbit $(\varphi_{n_i}(z),\varphi_{n_{i+1}}(z))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
Let $\delta_E, C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ be the constants associated to $C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ by Lemma~\ref{l.summability-hyperbolicity}. We have built $J(z)$ such that any forward iterate has length smaller than $\delta_E$. Hence Lemma~\ref{l.summability-hyperbolicity} implies that
$$\sum_{k=n_i}^{n_{i+1}} |P_k(J^0(z))|< C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}'|J^0_{n_{i+1}}|.$$ With Corollary~\ref{c.volume-bounded}, one deduces
$$\sum_{k=0}^n |P_k(J^0(z))|< C_{Sum}:=C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}'C_{H}.$$ \end{proof}
We can now end the proof of the proposition.
\begin{proof}[Proof of the Proposition~\ref{Pro:smallperiodic-interval}]
Let $\eta_S>0$ be the constant associated to $C_{Sum}$ by Lemma~\ref{Lem:schwartz}. If $z$ belongs to a small neighborhood $U_0$ of $x$ and if $\varepsilon_0$ is small enough, the intervals $\cW^{cs}_{\varepsilon_0}(z)$ and $J^0(z)$ are both contained in an interval $\widehat J(z)\subset \cW^{cs}(z)$
such that $|\widehat J(z)|\leq (1+\eta_S)|J^0(z)|$. Combining Lemma~\ref{Lem:schwartz} with Lemma~\ref{l.summability-topological-hyperbolicity}, one gets
$$\sum_{k=0}^n |P_k(\cW^{cs}_{\varepsilon_0}(z))|\leq \sum_{k=0}^n |P_k(\widehat J(z))|< 2\sum_{k=0}^n |P_k(J^0(z))|<C_0:=2C_{sum}.$$ \end{proof}
\subsubsection{Proof of Theorem~\ref{Thm:topologicalcontracting}}\label{ss.conclusion-topological} We consider the set $K$ as in the Theorem~\ref{Thm:topologicalcontracting}, and we assume by contradiction that none of the three properties in the statement of the theorem holds. In particular $K$ does not contain a normally expanded irrational torus.
\begin{Claim} $K$ is transitive. \end{Claim} \begin{proof} Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is not uniformly contracted, there exists an ergodic measure whose Lyapunov exponent along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is non-negative. Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on each proper invariant subset of $K$, the support of the measure coincides with $K$. Hence $K$ is transitive. \end{proof}
Let us fix $\delta>0$ arbitrarily small. Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topological stable, there is $\varepsilon>0$ such that for any $x\in K$ and any $t>0$, one has $$P_t({\cal W}^{cs}_{\varepsilon}(x)) \subset {\cal W}^{cs}_{\delta}(\varphi_t(x)).$$ Since the topological contraction fails, there are
$(x_n)$ in $K$, $(t_n)\to +\infty$ and $\chi>0$ such that
$$\chi<|P_{t_n}({\cal W}^{cs}_{\varepsilon}(x_n))| \text{ and } |P_{t}({\cal W}^{cs}_{\varepsilon}(x_n))|<\delta,~\forall t>0.$$ Let $I=\lim_{n\to\infty}P_{t_n}({\cal W}^{cs}_{\varepsilon}(x))$. It is a $\delta$-interval and by Proposition~\ref{Prop:dynamicsofinterval}, it is contained in the unstable set of a periodic $\delta$-interval since $K$ contains no normally expanded irrational tori. This proves that $K$ admits arbitrarily small periodic intervals and Proposition~\ref{Pro:smallperiodic-interval} applies.
One gets a non-empty open set $U_0\subset K$ such that at any $z\in U_0$ a summability holds in the ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ direction. With Lemma~\ref{Lem:schwartz}, one deduces that for any $x\in U_0$,
$$\lim_{n\to\infty}\|DP_n|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\|=0.$$ Now for any $z\in K$, \begin{itemize}
\item[--] either there is $t>0$ such that $\varphi_t(z)\in U_0$ and then $\lim_{n\to\infty}\|DP_{n}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(z)\|=0$;
\item[--] or the forward orbit of $z$ does not meet $U_0$, then ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is contracted on the proper invariant compact set $\omega(z)$ and we also have $\lim_{n\to\infty}\|DP_{n}|{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(z)}\|=0$. \end{itemize} By using a compactness argument, one deduces that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on $K$. This contradicts our assumptions on $K$. Theorem~\ref{Thm:topologicalcontracting} is now proved. \qed
\section{Markovian boxes}\label{s.markov} We will build boxes with a Markovian property for $C^2$ local fibered flow having a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ with two-dimensional fibers, such that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted.
\noindent {\bf Standing assumptions.} Keep assumptions (A1), (A2), (A3) of Section~\ref{s.topological-hyperbolicity} and add furthermore: \begin{enumerate} \setcounter{enumi}{3} \item[(A4)] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is one-dimensional, \item[(A5)] there exists an ergodic measure $\mu$ for $(\varphi_t)$ whose Lyapunov exponent along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is positive, and whose support is not a periodic orbit and intersects $K\setminus \overline V$. \end{enumerate}
\subsection{Existence of Markovian boxes}\label{ss.existence-boxes}
We fix a non-periodic point $x\in K\setminus \overline V$ in the support of $\mu$. In particular taking $r_0$ small enough, the ball $U(x,r_0)$ centered at $x$ and with radius $r_0$ in $K$ is contained in $U$. We also denote $\mu_x=(\pi_x)_{*}(\mu|_{U(x,r_0)})$ and fix some $\beta_x>0$.
\begin{Definition} A \emph{box} $B\subset \cN_x$ is the image by a homeomorphism $\psi$ such that: \begin{itemize} \item[--] $\partial^{\cal F}B:=\psi(\{0,1\}\times [0,1])$ is $C^1$, tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ (and called the \emph{${\cal F}$-boundary}), \item[--] $\partial^{\cal E}B:=\psi([0,1]\times \{0,1\})$ is $C^1$, tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ (and called the \emph{${\cal E}$-boundary}). \end{itemize} A \emph{center-stable sub-box} (resp. \emph{center-unstable sub-box}) is a box $B'\subset B$ such that $$\partial^{\cal F}B'\subset \partial^{\cal F}B \text{ (resp. } \partial^{\cal E}B'\subset \partial^{\cal E}B \text{).}$$ \end{Definition} In particular a box is a rectangle as defined in Section~\ref{ss.rectangle}.
\begin{Definition} Let us fix some constants $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}>1$.
\noindent A \emph{transition} between boxes $B,B'\subset \cN_x$ is defined by some $y\in K$ and $t>2$ such that: \begin{itemize} \item[--] $y$ and $\varphi_t(y)$ are $r_0/2$-close to $x$, \item[--] $(y,\varphi_t(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, \item[--] $y$ projects by $\pi_x$ in the interior of $B$ and $\varphi_t(y)$ in the interior of $B'$. \end{itemize}
\noindent Two boxes $B,B'$ have \emph{Markovian transitions} if for any transition $(y,t)$ between $B$ and $B'$, there exist a center-stable sub-box $B^{cs}\subset B$ and a center-unstable sub-boxes $B^{cu}\subset B'$ whose interior contain $\pi_x(y)$ and $\pi_x(\varphi_t(y))$ respectively, such that $\pi_xP_t\pi_y(B^{cs})=B^{cu}.$ \end{Definition} The remainder of Section~\ref{s.markov} is devoted to prove the following main result of this section. \begin{Theorem}[Existence of Markovian boxes]\label{t.existence-box} Under the assumptions above, there exists a box $R$ in $B(0,\beta_x)\subset \cN_x$ whose interior has positive $\mu_x$-measure, such that for any $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}>1$ (defining the transitions) and $\beta_{box}>0$, the box $R$ contains finitely many boxes $B_1,\dots,B_k$ and $t_{box},\Delta_{box}>0$ satisfying the following properties: \begin{enumerate} \item\label{i.markov-boundary} Any $z$ which is $r_0/2$-close to $x$ and whose projection $\pi_x(z)$ belongs to $\partial^{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}R$ (resp. to $\partial^{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}R$) has a forward orbit (resp. a backward orbit) which accumulates on a periodic orbit of $K$. \item\label{box1} The boxes $B_1,\dots,B_k$ have disjoint interiors, their union contains $R\cap \pi_x(K)$, and their boundaries have zero measure for $\mu_x$. \item\label{box2} Any transition $(y,t)$ with $t>t_{box}$ between any boxes $B_i,B_j$ is Markovian. \item\label{box3} The sub-boxes $B^{cs},B^{cu}$ associated to any transition $(y,t)$ with $t>t_{box}$ between any $B_i,B_j$ satisfy: \begin{itemize} \item[--] $Diam(P_s\circ \pi_y(B^{cs}))<\beta_x$ for any $s\in (0,t)$, \item[--] $Diam(P_s\circ \pi_y(B^{cs}))<\beta_{box}$ for any $s\in (t_{box},t)$, \item[--] $B^{cu}$ has distorsion bounded by $\Delta_{box}$. \end{itemize} \item\label{box4} For any two transitions $(y_1,t_1)$ and $(y_2,t_2)$ with $t_1, t_2>t_{box}$ such that the interiors of the sub-boxes $B^{cu}_1$, $B^{cu}_2$ intersect, we have $B^{cu}_1\subset B^{cu}_2$ or $B^{cu}_2\subset B^{cu}_1$.
More precisely, $B^{cu}_2\subset B^{cu}_1$ holds if there exists $\theta\in \operatorname{Lip}_2$ satisfying \begin{itemize} \item[--] $\theta(t_1)\geq t_2-2$, $\theta^{-1}(t_2)\geq t_1-2$ and $\theta(0)\geq -1$, \item[--] $d(\varphi_t(y_1), \varphi_{\theta(t)}(y_2))<r_0/2$ for $t\in [0,t_1]\cap \theta^{-1}([0,t_2])$. \end{itemize}
Up to exchange $(y_1,t_1)$ and $(y_2,t_2)$, there exists such a $\theta$ satisfying $|\theta(t_1)-t_2|\leq 1/2$. \item\label{box5} For any two transitions $(y_1,t_1)$ and $(y_2,t_2)$ with $t_1, t_2>t_{box}$ such that $y_1=y_2$ and $t_2>t_1+t_{box}$, we have $B^{cs}_2\subset B^{cs}_1$. \end{enumerate} \end{Theorem}
\subsection{Construction of boxes} \subsubsection{Notations, choices of constants}\label{ss.constants} One will consider some small numbers $\alpha_0,\eta, \alpha_x,\beta_x>0$, that are chosen in this order, according to the properties stated in this subsection. The constant $\alpha_0$ will bound the size of the plaques. In the whole Section~\ref{s.markov}, one will work with generalized orbits $\bar u=(u(t))$ in the $\eta$-neighborhood of $K$. The number $\alpha_x$ controls the hyperbolicity inside the center-unstable plaques. At last, $\beta_x>0$ is the constant introduced at the beginning of Section~\ref{ss.existence-boxes}, that can be reduced to be much smaller than the other quantities.
\paragraph{The plaques $\cW^{cs},\cW^{cu}$.}
\begin{Lemma}\label{l.plaques-box}
There exist center-stable and center-unstable plaques $\cW^{cs}(\bar u),\cW^{cu}(\bar u)$ that have length smaller than $\alpha_0$, depend continuously on $\bar u$ and satisfy moreover: \begin{itemize} \item[--] The center-unstable plaques are \emph{locally invariant}: there is $\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\in (0,\alpha_0)$ such that $$\bar P_{-1}(\cW^{cu}_{\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}(\bar u))\subset \cW^{cu}(\bar P_{-1}(\bar u)).$$ \item[--] The center-unstable plaques are \emph{coherent}: the statement of Proposition~\ref{p.coherence} holds for $\cW^{cu}$, the constants $\eta,\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ (and the flow $(P_{-t})$). \item[--] The center-stable plaques are \emph{trapped for time $s>0$}: $$\forall s>0,~~~\bar P_s(\overline{\cW^{cs}(\bar u)})\subset \cW^{cs}(\bar P_s(\bar u)).$$ \item[--] The center-stable plaques are \emph{coherent}: let $\bar u,\bar v$ be any generalized orbits with $u(0)\in \cN_y$, $v(0)\in \cN_{y'}$ such that $y,y'$ are $r_0/2$-close to $x$ and the projection $(y(t))_{t\in \RR}$ of $\bar u$ in $K$ has arbitrarily large positive iterates in the $r_0$-neighborhood of $K\setminus V$; if $\pi_x (\cW^{cs}(\bar u))$ and $\pi_x (\cW^{cs}(\bar v))$ intersect, then they are contained in a same $C^1$ curve. \end{itemize} \end{Lemma}
\begin{proof} The first item is given by Proposition~\ref{t.generalized-plaques}. It gives also a locally-invariant center-stable plaque family $\cW^{cs,0}$. The second item is obtained by Proposition~\ref{p.coherence} directly.
Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted over the $0$-section, there exists $\varepsilon>0$ and $T_*>0$ such that $\bar P_s(\cW^{cs,0}_\varepsilon(\bar u))\subset \cW^{cs,0}(\bar P_s(\bar u))$ for any $s>0$ and which is trapped for times $s\geq T_*$. Let us choose $b>0$ small and define $$ \cW^{cs}(\bar u)=\bigcup_{0\leq t\leq T_*} \bar P_t(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-t}(\bar u))). $$ These plaques are open sets in $\cW^{cs,0}$ and we have to check the trapping property at any time $s>0$. Note that it is enough to choose $s\in (0,T_*)$.
Let us consider $t\in [0,T_*]$. In the case $t\geq T_*-s$ we set $t'=s+t-T_*$ and we have $$\bar P_s(\bar P_t(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-t}(\bar u))) =\bar P_{t'}\circ \bar P_{T_*}(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-t-s}\circ \bar P_s(\bar u))).$$ By the trapping property at time $T_*$, provided $b$ has been chosen small enough, one has $$\bar P_{T_*}(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-t-s}\circ \bar P_s(\bar u))\subset \cW^{cs,0}_{\varepsilon+bt'-bs}(\bar P_{T_*-s-t}\circ \bar P_s(\bar u)).$$ Hence $\bar P_s(\bar P_t(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-t}(\bar u)))$ is contained in $\bar P_{t'}(\cW^{cs,0}_{\varepsilon+bt'-bs}(\bar P_{-t'}\circ \bar P_s(\bar u)))$.
{In the case $t< T_*-s$, we set $t'=t+s>t$ and we have \begin{eqnarray*} \bar P_s(\bar P_t(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-t}(\bar u))) &=&\bar P_{s+t}(\cW^{cs,0}_{\varepsilon+bt}(\bar P_{-(t+s)}\circ \bar P_s(\bar u)))\\ &=& \bar P_{t'}(\cW^{cs,0}_{\varepsilon+bt'-bs}(\bar P_{-t'}\circ \bar P_s(\bar u))) \end{eqnarray*}} The closure of $\cup_{0\leq t'\leq T_*} \bar P_{t'}(\cW^{cs,0}_{\varepsilon+bt'-bs}(\bar P_{-t}\circ \bar P_s(\bar u)))$ is contained in $\cW^{cs}(\bar P_s(\bar u))$.
Combining these two cases, one thus gets the third item: $$\forall s>0,~~~\bar P_s({\rm Closure}(\cW^{cs}(\bar u)))\subset \cW^{cs}(\bar P_s(\bar u)).$$
For the fourth item, one uses the Local injectivity: since the plaque are small, one can assume that $y,y'$ are $\eta$-close. Then Proposition~\ref{p.coherence} applies and gives the coherence. \end{proof}
\paragraph{The constants $C_x,\lambda_x$.} For $C_x,\lambda_x>1$, we introduce the set $\cH$ of points $y\in K$ that are $(C_x/2,\lambda_x^2)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Since the Lyapunov exponent of $\mu$ along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is positive, we can fix $C_x,\lambda_x$ such that the following set has positive $\mu$-measure, for any $\beta_x>0$: $$H_x:=\{y\in \cH,\; d(y,x)<r_0/2 \text{ and } \pi_x(y)\in B(0_x,\beta_x)\}.$$ The choice of $\beta_x$ will be fixed later.
\paragraph{The constant $\alpha_x$.} Up to reduce $\eta>0$, there exists $\alpha_{x}>0$ (depending on $C_x,\lambda_x$) such that for any generalized orbit $\bar u$, if $\bar u$ is $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, then the set $\bar P_{-t}(\cW^{cu}_{\alpha_x}(\bar u))$ is defined for any $t>0$, has a diameter smaller than $\min(\eta,\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})\lambda_x^{-t/2}$ (see Proposition~\ref{p.unstable}) and is contained in $\cW^{cu}(\bar P_{-t}(\bar u))$ (by invariance, Remark~\ref{r.plaque-invariance}).
Up to reduce $\eta,\alpha_x$, a stronger coherence for center-unstable plaques is satisfied:
\begin{Lemma}\label{l.coherence-alpha} Consider two generalized orbits $\bar u,\bar v$ and $t\geq 0$ such that: \begin{itemize} \item[--] $u(0)\in \cN_y$, $v(-t)\in \cN_{y'}$ with $y,y'$ $r_0/2$-close to $x$, and the projection $(y(t))_{t\in \RR}$ of $\bar u$ in $K$ has arbitrarily large negative iterates in the $r_0$-neighborhood of $K\setminus V$, \item[--] $\bar v$ is $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $\pi_x(u(0))\in \pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_{x}}(\bar v)))$. \end{itemize} Then $\pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_{x}}(\bar v)))\subset \pi_x(\cW^{cu}(\bar u))$. \end{Lemma} \begin{proof} This is a direct consequence of Proposition~\ref{p.coherence} (and the second item of Lemma~\ref{l.plaques-box}) applied to the sets $X=\{u(0)\}$ and $X'=\bar P_{-t}(\cW^{cu}_{\alpha_{x}}(\bar v))$. Indeed, the hyperbolicity of $\bar v$ ensures that the diameter of the sets $\bar P_{-s}(X')$ is smaller than $\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ for any $s\geq 0$. \end{proof}
The choice of $\eta$ is fixed now. One can build generalized orbits as local product between generalized orbits and orbits of $K$. Up to reduce $\alpha_x$ and $\beta_x$, one gets:
\begin{Lemma}\label{l.product1} For any generalized orbit $\bar p$ contained in the $\eta/2$-neighborhood of $K$, and for any $y\in K$, $t\geq 0$ satisfying \begin{itemize} \item[--] $y$ and $z$ are $r_0/2$-close to $x$, where $z\in K$ is the point such that $p(-t)\in \cN_z$, \item[--] $\bar p$ is $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $p(-t-s)=P_{1-s}(p(-t-1))$ for any $s\in (0,1]$, \item[--] $\pi_x(\cW^{cs}_{\beta_x}(y))$ and $\pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_x}(\bar p)))$ intersect and $d(z,y)<\eta/2$, \end{itemize} then there exists a generalized orbit $\bar u$ (in the $\eta$-neighborhood of $K$) satisfying: \begin{itemize} \item[--] $u(s)\in \cW^{cs}(\varphi_s(y))$ for every $s\geq 0$, \item[--] $\pi_x(u(0))\in \pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_x}(\bar p)))$ and $u(-s)\in \bar P_{-t-s}(\cW^{cu}_{\alpha_x}(\bar p))$ for $s>0$. \end{itemize} \end{Lemma} \begin{Remark}\label{r.product} In the case $d(z,y)<\eta/2$ does not hold, one can choose by the Local injectivity some $\tau\in [-1/4,1/4]$ such that $d(\varphi_\tau(z),y)<\eta/2$. One can then define in a same way a generalized orbit which satisfies $u(-s)\in \bar P_{-t-s+\tau}(\cW^{cu}_{\alpha_x}(\bar p))$ for $s>\max(0,\tau)$. \end{Remark} \begin{proof} Define $u(0)$ to be the intersection point between $\cW^{cs}_{\beta_x}(y)$ and $\pi_y(\bar P_{-t}(\cW^{cu}_{\alpha_x}(\bar p)))$. Since $\beta_x$ is small, the topological contraction of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$
implies that $|u(s)|<\eta$ for each $s\geq0$, where
$u(s):=P_s(u(0))$. The generalized flow $\bar P$ associated to the generalized orbit $\bar p$ (see Definition~\ref{d.generalized-flow}) allows to define $u(s):=\bar P_{s}(\pi_z(u(0)))$ for each $s< 0$. Note that $u(s)$ belongs to $\bar P_{s}(\cW^{cu}_{\alpha_x}(\bar p))$, whose diameter is smaller than $\eta$ for any $s< 0$, provided $\alpha_x$ is chosen small enough since $\bar p$ is $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Note also that the projection of $(u(s))_{s\in [-1,0)}$ on $K$ is continuous.
We have thus defined in this way a generalized orbit, whose projection on $K$ coincides with the projection of $\bar p$ for times $s<0$ and with $\varphi_s(y)$ for times $s\geq 0$. In order to check that it is contained in the $\eta$-neighborhood of $K$, it remains to show that: \begin{itemize} \item[--] $y\in K\setminus V$: this follows from our assumptions, \item[--] the projection of $p(s)$ on $K$ for $s<0$ close to $0$ is $\eta$-close to $y$: this follows from the fact that $\bar p$ is in the $\eta/2$-neighborhood of $K$ and that $d(z,y)<\eta/2$. \end{itemize} \end{proof} \paragraph{The constant $\beta_x$.} The constant $\beta_x$ chosen for Theorem~\ref{t.existence-box} can be reduced to be smaller than $\alpha_0,\eta,\alpha_x$ and to satisfy (using the trapping): \begin{itemize} \item[--] for any $y\in K$ and $t\geq 1/4$ such that $y, \varphi_t(y)$ are $r_0/2$-close to $x$, the two components of $\pi_x(\cW^{cs}(\varphi_t(y))\setminus P_t(\cW^{cs}(y)))$ have length larger than $2\beta_x$. \end{itemize}
\paragraph{Other constants.} Once $R$ is constructed, one can choose other numbers $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\beta_{box}$ as in Theorem~\ref{t.existence-box}. Another constant $\alpha_{box}>0$ (which is a relaxed analogue of $\alpha_x$) will be introduced later in Section~\ref{ss.moreconstants}, depending on these choices.
\subsubsection{A shadowing lemma}
\begin{Proposition}\label{p.shadowing} For any $\delta>0$, there exist $r>0$ and $T_0\geq 1$ such that for any points $y,\varphi_T(y)\in H_x$ that are $r$-close with $T\geq T_0$, there exists $p\in \cN_x$ such that \begin{itemize} \item[--] $P_s(\pi_y(p))$ is defined and is contained in $B(0_{\varphi_s(y)},\delta)\subset \cN_{\varphi_s(y)}$ for any $s\in (0,T)$, \item[--] $p$ is fixed by $\widetilde P_T:=\pi_x\circ P_T \circ \pi_y$. \end{itemize} \end{Proposition} We could give an argument which uses the domination ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, similar to the construction in~\cite[Proposition 9.6]{CP}. We propose here a more topological proof. \begin{proof} Let us choose $\varepsilon>0$ much smaller than $\delta$.
\begin{Claim} If $T_0$ is large enough, for any $y,\varphi_T(y)\in H_x$ with $T\geq T_0$, there exists a box $B\subset \cN_y$ such that: \begin{itemize} \item[--] $P_s(B)$ is a box of $B(0,\delta)$ in $\cN_{\varphi_s(y)}$ for any $s\in [0,T]$, \item[--] $B$ contains $\cW^{cs}_{\varepsilon}(y)$ and $P_T(B)$ contains $\cW^{cu}_{\varepsilon}(\varphi_T(y))$. \end{itemize} \end{Claim} \begin{proof} Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted, if $\varepsilon$ has been chosen small, the iterates $P_s(\cW^{cs}_\varepsilon(y))$ are smaller than $\delta/10$ for any $s\in [0,T]$. We can choose two disjoint arcs $L^-_0,L^+_0$ of length $1$, tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, centered at the endpoints of $\cW^{cs}_\varepsilon(y)$ and disjoint from $\cW^{cu}(y)$. Let us consider $L^-\subset L^-_0$ (resp. $L^+\subset L^+_0$) the maximal arc whose iterates by $P_{s}$, $s\in [0,T]$, remain at distance smaller than $\delta$ from $0_{\varphi_s(y)}$. Since $(y,\varphi_T(y))$ is $(C_x/2,\lambda_x^2)$ hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, and since the endpoints of $P_s(\cW^{cs}_\varepsilon(y))$ are close to $0_{\varphi_s(y)}$, we deduce that $P_T(L^-)$ and $P_T(L^+)$ have length larger than $10\varepsilon$.
Let us note that $P_T(L^-)$ and $P_T(L^+)$ are disjoint from $\cW^{cu}_\varepsilon(\varphi_T(y))$: otherwise $L^-$ (or $L^+$) would intersect $P_{-T}(\cW^{cu}_\varepsilon(\varphi_T(y)))$, but these three curves have a length arbitrarily small if $T$ is large (by hyperbolicity along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$) and contain respectively the endpoints and the center of $\cW^{cs}_\varepsilon(y)$ which are separated by a uniform distance (of order $\varepsilon$).
We then build two disjoint curves $J^-,J^+$ through the endpoints of $\cW^{cu}_\varepsilon(\varphi_T(y))$, tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, disjoint from $\cW^{cs}(\varphi_T(y))$ and connecting $P_T(L^-)$ to $P_T(L^+)$. The curves $P_{-T}(J^-)$ and $P_{-T}(J^+)$ are still tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, so that with $L^-,L^+$ they bound a box $B$ with the required properties. The claim is thus proved. \end{proof}
Let us choose $r$ small. If $y,\varphi_T(y)\in H_x$ are $r$-close with $T\geq T_0$, the claim can be applied and moreover the projection of the boxes $\pi_x(B)$ and $\pi_x(P_T(B))$ intersect. If $T_0$ has been chosen large enough, using the uniform expansion along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and the topological contraction along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, one deduces that $B$, $P_T(B)$ are contained in small neighborhoods of $\cW^{cs}(y)$ and $\cW^{cu}(P_T(y))$ respectively. Consequently, the union of their projection on $\cN_x$ is diffeomorphic to $([-2,2]\times [-1,1])\cup ([-1,1]\times [-2,2])$ in $\RR^2$: $\partial^{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}B$ is identified with $\{-2,2\}\times [-1,1]$ and $\partial^{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}P_T(B)$ is identified with $\{-1,1\}\times [-2,2]$.
The map $\widetilde P_T:=\pi_xP_T\pi_y$ is defined from $\pi_x(B)$ to $\pi_x(P_T(B))$. We can deform continuously the restriction $\widetilde P_T\colon \partial( \pi_x(B))\to \partial(\pi_x\circ P_T(B))$ so that it coincides after deformation with the restriction of a linear map $A\colon [-2,2]\times [-1,1]\to [-1,1]\times [-2,2]$ where $A=\begin{pmatrix} \pm 1/2 & 0 \\ 0 & \pm 2 \end{pmatrix},$ proving that the degree of the map
$$\Theta\colon z\mapsto {\widetilde P_T(z)-z}/{\|\widetilde P_T(z)-z\|}$$ from $\partial( \pi_x(B))$ to $S^1$ (for the canonical orientations of $\partial( \pi_x(B))\subset \RR^2$ and $S^1$) is non-zero. This proves that $\widetilde P_T$ has a fixed point $p$: {otherwise,} one can consider the degree of $\Theta$ on each circle $D_t:=\partial([-2t,2t]\times [-t,t])$ for $t\in (0,1]$; it does not depend on $t$, hence is non-zero and it is a contradiction since for $t$ small the disk bounded by $D_t$ is disjoint from its image.
The point $\pi_y(p)$ belongs to $B$; hence any iterate $P_s(\pi_y(p))$, $s\in [0,T]$, is contained $B(0,\delta)\subset \cN_{\varphi_s(y)}$ by construction of $B$. \end{proof}
\subsubsection{Construction of the box $R$}\label{ss.boxR}
The box $R$ is built from the next proposition. It implies the item~\ref{i.markov-boundary} of Theorem~\ref{t.existence-box}.
\begin{Proposition}\label{p.boxR} For $\mu$-almost every point $y\in H_x$, and any $\eta_x>0$, there exist a generalized orbit $\bar p=(p(t))$ which is periodic and has two iterates $\bar p_1,\bar p_2$ satisfying: \begin{itemize} \item[--] $\bar p$ is contained in the $\eta_x$-neighborhood of $K$, \item[--] $\bar p_1,\bar p_2$ are $\eta_x$-close to $0_y$ and $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. \end{itemize} The box $R\subset \cN_x$ bounded by the four curves $\pi_x(\cW^{cs}(\bar p_i))$ and $\pi_x(\cW^{cu}_{\alpha_x}(\bar p_i))$, where $i\in \{1,2\}$, has \begin{itemize} \item[--] interior $\operatorname{Int}(R)$ with positive $\mu_x$-measure, \item[--] boundary $\partial(R)$ with zero $\mu_x$-measure. \end{itemize} Moreover if $z\in K$ is $r_0/2$-close to $x$ and $i\in \{1,2\}$, then \begin{itemize} \item[--] if $\pi_x(z)\in \pi_x(\cW^{cs}(\bar p_i))$, the forward orbit of $z$ accumulates on a periodic orbit of $K$.
\item[--] if $\pi_x(z)\in \pi_x(\cW^{cu}_{\alpha_x}(\bar p_i))$, the backward orbit of $z$ accumulates on a periodic orbit of $K$. \end{itemize}
\end{Proposition} Since $\bar p_1,\bar p_2$ are arbitrarily close to $y\in H_x$, we have $R\subset B(0_x,\beta_x)$. {A point $p(t)$ in the generalized orbit $\bar p$ is a \emph{return of $\bar p$ at $x$} if its projection in $K$ is $r_0/2$-close to $x$.}
\begin{proof}[Proof of Proposition~\ref{p.boxR}] Recall the coherences of the center-stable plaques $\cW^{cs}(\bar u)$ defined for generalized orbits and the center-unstable plaques $\cW^{cu}_{\alpha_x}(\bar u)$ at $(C_x,\lambda_x)$-hyperbolic points for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
\begin{Lemma}\label{l.measure} Consider any generalized orbit $\bar u$. If the projection $(y(t))$ of $(u(t))$ on $K$ has arbitrarily large negative iterates in the $r_0$-neighborhood of $K\setminus V$ and if $d(y(0),x)<r_0/2$, then, the projection $\pi_x(\cW^{cs}(\bar u))$ has zero $\mu_x$-measure.
If $\bar u$ is $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, then the same holds for the projection $\pi_x(\cW^{cu}_{\alpha_x}(\bar u))$. \end{Lemma} \begin{proof} Assume by contradiction that $\pi_x(\cW^{cs}(\bar u))$ has positive $\mu_x$-measure: there exists a measurable set $A\subset K$ such that $\pi_x(A)\subset \pi_x(\cW^{cs}(\bar u))$ and $\mu(A)>0$. Hence there exist a positively recurrent point $z\in A$ and arbitrarily large $T>0$ such that $\varphi_T(z)$ belongs to $A$, is arbitrarily close to $z$ and $P_T(\cW^{cs}(z))$ has arbitrarily small diameter (by topological hyperbolicity of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$). Since $y,z, \varphi_T(z)$ are $r_0/2$-close to $x$, the coherence implies that $\pi_x(\cW^{cs}(z))$, $\pi_x\circ P_T(\cW^{cs}(z))$ and $\pi_x(\cW^{cs}(u))$ are all contained in a same $C^1$-curve. Hence, $\pi_x(\cW^{cs}(z))$ contains $\pi_x\circ P_T(\cW^{cs}(z))$. We have proved that $\cW^{cs}(z)$ contains $\widetilde P_T(\cW^{cs}(z))$, where $\widetilde P_T=\pi_z\circ P_T$, so that the sequence $(\widetilde P_T^k(0_z))$ converges to a fixed point of $\widetilde P_T$ contained in $\cW^{cs}(z)$. By Corollary~\ref{c.closing0}, the orbit of $z$ converges to a periodic orbit of $K$. This is a contradiction since $\mu$-almost every point $z$ in $A$ has a forward orbit which is dense in the support of $\mu$, which is not a periodic orbit by assumption.
The proof for center-unstable plaques is similar. If there exists a measurable set $A\subset K$ such that $\pi_x(A)\subset \pi_x(\cW^{cu}_{\alpha_x}(\bar u))$ and $\mu(A)>0$. Since the Lyapunov exponent of $\mu$ along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is positive, up to reduce the set $A$, there exists $C'>0$, $\lambda'>1$ such that any point $z\in A$ is $(C',\lambda')$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. By Proposition~\ref{p.unstable}, there exists $\beta>0$ such that $P_{-t}(\cW^{cu}_{\beta}(z))$ is defined for any $t>0$ and has a diameter smaller than $\alpha_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}\lambda^{-t/2}$. One ends the argument by considering $z$ and $\varphi_{-T}(z)$ arbitrarily close to $z$. By the coherence in Lemma~\ref{l.coherence-alpha}, the plaque $\cW^{cu}_{\alpha_x}(\bar u)$ is contained in $\cW^{cu}(z)$ and in $\cW^{cu}(\varphi_T(z))$. Hence if $T$ is large enough, $P_{-T}(\cW^{cu}_{\beta}(z))$ is arbitrarily small and contained in $\cW^{cu}_{\beta}(z)$. We conclude as before. \end{proof}
Let us build a first approximation of $R$. \begin{Lemma}\label{l.first-box} $\mu$-almost every point $y\in H_x$ has arbitrarily large iterates $\varphi_t(y)\in H_x$ close to $y$ such that the projection of the four plaques $\cW^{cs}(y)$, $\cW^{cs}(\varphi_t(y))$, $\cW^{cu}_{\alpha_x}(y)$, $\cW^{cu}_{\alpha_x}(\varphi_t(y))$ by $\pi_x$ in $\cN_x$ bound a small box $R_y\subset B(0_x,\beta_x)$ whose measure for $\mu_x$ is positive. \end{Lemma} \begin{proof} Let us choose $y\in H_x$ whose forward and backward orbits have dense sets of iterates in $H_x$ and such that $\pi_x(\{z\in H_x,\; d(z,y)<r\})$ has positive $\mu_x$-measure for any $r>0$.
For $r>0$ small, let us consider the four connected components of $B(\pi_x(y),r)\setminus \pi_x( \cW^{cs}(y))\cup \pi_x (\cW^{cu}_{\alpha_x}(y))$. Since $\pi_x( \cW^{cs}(y)), \pi_x( \cW^{cu}_{\alpha_x}(y))$ have zero measure for $\mu_x$, for one of these connected components $Q$, the measure $\mu_x(Q\cap H_x\cap B(\pi_x(y),r'))$ is positive for any $r'\in (0,r)$.
Choose $\varphi_t(y)\in H_x$ close to $y$ in $Q$. By the coherence, the plaques $\cW^{cs}(y)$, $\cW^{cs}(\varphi_t(y))$ have disjoint projection by $\pi_x$; by Lemma~\ref{l.coherence-alpha}, the same holds for the plaques $\cW^{cu}_{\alpha_x}(y)$, $\cW^{cu}_{\alpha_x}(\varphi_t(y))$. Hence they bound a small box $R_y\subset B(0_x,\beta_x)$ whose measure for $\mu_x$ is positive. \end{proof}
\noindent \emph{End of the construction of the box $R$.} By Lemma~\ref{l.first-box}, for $\mu$-almost every point $y\in H_x$, there exists $t>0$ large such that the projection of the four plaques $\cW^{cs}(y)$, $\cW^{cs}(\varphi_t(y))$, $\cW^{cu}_{\alpha_x}(y)$, $\cW^{cu}_{\alpha_x}(\varphi_t(y))$ by $\pi_x$ in $\cN_x$ bound a small box $R_y\subset B(0_x,\beta_x)$ with positive $\mu_x$-measure.
We then choose $T>0$ much larger than $t$ such that $\varphi_T(y)\in H_x$ is very close to $y$ and we apply Proposition~\ref{p.shadowing}. We get a point $p\in \cN_x$ arbitrarily close to $\pi_x(y)$ and the repetition of the piece of orbit $\{p(s)=P_s(\pi_y(p)), s\in [0,T)\}$ gives a generalized periodic orbit $\bar p$ which is $\eta_x$-close to $K$. Since $y$ and $\varphi_t(y)$ are $(C_x/2,\lambda_x^2)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, one deduces from Lemma~\ref{l.cont4} that $\bar p$ and $\bar P_t(\bar p)$ are $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and have projection by $\pi_x$ close to $\pi_x(y)$ and $\pi_x(\varphi_t(y))$.
The box $R\subset B(0_x,\beta_x)$ bounded by the projection of the four plaques $\cW^{cs}(\bar p)$, $\cW^{cs}(\bar P_t(\bar p))$, $\cW^{cu}_{\alpha_x}(\bar p)$, $\cW^{cu}_{\alpha_x}(\bar P_t(\bar p))$ by $\pi_x$ in $\cN_x$ is close to $R_y$, hence has positive $\mu_x$-measure. By Lemma~\ref{l.measure} the boundary of $R$ has zero $\mu_x$-measure.
Assume $z$ is a point satisfying $\pi_x(z)\in \pi_x(\cW^{cs}(\bar p_i))$. There exists $s\in [-1/4,1/4]$ such that $z':=\varphi_s(z)$ is $r_0/2$-close to $y$ and still satisfies $\pi_y(z')\in \cW^{cs}(\bar p_i)$. By the Global invariance, there exists $T'>0$ such that the forward orbit of $z'$ under $\pi_{z'}\circ P_{T'}$ is semi-conjugated by $\pi_y$ with the forward orbit of $\pi_y(z')$ under $\pi_{y}\circ P_{T}$. The later converges to the fixed point $p_i:=p_i(0)$ of the orbit $\bar p_i=(p_i(t))$. Corollary~\ref{c.closing0} applies and implies that the forward orbit of $z'$ and $z$ converges to a periodic orbit.
A similar argument holds when $\pi_x(z)\in \pi_x(\cW^{cu}_{\alpha_x}(\bar p_i))$. This concludes Proposition~\ref{p.boxR}. \end{proof}
\begin{Remark}\label{r.lengthR} We can choose the diameter of the rectangle $R$
much smaller than $\beta_x$. In particular, by topological hyperbolicity of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, if $y$ is $r_0/2$-close and $\pi_x(y)\in \operatorname{Interior}(R)$, then any arc $I\subset \cW^{cs}(y)$ satisfying $\pi_x(I)\subset R$ has forward iterates of length much smaller than $\beta_x$.
Moreover, for any $\beta_{box}>0$, there exists a uniform time $t_1>0$ such that for any such $y,I$
the length of $|P_t(I)|$ is much smaller than $\beta_{box}$ when $t>t_1$. \end{Remark}
\subsubsection{New choices of constants}\label{ss.moreconstants} In the previous section we have built the box $R$. Before building the boxes $B_1,\dots,B_k$, we introduce $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\beta_{box}$ as in the statement of Theorem~\ref{t.existence-box}, and another constant
$\alpha_{box}>0$. One can reduce these numbers in order to satisfy the following properties: \begin{itemize} \item[--] $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$: by relaxing constants, one can require that for any $i\in\{1,2\}$ and $s\in \RR$, any generalized orbit $\bar u$ satisfying $u(-t)\in \bar P_{-t-s}(\cW^{cu}_{\alpha_x}(\bar p_i))$ for any $t\geq 0$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. \item[--] $\beta_{box}$: Proposition~\ref{p.distortion} associates to $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$ some constants $\Delta,\beta$. We can reduce $\beta_{box}$ so that $\beta_{box}<\beta$. \item[--] $\Delta_{box}$: it is chosen so that the projection by the local diffeomorphisms $\pi_x$ of any box with distortion $\Delta$ is a box with distortion $\Delta_{box}$. \item[--] $\alpha_{box}$: one chooses $\alpha_{box}$ small so that the two following properties are satisfied (for the same reasons as in Section~\ref{ss.constants} for choosing $\alpha_x$).
\item[] \emph{Backwards contraction.} For any generalized orbit $\bar u$ which is $(2C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},{\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}}^{1/2})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, the set $\bar P_{-t}(\cW^{cu}_{\alpha_{box}}(\bar u))$ is defined for any $t\geq 0$, is contained in $\cW^{cu}(\bar P_{-t}(\bar u))$ and has diameter smaller than $\min(\beta_{box},\alpha_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X})\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{-t/2}$.
\item[--] \emph{Coherence.} Consider two generalized orbits $\bar u,\bar v$ and $t\geq 0$ such that: \begin{itemize} \item[--] $u(0)\in \cN_y$, $v(-t)\in \cN_{y'}$ with $y,y'$ $r_0/2$-close to $x$, and the projection $(y(t))_{t\in \RR}$ of $\bar u$ in $K$ has arbitrarily large negative iterates in the $r_0$-neighborhood of $K\setminus V$, \item[--] $\bar v$ is $(2C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{1/2})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $\pi_x(u(0))\in \pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_{box}}(\bar u)))$. \end{itemize} Then $\pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_{box}}(\bar u)))\subset \pi_x(\cW^{cu}(\bar v))$. \end{itemize}
\subsubsection{Construction of the sub-boxes $B_1,\dots, B_k$}\label{ss.subbox}
\begin{Proposition}\label{p.subbox} There exists $\delta>0$ and finitely many sub-boxes $B_1,\dots,B_k\subset R$ with disjoint interiors, whose union contains $R\cap \pi_x(K)$, whose boundary has zero $\mu_x$-measure, {having the following properties.}
\begin{enumerate} \item[(i)]\label{subbox1} \emph{Geometry.} If $\gamma_1,\gamma_2$ are the two components of $\partial^{\cal F}(B_j)$, then $$d(\gamma_1,\gamma_2)>10.\max(\operatorname{Diam}(\gamma_1),\operatorname{Diam}(\gamma_2)).$$ \item[(ii)]\label{subbox2} \emph{${\cal F}$-boundary.} Any component $\gamma$ of $\partial^{\cal F}(B_j)$ coincides with $\pi_x(I)$ of an arc $I$ contained in $\pi_x(\bar P_{-t}(\cW^{cu}_{\alpha_{x}}(\bar p_{i})))$, $i\in \{1,2\}$, where $\bar P_{-t}(\bar p_i)$ is a return of $\bar p_i$ at $x$. Moreover, when it is defined, $\pi_x\circ \bar P_{-s}(I)$ for $s\geq 0$ is disjoint from all the $\operatorname{Interior}(B_\ell)$, $\ell\in \{1,\dots,k\}$.
\item[(iii)]\label{subbox2b} \emph{${\cal E}$-boundary.} Any component $\gamma$ of $\partial^{\cal E}(B_j)$ satisfies one of the following properties: \begin{itemize} \item[--] $\pi_x(\cW^{cs}(y))$ does not intersect the $\delta$-neighborhood of $\gamma$ for any $y\in \pi_x(K)\cap B_j$; in particular $\gamma\cap \pi_x(K)=\emptyset$. \item[--] $\gamma$ is the projection by $\pi_x$ of an arc $I$ contained in the center-stable plaque $\cW^{cs}(\bar q)$ of a periodic generalized orbit $\bar q$. Moreover when it is defined, $\pi_x\circ \bar P_{s}(I)$ for $s\geq 0$ is disjoint from all the $\operatorname{Interior}(B_\ell)$, $\ell\in \{1,\dots,k\}$. \end{itemize}
\item[(iv)]\label{subbox4} \emph{Coherence with plaques.} Consider $y$ that is $r_0/2$-close to $x$ such that $\pi_x(y)\in B_i$. Then $\pi_x(\cW^{cs}(y))\cap B_i$ is an arc connecting the two components of $\partial^{\cal F} B_i$.
Consider $y$ that is $r_0/2$-close to $x$ and a generalized orbit $\bar u$ with $u(0)\in \cN_y$, such that $\bar u$ is $(2C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{1/2})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and $\pi_x(\bar u)\in B_i$. Then $\pi_x(\cW^{cu}_{\alpha_{box}}(\bar u))\cap B_i$ is an arc connecting the two components of $\partial^{\cal E} B_i$. \end{enumerate} \end{Proposition}
This subsection is devoted to prove Proposition~\ref{p.subbox}, which implies Item~\ref{box1} of Theorem~\ref{t.existence-box}.
\noindent \emph{Unstable curves.} Recall that $R$ has been built from the plaques of $\bar p_1,\bar p_2$ in the orbit of $\bar p$. By the Local invariance, one chooses a finite set of iterates $\bar p_1=\bar p$, $\bar p_2=\bar P_{t_2}(\bar p)$, $\bar p_3=\bar P_{t_3}(\bar p)$,\dots, of the generalized orbit $\bar p$ such that: \begin{itemize} \item[--] Each $\bar p_i$ is a return of $\bar p$ at $x$. \item[--] Consider a return $\bar P_{-t}(\bar p)$ of $\bar p$ at $x$. Then there exists $\bar p_j$ and $s\in [-1/4,1/4]$ such that $\bar P_{-t}(\bar p)=\bar P_{s}(\bar p_j)$.
\item[--] If $\bar P_{-t}(\bar p_i)=\bar p_j$ for $|t|\leq 1/4$, then $i=j$. \end{itemize} One considers some $C^1$-curves $\gamma^u_i\subset \cW^{cu}(\bar p_i)$ and $\delta_1>0$ with the following properties: \begin{itemize} \item[(a)] The union of the curves $\pi_x(\gamma^u_i)$ is a $C^1$-submanifold. \item[(b)] $\partial^{\cal F}R\subset \pi_x(\gamma^u_1)\cup \pi_x(\gamma^u_2)$. \item[(c)] Each $\gamma^u_i$ locally coincides with $\bar P_{-s}(\cW^{cu}_{\alpha_x}(\bar p_1))$ or $\bar P_{-s}(\cW^{cu}_{\alpha_x}(\bar p_2))$ for some $s\geq 0$. \item[(d)] If $\bar p_i=\bar P_{-s}(\bar p_j)$ for some $s\geq 0$ and $i\neq j$ (so that $s\geq 1/4$), then $\bar P_{-s}(\gamma^u_j)\subset \gamma^u_i$.
Moreover $\pi_x(\gamma^u_i\setminus \bar P_{-s}(\gamma^u_j))$ is the union of two arcs of length larger than $\delta_1$.
\item[(e)] For each $i$ and each $t\geq 0$ such that $\bar P_{-t}(\bar p_i)$ is a return of $\bar p_i$ at $x$, there exists $j$ and $s\in [t-1/4,t+1/4]$ such that $\bar p_j=\bar P_{-s}(\bar p_i)$ and $\pi_x\circ \bar P_{-s}(\gamma_i^u)=\pi_x\circ \bar P_{-t}(\gamma_i^u)$. \end{itemize}
Property (d) shows that if one chooses sub-curves $\tilde \gamma_i^u\subset \gamma_i^u$ such that the components of $\pi_x(\gamma_i^u\setminus \tilde \gamma_i^u)$ have length smaller than $\delta_1$, then the inclusion $\bar P_{-s}(\tilde \gamma^u_j)\subset \tilde \gamma^u_i$ still holds when $\bar p_i=\bar P_{-s}(\bar p_j)$ for some $s\geq 0$ and $i\neq j$.
Let us explain how to build it. Let $\sigma_1\subset \cW^{cu}_{\alpha_x}(\bar p_1)$ such that $\pi_x(\sigma_1)$ is a component of $\partial^{\cal F} R$. The returns of the backward iterates of $\bar p_1$ by $\bar P_t$ inside the finite set $\{\bar p_1,\bar p_2,\dots\}$ defines an infinite periodic sequence $\bar z_1,\bar z_2,\cdots$. There exists a minimal $s_k>0$ such that $\bar P_{-s_k}(\bar z_k)=\bar z_{k+1}$.
We inductively define $\sigma_k$ as the curve in $\cW^{cu}(\bar z_k)$ such that $\pi_x(\sigma_k)$ is the $\delta_1$-neighborhood of $\pi_x( \bar P_{-s_k}(\sigma_{k-1}))$. We then define $\gamma_i^1$ as the union of the $\sigma_k$ such that $\bar z_k=\bar p_k$. Since $\bar P_{-s}(\cW^{cu}_{\alpha_x}(\bar p_1))$ decreases exponentially as $s\to +\infty$, by choosing $\delta_1$ small enough, we get: \begin{itemize} \item[--] For any $i$, there is $s\geq 0$ such that $\gamma_i^1\subset \bar P_{-s}(\cW^{cu}_{\alpha_x}(\bar p_1))$. \item[--] If $\bar p_i=\bar P_{-s}(\bar p_j)$ for some $s\geq 0$ and $i\neq j$, then $\bar P_{-s}(\gamma^1_j)\subset \gamma^1_i$ and $\pi_x(\gamma^1_i\setminus \bar P_{-s}(\gamma^1_j))$ is the union of two arcs of length larger than $\delta_1$. \end{itemize} We repeat the same construction starting with the curve in $\cW^{cu}_{\alpha_x}(\bar p_2)$ which projects to the other component of $\partial^{\cal F} R$. One obtains another family of curves $\gamma^2_i$. One then set $\gamma^u_i=\gamma^1_i\cup \gamma^2_i$ so that items (b), (c), (d) are satisfied.
In order to check item (a), it is enough to notice that (from item (c) and Lemma~\ref{l.coherence-alpha}, the union of any two curves $\pi_x(\gamma^u_i)\cup \pi_x(\gamma^u_j)$ is a $C^1$-submanifold.
Let us prove item (e): if $\bar P_{-t}(\bar p_i)$ is a return of $\bar p_i$ at $x$, by definition of the family $\bar p_1,\bar p_2,\bar p_3,\dots$, there exists $j$ and $s\in [t-1/4,t+1/4]$ such that $\bar P_{-s}(\bar p_i))=\bar p_j$. By the Local invariance, $\pi_x\circ P_{-t}(\gamma^u_i)=\pi_x\circ \bar P_{-s}(\gamma^u_i)$.
\noindent \emph{Stable curves.} Let us consider $\delta_2>0$ much smaller than $\min( \delta_1,\alpha_{box})$. One also considers $\varepsilon,\delta_3>0$ that will be fixed later.
Choose a point $y$ in a subset of full $\mu$-measure of $H_x\cap \pi_x^{-1}(\operatorname{Interior}(R))$ and a forward iterate $\varphi_T(y)\in H_x$ close to $y$ with $T>0$ large. By Proposition~\ref{p.shadowing}, we build a periodic generalized orbit $\bar q$ which is $\varepsilon$-close to the zero-section of $\cN$ for the Hausdorff distance. As for the generalized orbit $\bar p$, one chooses a finite set of iterates $\bar q_1$, $\bar q_2$, $\bar q_3$,\dots, of the generalized orbit $\bar q$ such that: \begin{itemize} \item[--] Each $\bar q_i$ is a return of $\bar q$ at $x$. \item[--] Assume that $\bar P_{-t}(\bar q)$ is a return of $\bar q$ at $x$. Then there exists $\bar q_j$ and $s\in [-1/4,1/4]$ such that $\bar P_{-t}(\bar q)=\bar P_{s}(\bar q_j)$.
\item[--] If $\bar P_{-t}(\bar q_i)=\bar q_j$ for $|t|\leq 1/4$, then $i=j$. \end{itemize} We denote by $\{z_1,\dots,z_m\}$ the collection of $\bar p_i$, $\bar q_j$ and build curves $\gamma_1^s,\dots,\gamma_m^s$ such that: \begin{itemize} \item[(a')] The union of the curves $\pi_x(\gamma^s_i)$ is a $C^1$-submanifold. \item[(b')] $\partial^{\cal E}R\subset \pi_x(\gamma^s_1)\cup \pi_x(\gamma^s_2)$. \item[(c')] Each $\gamma^s_i$ is contained in $\cW^{cs}(z_i)$. \item[(d')] If $z_i=\bar P_{s}(z_j)$ for some $s\geq 0$ and $i\neq j$, then $\bar P_{s}(\gamma^s_j)\subset \gamma^s_i$. \item[(e')] For each $i$ and each $t\geq 0$ such that $\bar P_{-t}(z_i)$ is a return of $z_i$ at $x$, there exists $j$ and $s\in [t-1/4,t+1/4]$ such that $z_j=\bar P_{-s}(z_i)$ and $\pi_x\circ \bar P_{-s}(\gamma_i^s)=\pi_x\circ \bar P_{-t}(\gamma_i^s)$. \item[(f')] If $\gamma^s_i\cap R$ is non-empty, then it is an arc which connects the two components of $\partial^{\cal F} R$. \end{itemize}
The center-stable plaques $\cW^{cs}(z_i)$ satisfy the properties of items (a') to (e') above, by coherence, trapping, the Local invariance and definition of $R$. Note that these properties are still satisfied if one replace $\cW^{cs}(z_i)$ by a sub-arc $\gamma^s_i$ such that the components of $\pi_x(\cW^{cs}(z_i)\setminus \gamma^s_i)$ have length smaller than $\beta_x$. (For the property (d') this comes from the choice of $\beta_x$ and the fact that $z_i=\bar P_{s}(z_j)$ for some $s\geq 0$ and $i\neq j$ implies $s\geq 1/4$.) By construction, $R$ is contained in $B(0,\beta_x)$, so one can find such sub-arcs $\gamma_i^s$ which satisfy (f').
\noindent \emph{Strips.} The set $\operatorname{Interior} (R)\setminus (\pi_x(\gamma_1^s)\cup\dots\cup\pi_x(\gamma^s_m))$ has finitely many connected components, whose closures are center-stable sub-boxes of $R$ that we call strips.
We distinguish two kinds of strips: \begin{itemize} \item[--] \emph{thin strips}: strips whose ${\cal E}$-boundaries are $\delta_2$-close to each-other: any $C^1$-curve in the strip which connects the two components of the ${\cal E}$-boundary and which is tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ has length smaller than $\delta_2$, \item[--] \emph{thick strips}: the other ones. \end{itemize}
\begin{Lemma}\label{l.real-rectangle} The minimal distance between the components of the ${\cal E}$-boundary of the thick strips is bounded away from $0$ (uniformly in the choice of the periodic orbit $\bar q$). \end{Lemma} \begin{proof} Otherwise, there exist $z_i,z_j$ and $\gamma^s_i,\gamma_j^s$ whose projections by $\pi_x$ have two points arbitrarily close, and there exists a transverse arc tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ of length larger than $\delta_2$ connecting these two curves.
Taking the limit, one gets two points $\bar z,\bar z'$ which still belong to generalized orbits ($\eta$-close to the $0$-section) whose center-stable plaques intersect but are not contained in a same $C^1$-submanifold tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. This contradicts the coherence. \end{proof}
The next lemma fixes the constant $\varepsilon>0$. \begin{Lemma}\label{l.substrip} Let us choose $\delta_3>0$. If $\varepsilon$ is small enough, then for any thick strip $S$ and any $y\in K$ which is $r_0/2$-close to $x$ and satisfies $\pi_x(y)\in S$, $\pi_x(\cW^{cs}(y))\cap S$ is $\delta_3$-close to $\partial^{\cal E}S$. \end{Lemma} \begin{proof} First we show that when $\varepsilon$ goes to zero, the distance between $\pi_x(K)\cap S$ and $\partial^{\cal E} S$ goes to zero. Otherwise there exists a thick strip $S$ and a point $y\in K$ that is $r_0/2$-close to $x$ such that $\pi_x(y)$ belongs to $S$ and is at a distance from $\partial^{\cal E}S$, larger than a uniform constant $e>0$. For $\varepsilon>0$ small enough, there exists a point $z_i$ close to $y$, defining a curve $\gamma_i^s$ which intersects $S$ but is at a bounded distance from $\partial^{\cal E} S$. This contradicts the definition of the strips $S$ as the closures of connected components of $\operatorname{Interior}(R)\setminus (\pi_x(\gamma^s_1)\cup\dots\cup\pi_x(\gamma^s_m))$.
We then conclude that all the set $\pi_x(\cW^{cs}(y))\cap S$ is close to $\partial^{\cal E}S$. Otherwise, one can take a limit $\bar y$ of such points $y$ and a limit $\gamma$ of components of $\partial^{\cal E} S$, such that $\pi_x(\cW^{cs}(\bar y))$ and $\gamma$ intersect at $\pi_x(\bar y)$ but are not contained in a $C^1$-curve. Since $\gamma$ is contained in a center-stable plaque, this contradicts the coherence. \end{proof}
By Lemmas~\ref{l.real-rectangle} and~\ref{l.substrip}, we take $\delta_3\in (0,\delta_2)$ smaller than half of the minimal distance between the components of $\partial^{\cal E} S$ of any thick strips $S$ and choose $\varepsilon$ such that in each thick strip $S$, the sets $\pi_x(\cW^{cs}(y))\cap S$, for any $y\in S$, is at a distance to $\partial^{\cal E} S$ smaller than $\delta_3$. This allows to build in each thick strip S, two disjoint center-stable sub-boxes (sub-strips) $S_-,S_+$ such that: \begin{itemize} \item[--] $S_-\cup S_+$ contains $\pi_x(K)\cap S$, \item[--] the two components of $\partial^{\cal E} S_-$ (resp. $\partial^{\cal E} S_+$) are $\delta_2$-close, \item[--] one component of $\partial^{\cal E} S_-$ (resp. $\partial^{\cal E} S_+$) coincides with a component of $\partial^{\cal E}S$, the other one is disjoint from the $\delta_3$-neighborhood of $\partial^{\cal E} S$. \end{itemize} In particular, there exists $\delta>0$ such that for any thick strip $S$ and any $y\in \pi_x(K)\cap S$, the plaque $\pi_x(\cW^{cs}(y))$ is disjoint from the $\delta$-neighborhood of the component $\gamma$ of $\partial^{\cal E}S_{\pm}$ which is not contained in $\partial^{\cal E}S$.
\noindent \emph{The sub-boxes $B_0,\dots,B_k$.} One chooses sub-curves $\tilde \gamma^u_i\subset \gamma_i^u$ such that the components of $\pi_x(\gamma_i^u\setminus \tilde \gamma_i^u)$ have length smaller than $\delta_1$ and whose endpoints do not belong to the interior of thin strips nor to boxes $S_\pm$ associated to a thick strip $S$.
The sub-boxes $B_0,\dots,B_k$ are obtained from a thin strips $S_0=S$ or a sub-strips $S_0\in\{S_-,S_+\}$ as follow: We consider the connected components of $$\operatorname{Interior}(S_0)\setminus (\pi_x(\tilde \gamma^u_1)\cup\dots\cup \pi_x(\tilde \gamma^u_m))$$ and take their closures. They have disjoint interiors and their union contains $R\cap \pi_x(K)$. The item (i) holds by the choice of $\delta_2$, much smaller than the distance between pair of curves $\gamma^u_i$.
Each component of the ${\cal F}$-boundary of these boxes is the projection $\pi_x(I)$ of an arc $I$ contained in a curve $\tilde \gamma^u_i$. In particular the ${\cal F}$-boundary has zero $\mu_x$-measure. Moreover, by the properties on the curves $\tilde \gamma^u_i$ and the Local invariance, for each return $\bar P_{-s}(\bar p_i)$ at $x$, $s\geq 0$, the iterate $\pi_x(\bar P_{-s}(I))$ is contained in a curve $\tilde \gamma^u_j$, hence is disjoint from the interior of the boxes $B_\ell$. The item (ii) is thus satisfied.
For each component $\gamma$ of the ${\cal E}$-boundary of these boxes $B_\ell$, either the $\delta$-neighborhood of $\gamma$ is disjoint from $\pi_x(\cW^{cs}(y))$ for any $y\in B_\ell$, or $\gamma$ is the projection $\pi_x(I)$ of an arc $I$ contained in a curve $\tilde \gamma^s_i$. In particular the ${\cal E}$-boundary has zero $\mu_x$-measure. Moreover, in this second case by the properties on the curves $\tilde \gamma^s_i$ and Local invariance, for each return $\bar P_{-s}(\bar z_i)$ at $x$, $s\geq 0$, the iterate $\pi_x(\bar P_{-s}(I))$ is contained in a curve $\tilde \gamma^s_j$, hence is disjoint from the interior of the boxes $B_\ell$. The item (iii) is thus satisfied.
We have proved that the boundary of the boxes has zero $\mu_x$-measure.
\noindent \emph{Coherence with plaques.} Consider $y\in K$ that is $r_0/2$-close to $x$ and such that $\pi_x(y)\in B_i$. Let $\gamma$ be a component of $\partial^{\cal E}B_i$. There are two cases. \begin{itemize} \item[--] If $\gamma$ is contained in a curve $\gamma_i^s$ (hence in a center-stable plaque), then by coherence, $\pi_x(\cW^{cs}(y))$ is disjoint from or contains $\gamma$. \item[--] Otherwise $B_i$ is built from a sub-box $S_-$ or $S_+$ of a thick strip $S$, and $\gamma$ is disjoint from $\pi_x(\cW^{cs}(y'))$ for any $y'\in \pi_x(K)\cap B_i$. So $\pi_x(\cW^{cs}(y))\cap B_i$ is disjoint from $\gamma$. \end{itemize} We have obtained the first part of item (iv).
In order to check the second part, one recalls that the components of the ${\cal F}$-boundary of each box $B_i$ are in a curve $\pi_x(\bar P_{-s}(\cW^{cu}_{\alpha_x}(\bar p_i)))$, where $\bar p_i$ is $(C_x,\lambda_x)$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ by item (c) above; the length of all their backwards iterates of $\cW^{cu}_{\alpha_x}(\bar p_i)$ remain small. If $y$ is $r_0/2$-close to $x$ and $\bar u$ is a generalized orbit with $u(0)\in \cN_y$, such that $\pi_x(\bar u)\in B_i$ and $\bar u$ is $(2C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{1/2})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$, then the lengths of all their backwards iterates of $\cW^{cu}_{\alpha_{box}}(\bar u)$ remain small. Hence Lemma~\ref{l.coherence-alpha} implies that the union of $\pi_x(\cW^{cu}(\bar u))$ and of $\partial^{\cal F}B_i$ is a submanifold. Consequently, each component of $\partial^{\cal F}B_i$ is either disjoint from or contained in $\pi_x(\cW^{cu}(\bar u))$. Since the distance between the two components of $\partial^{\cal E} B_i$ is much smaller than $\alpha_{box}$, the curve $\pi_x(\cW^{cu}_{\alpha_{box}}(\bar u))$ meets both of them. This gives the second part of item (iv) and thus completes the proof of Proposition~\ref{p.subbox}. \qed
\subsection{The Markovian property}
We have proved the items~\ref{i.markov-boundary} and~\ref{box1} of Theorem~\ref{t.existence-box} in Sections~\ref{ss.boxR} and~\ref{ss.subbox}. We now prove the other items.
\paragraph{Items~\ref{box2} and~\ref{box3} of Theorem~\ref{t.existence-box}.} Let us consider a transition $(y,t)$ between two sub-boxes $B,B'\in \{B_1,\dots,B_k\}$ (associated to the constants $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$) and such that $t>t_{box}$, where $t_{box}$ is a large constant to be chosen later. By item (iv) of Proposition~\ref{p.subbox}, there exists an arc $I\subset \cW^{cs}(y)$ containing $0_y$, whose projection by $\pi_x$ is contained in $B$ and connects the two components of the ${\cal F}$-boundary of $B$.
\begin{Lemma} The image $\pi_x(P_t(I))$ is contained in $B'$. \end{Lemma} \begin{proof} Otherwise, by item (iv) of Proposition~\ref{p.subbox}, there exists $u'\in P_t(I)$ which is not an endpoint and which projects by $\pi_x$ inside the ${\cal F}$-boundary of $B'$. Hence by the item (iii) of Proposition~\ref{p.subbox} and the Global invariance, $u:=P_{-t}(u')$ projects by $\pi_x$ is contained in $\partial^{\cal F}(B)$. Since $u$ is not an endpoint of $I$, the arc $\pi_x(I)$ is not contained in $B$. This is a contradiction. \end{proof}
\begin{Lemma} There exists $T_1$ such that provided $t>T_1$, each endpoint of $I$ belongs to a generalized orbit $\bar u$ such that $\bar P_{t}(\bar u)$ is $(2C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},{\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}}^{1/2})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. \end{Lemma} \begin{proof} Let $u$ be an endpoint of $I$. Note that $u\in \cW^{cs}_{\beta_x}(y)$. By item (ii) of Proposition~\ref{p.subbox}, $\pi_x(u)$ belongs to $\pi_x(\bar P_{-\tau}(\cW^{cu}_{\alpha_{x}}(\bar p_i)))$, for some return $\bar P_{-\tau}(\bar p_i)$, $\tau>0$ of $\bar p_i$, $i=1,2$ at $x$. By definition of the $\bar p_i$, one can assume that there is no discontinuity in the orbit $\bar p_i(-\tau-s)$, with $s\in [-1,0)$. Lemma~\ref{l.product1} and Remark~\ref{r.product} apply and define the generalized orbit $\bar u$. It is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ by our choice of $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$ and $\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$.
For any $\varepsilon>0$ small, there exist uniform $T_0,C_0>0$ such that the following holds. \begin{itemize} \item[--] Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted, the points $P_s(y)$ and $u(s)$ are close for any $s>T_0$. Moreover, $(y,\varphi_t(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Hence, for $s\in (T_0,t)$,
$$\|DP_{s-t}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}(u(t))\|\leq \|DP_{s-t}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}(\varphi_t(y))\|(1+\varepsilon)^{(t-s)}\leq C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{s-t}(1+\varepsilon)^{(t-s)},$$
\item[--] $\|DP_{T_0-s}{|{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}(u(T_0))\|\leq C_0$ for any $s\in [0,T_0]$. \end{itemize} So $\bar P_{t}(\bar u)$ is $(2C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},{\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}}^{1/2})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ if $T_1$ satisfies $$C_0\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{T_0}(1+\varepsilon)^{-T_0}<(1+\varepsilon)^{-T_1}{\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}}^{T_1/2}.$$ \end{proof}
Let $\bar u$ and $\bar v$ be the generalized orbits associated to each endpoint of $I$. By coherence (stated in Section~\ref{ss.moreconstants}), the projection of the plaques $\cW^{cu}_{\alpha_{box}}(\bar P_{t}(\bar u))$, $\cW^{cu}_{\alpha_{box}}(\bar P_{t}(\bar v))$ are disjoint and by item (iv) of Proposition~\ref{p.subbox} they cross $B'$. We thus obtain a center-unstable sub-box $B^{cu}\subset B'$ bounded by these curves.
For all $0\leq s\leq t$, the iterates $P_{-s}\circ \pi_{\varphi_t(y)}(B^{cu})$ are contained in $B(0, 2\alpha_0)\subset \cN_{\varphi_{t-s}(y)}$, where $\alpha_0$ is an upper bound on the size of the plaques $\cW^{cs}$ and $\cW^{cu}$. Moreover, the four edges remain tangent to the cones ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ or ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. We denote $B^{cs}:=\pi_x\circ P_{-t}\circ \pi_{\varphi_t(y)}(B^{cu})$.
\begin{Lemma}\label{l.diameter} The sets $P_{s}(\pi_{y}(B^{cs}))$ have diameter smaller than $\beta_x$ for each $s\in [0,t]$. Their diameter is smaller than $\beta_{box}$ when $s$ is larger than some uniform constant $T_2$. \end{Lemma} \begin{proof} There exists a uniform $C>0$ such that the diameter of $P_{s}(\pi_{y}(B^{cs}))=P_{s-t}(\pi_{\varphi_{t}(y)}(B^{cu}))$
is smaller than $C\max(|P_{s}(I)|, |P_{s-t}(\partial^{\cal F}B^{cu})|)$.
By our choice of $\alpha_{box}$, the curve $P_{s-t}(\cW^{cu}_{\alpha_{box}}(u(t)))$ has a size much smaller than $\beta_{box}$ and it contains $P_{s-t}(\partial^{\cal F}B^{cu})$. By Remark~\ref{r.lengthR}, the length of $P_{s}(I)$ is much smaller than $\beta_x$, implying the first property, since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted. This length is much smaller than $\beta_{box}$ when $t$ is larger than a constant $T_2$, giving the second property. \end{proof}
The following lemmas end the proof of item~\ref{box2} of Theorem~\ref{t.existence-box}.
\begin{Lemma} When $t$ is larger than some $T_3$, the $\delta$-neighborhood of $\pi_x(I)$ contains $B^{cs}$. \end{Lemma} \begin{proof} For each point $z$ in $B^{cs}$, there exists a curve $\gamma\subset \pi_y(B^{cs})$ tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ which connects $\pi_y(z)$ to a point $z'$ in $I$. The iterates $P_s(\gamma)$ for $s\in [0,t]$ are still tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and, for any such $s$ which is larger than some uniform $T$, one has: \begin{itemize} \item[--] $P_s(\gamma)$ is contained in $P_s(\pi_y(B^{cs}))$ hence in a small neighborhood of $0_{\varphi_s(y)}$ by Lemma~\ref{l.diameter}, \item[--] the tangent spaces to $P_s(\gamma)$ are close to ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}({\varphi_s(y)})$. \end{itemize} The derivative of $P_{t-s}$ along $P_s(\gamma)$ and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}({\varphi_s(y)})$ can be compared. Since $(y,\varphi_t(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$,
$|P_s(\gamma)|$ is exponentially small in $t-s$ for $s\geq T$. If $t$ is large enough, this implies that the length of $\gamma$ is exponentially small in $t$. In particular it is smaller than $\delta$. \end{proof}
\begin{Lemma} When $t>T_3$, the box $B^{cs}$ is a center-stable sub-box of $B$. \end{Lemma} \begin{proof} By coherence (stated in Section~\ref{ss.moreconstants}), the union of the ${\cal F}$-boundary of $B$ and $B^{cs}$ is contained in the union of two disjoint $C^1$-curves. If one assumes by contradiction that $B^{cs}$ is not contained in $B$, there exists a point $u$ in the ${\cal E}$-boundary of $B$ that belongs to $\operatorname{Interior}(B^{cs})$. By item (iii) of Proposition~\ref{p.subbox}, two cases are possible: \begin{itemize} \item[--] $u$ belongs to the projection by $\pi_x$ of an arc $J\subset \cW^{cs}(\bar q)$ of a generalized periodic orbit $\bar q$ and no forward iterate of $J$ projects to the interior of any box $B_1,\dots,B_k$. This is a contradiction since $\pi_x\circ P_t \circ\pi_y(u)$ belongs to the interior of $B^{cu}\subset B'$. \item[--] $u$ is $\delta$-far from $\pi_x(\cW^{cs}(y))$: it contradicts the previous lemma. \end{itemize} We have proved that $B^{cs}\subset B$ and $\partial^{\cal F} B^{cs}\subset \partial^{\cal F} B$ as required. \end{proof}
The following ends the proof of the item~\ref{box3} of Theorem~\ref{t.existence-box}. \begin{Lemma} The distortion of $B^{cu}$ is bounded by $\Delta_{box}$. \end{Lemma} \begin{proof} Since $B^{cs}$ is a center-stable sub-box of $B$ and since $B$ satisfies the item (i) of Proposition~\ref{p.subbox}, the box $B^{cs}$ satisfies this condition too. Since $(y,\varphi_t(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and since the diameter of $\pi_y(B^{cu})$ is smaller than $\beta_{box}$, Proposition~\ref{p.distortion} and Remark~\ref{r.distortion} (and the choice of the constants $\Delta,\beta_{box}$) imply that $\pi_y(B^{cu})$ has distortion bounded by $\Delta$. From our choice of $\Delta_{box}$, the box $B^{cu}$ has distortion bounded by $\Delta_{box}$. \end{proof}
\begin{Lemma}\label{l.otherinclusion} Consider $0<s<t$ such that $s$ and $t-s$ are larger than some $T_4$ and such that $\varphi_s(y)$ is $r_0/2$-close to $x$ and $\pi_x(\varphi_t(y))$ belongs to some box $B_i$. Then the interior of $\pi_x\circ P_s\circ \pi_y(B^{cs})$ does not meet the ${\cal E}$-boundaries of $B_i$. \end{Lemma} \begin{proof} Note that if $T_4$ is large enough, then the diameter of $\pi_x\circ P_s\circ \pi_y(B^{cs})$ is arbitrarily small: indeed, from the topological contraction of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, the distance between the two components of the ${\cal F}$-boundary of $P_s\circ \pi_y(B^{cs})$ is arbitrarily small if $s$ is large enough. These components are contained in the backward image of plaques $\cW^{cu}_{\alpha_{box}}(\bar P_{t}(\bar u))$, $\cW^{cu}_{\alpha_{box}}(\bar P_{t}(\bar v))$ whose lengths are exponentially small in $t-s$.
Assume by contradiction that the interior of $\pi_x\circ P_s\circ \pi_y(B^{cs})$ meets some component $\gamma$ of $\partial^{\cal E} B_i$. By item (iii) of Proposition~\ref{p.subbox}, $\gamma$ satisfies one of the two next cases. \begin{itemize} \item[--] $\gamma$ is disjoint from the $\delta$-neighborhood of $\pi_x(K)\cap B_i$: it is a contradiction since $\pi_x\circ P_s\circ \pi_y(B^{cs})$ contains $\pi_x(\varphi_s(y))\in \pi_x(K)\cap B_i$ and has arbitrarily small diameter. \item[--] $\gamma$ is the projection by $\pi_x$ of an arc $I$ contained in the center-stable plaque $\cW^{cs}(\bar q)$ of a periodic generalized orbit $\bar q$ and $\pi_x\circ \bar P_{\tau}(I)$ for $\tau\geq 0$ is disjoint from all the $\operatorname{Interior}(B_\ell)$, $\ell\in \{1,\dots,k\}$. This is a contradiction since by the Global invariance, there exists an iterate $\pi_x\circ \bar P_{\tau}(I)$ which intersects the interior of $B^{cu}$. \end{itemize} \end{proof}
We take $t_{box}$ equal to the supremum of the $T_i$ for $i=1,2,3,4$.
\paragraph{Proof of items~\ref{box4} and~\ref{box5} of Theorem~\ref{t.existence-box}.} We consider two transitions $(y_1,t_1)$, $(y_2,t_2)$ with $t_1,t_2>t_{box}$ such that the interiors of the boxes $B^{cu}_1$ and $B^{cu}_2$ intersect.
\begin{Lemma}\label{l.contained} Assume that there exists $\theta\in \operatorname{Lip}_2$ such that: \begin{itemize} \item[--] $\theta(t_1)\geq t_2-2$, $\theta^{-1}(t_2)\geq t_1-2$ and $\theta(0)\geq -1$, \item[--] $d(\varphi_t(y_1),\varphi_{\theta(t)}(y_2))<r_0/2$ for $t\in [0,t_1]\cap \theta^{-1}([0,t_2])$. \end{itemize} Then $B^{cu}_2\subset B^{cu}_1$. \end{Lemma} \begin{proof} Let us assume by contradiction that $\partial^{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(B^{cu}_1)\cap{\rm Interior}(B^{cu}_2)\neq\emptyset$. The two transitions are associated to boxes $B_1,B_2$ (containing $y_1,y_2$ respectively) and $B'$ containing both $B^{cu}_1$ and $B^{cu}_2$. We also denote $y'_1:=\varphi_{t_1}(y_1)$ and $y'_2:=\varphi_{t_2}(y_2)$. Moreover we set $[a,b]=[0,t_1]\cap \theta^{-1}([0,t_2])$.
By the Global invariance $$\pi_x\circ P_{a-b}\circ \pi_{\varphi_b(y_1)}(B^{cu}_i)=\pi_x\circ P_{\theta(a)-\theta(b)}\circ \pi_{\varphi_{\theta(b)}(y_2)}(B^{cu}_i).$$
By our assumptions, $|\theta(b)-t_2|\leq 2$
and $|b-t_1|\leq 2$. By the Local invariance, $$ \pi_{\varphi_b(y_1)}(B^{cu}_i)= P_{b-t_1}\circ \pi_{y'_1}(B^{cu}_i) \text{ and } \pi_{\varphi_{\theta(b)}(y_2)}(B^{cu}_i)= P_{\theta(b)-t_2}\circ \pi_{y'_2}(B^{cu}_i).$$
Since $\theta$ is $2$-Lipschitz and $\theta(0)\geq -1$, we check that $|a|\leq 2$ and that $\varphi_a(y_1)$ is $r_0$-close to $x$. Hence by the Local invariance, $$\pi_x\circ P_{a-b}\circ \pi_{\varphi_b(y_1)}(B^{cu}_i)= \pi_x\circ P_{-b}\circ \pi_{\varphi_b(y_1)}(B^{cu}_i).$$ This shows that $$\pi_x\circ P_{a-b}\circ \pi_{\varphi_b(y_1)}(B^{cu}_1)= B^{cs}_1 \text{ and } \pi_x\circ P_{\theta(a)-\theta(b)}\circ \pi_{\varphi_{\theta(b)}(y_2)}(B^{cu}_2)=\pi_x\circ P_{\theta(a)}\circ \pi_{y_2}(B^{cs}_2).$$ Consequently, the interior of $\pi_x\circ P_{\theta(a)}\circ \pi_{y_2}(B^{cs}_2)$ meets the ${\cal F}$-boundary of $B^{cs}_1$. We denote by $\gamma$ the corresponding component of $\partial^{\cal F}B^{cs}_1$. By item (ii) of Proposition~\ref{p.subbox} we have $\gamma=\pi_x(I)$ where $I$ is an arc in $\bar P_{-t}(\cW^{cu}_{\alpha_{x}}(\bar p_1))$ or in $\bar P_{-t}(\cW^{cu}_{\alpha_{x}}(\bar p_2))$ for some $t\geq 0$.
In the case $\theta(a)\in [0,1]$, the Local invariance shows that $\pi_x\circ P_{\theta(a)}\circ \pi_{y_2}(B^{cs}_2)=B^{cs}_2$. Hence the interior of $B^{cs}_2$ meets $\pi_x(I)$, contradicting the item (ii) of Proposition~\ref{p.subbox}.
In the other case, $\theta(a)>1$. The Global invariance (Remark~\ref{r.identification}, item (e)) shows that since $\pi_x(I)$ intersects ${\rm Interior}(\pi_x\circ P_{\theta(a)}\circ \pi_{y_2}(B^{cs}_2))$, there is an iterate $\bar P_{-s}(I)$, $s\geq 0$, whose projection by $\pi_x$ meets ${\rm Interior}(B^{cs}_2)$. Again, this contradicts the item (ii) of Proposition~\ref{p.subbox}. \end{proof}
In order to prove item~\ref{box4}, we have to check that, up to exchange $(y_1,t_1)$ and $(y_2,t_2)$, the conditions of Lemma~\ref{l.contained} are satisfied by some $\theta$
which furthermore satisfies $|\theta(t_1)-t_2|\leq 1/2$. By the Global invariance (Remark~\ref{r.identification}, item (e)), there exists $\theta\in \operatorname{Lip}_2$ such that: \begin{itemize} \item[--] $\theta(t_1)\in [t_2-1/4,t_2+1/4]$, \item[--] $d(\varphi_t(y_1),\varphi_{\theta(t)}(y_2))<r_0/2$ for $t\in [0,t_1]\cap \theta^{-1}([0,t_2])$. \end{itemize}
Note that it also gives $|\theta^{-1}(t_2)- t_1|\leq 1/2$ since $\theta$ is $2$-bi-Lipschitz. Up to exchange $y_1$ and $y_2$, we can suppose $0\in \theta^{-1}([0,t_2])$, hence the assumptions of the Lemma~\ref{l.contained} are satisfied. This gives $B^{cu}_2\subset B^{cu}_1$ and ends the proof of item 5.
The proof of item~\ref{box5} uses similar ideas. Let $B^{cs}_i,B^{cu}_i$, $i=1,2$, be the boxes associated to the transitions $(y,t_1)$ and $(y,t_2)$ where $y=y_1=y_2$ such that the interior of $B^{cs}_1$ and $B^{cs}_2$ intersect and that $t_2>t_1+t_{box}>2 t_{box}$. Recall that $y$ belongs to the interior of some box $B\in \{B_1,\dots,B_k\}$ which contains $B^{cs}_1$ and $B^{cs}_2$. Let $I$ be the arc in $\cW^{cs}(y)$ connecting the two components of $\partial^{\cal F} B$ and let $\bar u,\bar v$ be the two generalized orbits associated to the endpoints of $I$ as in the proof of items 3 and 4 above.
Let us consider the two boxes $B^{cu}_1=\pi_xP_{t_1}\pi_y(B^{cs}_1)$ and $\pi_xP_{t_1}\pi_y(B^{cs}_2)$: their interior intersect (since they contain $\pi_x\varphi_{t_1}(y)$). By construction $\partial^{\cal F} B^{cu}_1$ is contained in the union of $\pi_x(\cW^{cu}(\bar P_{t_1}(\bar u)))$ and $\pi_x(\cW^{cu}(\bar P_{t_1}(\bar v)))$. By construction, the ${\cal F}$-boundary of $P_{t_2}\pi_y(B^{cs}_2)$ is contained in the union of $\cW^{cu}_{\alpha_{box}}(\bar P_{t_2}(\bar u))$ and $\cW^{cu}_{\alpha_{box}}(\bar P_{t_2}(\bar v))$ and by the coherence stated in Section~\ref{ss.moreconstants} the ${\cal F}$-boundary of $\pi_xP_{t_1}\pi_y(B^{cs}_2)$ is also contained in the union of the projection by $\pi_x$ of the plaques $\cW^{cu}(\bar P_{t_1}(\bar u))$ and $\cW^{cu}(\bar P_{t_1}(\bar v))$. Moreover by applying Lemma~\ref{l.otherinclusion} to $\varphi_{t_1}(y)$, $\pi_x\circ P_{t_1}\circ \pi_y(B^{cs}_2)$ and to the box containing $B^{cu}_1$, the iterate $\pi_xP_{t_1}\pi_y(B^{cs}_2)$ can not intersect the ${\cal E}$-boundary of $B^{cu}_1$. This implies that $\pi_xP_{t_1}\pi_y(B^{cs}_2)$ is contained in $B^{cu}_1$, hence $B^{cs}_2\subset B^{cs}_1$.
The proof of Theorem~\ref{t.existence-box} is now complete. \qed
\section{Uniform hyperbolicity}\label{s.uniform} In this section we prove Theorem~\ref{Thm:1Dcontracting} (see section~\ref{ss.Dcontracting}).
\noindent {\bf Standing assumptions.} In the whole section, $(\cN,P)$ is a $C^2$ local fibered flow over a topological flow $(K,\varphi)$ which is not a periodic orbit and $\pi$ is a $C^2$-identification compatible with $(P_t)$ on an open set $U$ {as in Definition~\ref{d.compatible}} such that: \begin{enumerate} \item[(B1)] There is a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and the fibers of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ are one-dimensional. \item[(B2)] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on an open set $V$ containing $K\setminus U$. \item[(B3)] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on any compact invariant proper subset of $K$. \item[(B4)] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is topologically contracted. \end{enumerate}
The main result of this section is Proposition~\ref{Pro:measure-contracted}, which is proved in the next two sections.
\begin{Proposition}\label{Pro:measure-contracted}
Under the standing assumptions above, for any ergodic invariant measure $\mu$ whose support is $K$, if the Lyapunov exponent of $\mu$ along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is positive, then the Lyapunov exponent exponent of $\mu$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is negative.
\end{Proposition}
Consider a measure $\mu$ as in the statement of the proposition. We recall that $K=\operatorname{supp}(\mu)$ is not a periodic orbit. In particular the assumptions of Theorem~\ref{t.existence-box} are satisfied. One will choose $x\in K\setminus \overline V$, some $\beta_x$ small (to be precised later) and consider a Markovian box $R\subset B(0_x,\beta_x)$.
The proof is divided into two cases: the non-minimal case and the minimal case. \subsection{The non-minimal case}
In this section, we will prove Proposition~\ref{Pro:measure-contracted} when {\bf the dynamics on $K$ is not minimal}. Since $K=\operatorname{supp}(\mu)$, the dynamics on $K$ is transitive. One can thus fix a non-periodic point $x\in K\setminus \overline V$ whose orbit is dense in $K$ and reduce $r_0$ so that: \begin{itemize} \item[--] the ball $U(x,r_0)\subset K$ centered at $x$ with radius $r_0$ is contained in $U$, \item[--] the maximal invariant set in $K\setminus U(x,r_0)$ is non-empty. \end{itemize}
We still denote $\mu_x:=({\pi_x})_*(\mu|_{U(x,r_0)})$.
\subsubsection{Notations, choices of constants}\label{ss.nota}
\paragraph{a -- The constant $\beta_x$, the box $R$, the sets $\widehat R$ and $W$.} One chooses some constants $\beta_x>0$ and $T_x\geq 1$ which satisfy Lemma~\ref{l.continuity-Pliss}. One will also assume that $\beta_x$ smaller than $\beta_S$ in Lemma~\ref{Lem:schwartz}. Theorem~\ref{t.existence-box} associates to $\beta_x$ a box $R\subset B(0_x,\beta_x)\subset\cN_x$ whose interior has positive $\mu_x$-measure by Theorem~\ref{t.existence-box}.
\noindent {\it Notation.} For any box $B\subset \cN_x$, we denote by $\widehat B$ the following open subset of $K$: $$\widehat B:=\{y\in K,\; d(x,y)<r_0/2 \text{ and } \pi_x(y)\in\operatorname{Interior}(B)\}.$$ Let $W$ be the set of points $z$ such that $\varphi_s(z)\notin \widehat R$ for any $s\in [0,1]$.
\paragraph{b -- The constants $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}, C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$.} Note that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on the set $W':=\bigcup_{s\in [0,1]}\varphi_s(W)$: indeed this set is disjoint from the open set $\widehat R$ and by our choice of $r_0$ it contains a non-empty compact invariant proper set $K'\subset K$; if ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is not uniformly contracted on the set $W'$, one gets a contradiction with our assumption (B3). Proposition~\ref{l.summability} can thus be applied to $W$ and gives some $C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^0,\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ and $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq T_x$.
One sets $C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}=C^0_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\max_{-1\leq s\leq 1}\|DP_t\|^2$. Consequently a piece of orbit $(y,\varphi_t(y))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, once there exists $s_0,s_1\in [-1,1]$ and a piece of orbit $(\varphi_{-k}(z),z)$ satisfying the assumptions of Proposition~\ref{l.summability} and $y=\varphi_{-k+s_1}(z)$, $\varphi_t(y)=\varphi_{s_2}(z)$.
\paragraph{c -- Hyperbolicity for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ in $K\setminus \widehat R$.} By (B3), the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted outside $\widehat R$; one can thus relax again $C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ so that any $y\in K$ and $t>0$ satisfy:
$$\forall s\in [1/2,t-1/2],\; \varphi_s(y)\notin \widehat R \;\; \Rightarrow\;\; \|DP_{t}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)\|\le C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}} \lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-t}.$$
\paragraph{d -- Hyperbolicity for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.} As in Section~\ref{ss.assumptions}, $\lambda$ is associated to the $2$-domination ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
Let $\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}:=\lambda^{1/2}$. There is $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}>0$ with the following property:\hspace{-1cm}\mbox{}
If $(y,\varphi_{t-s}(y))$ is a $(2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss string for some $s\in [0,1]$, then $(y,\varphi_{t}(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{1/2},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
\paragraph{e -- The constants $\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\beta_{box}$.} We need weaker constants $C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$ for the hyperbolicity along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. We first set $\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}'=\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{1/2}$ and then apply the following lemma: fixing a transition $(x_0,t_0)$, it defines for points $y\in \widehat B^{cs}$ a piece of orbit $(y,\varphi_{t(y)}(y))$ shadowing $(x_0,\varphi_{t_0}(x_0))$.
\begin{Lemma}\label{l.shadow-box} There exists $\beta_{box}>0$ such that if $B_1,\dots,B_k$, $t_{box}$ are boxes and the constant associated to $R,C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\beta_{box}$ by Theorem~\ref{t.existence-box}, then the following holds for some $C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}>0$.
Let $(x_0,t_0)$ be a transition with $t_0>2t_{box}$ such that $(x_0,\varphi_n(x_0))$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss, where $n\in [t_0-1,t_0]$; let $B^{cs},B^{cu}$ be the associated sub-boxes, and $y\in \widehat {B}^{cs}$. Then there exist $\theta\in\operatorname{Lip}_2$ and $t(y)$ such that: \begin{enumerate} \item $\varphi_{t(y)}(y)\in\widehat B^{cu}$,
\item $|\theta(0)|\leq 1/4$, $|\theta(t_0)-t(y)|\leq 1/4$, \item $d(\varphi_t(x_0), \varphi_{\theta(t)}(y))<r_0/2$ for $t\in [-1,t_0+1]$, \item $(y,\varphi_{t(y)}(y))$ is a $(2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss string. \end{enumerate} If $(x_0,\varphi_{t_0}(x_0))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, {then} $(y,\varphi_{t(y)}(y))$ is $(C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. \end{Lemma}
From the items 1 and 4 of this Lemma and the choice of $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$, $(y,t(y))$ is a transition. The associated boxes are $B^{cs}$ and $B^{cu}$. Indeed, the items 2 and 3 together with the item~\ref{box4} of Theorem~\ref{t.existence-box} imply that the associated center-unstable box coincides with $B^{cu}$. By the Global invariance, the center-stable box $\pi_x\circ P_{-t(y)}\pi_{\varphi_{t_0(y)}(y)}(B^{cu})$ coincides with $B^{cs}$.
\begin{proof} For $\lambda'\in (1,\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{1/4})$, Lemma~\ref{l.shadowingandhyperbolicity} gives $C',\delta,\rho$. We may take $\rho \in (0,1/3)$. The Global invariance then associates to $\delta,\rho$ some constants $\beta,r$. By the Local injectivity, we can take $\beta_{box}\in (0,\beta)$ smaller such that for any $x,y\in U$ that are $r_0$-close to $x$ and satisfy $d(\pi_x(y),\pi_x(x_0))\leq \beta_{box}$, there exists $s\in [-1/4,1/4]$ such that $d(x,\varphi_s(y))<r$.
By Lemma~\ref{l.continuity-Pliss} (with the constants $\beta_x$ and $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}\geq T_x$ introduced in paragraphs (a) and (b) above), we obtain $\theta\in \operatorname{Lip}_2$
such that $|\theta(0)|\leq 1/4$ and item 3 is satisfied; moreover $(y,\varphi_{\theta(y)+a}(y))$ is a $(2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda^{1/2})$-Pliss string for any $a\in [-1,1]$. By the Local injectivity, there exists $t(y)\in [\theta(t_0)-1/4, \theta(t_0)+1/4]$ such that $d(\varphi_{t(y)}(y),x)<r_0/2$ and $\pi_x\circ \varphi_{t(y)}(y)=\pi_x\circ \varphi_{\theta(t_0)}(y)$. In particular item 2 holds. Moreover $\pi_{\varphi_{t_0}(x_0)}\circ \varphi_{\theta(t_0)}(y)=P_{t_0}\circ \pi_{x_0}(y)\in P_{t_0}\circ \pi_{x_0}(B^{cs})$. Its projection by $\pi_{x}$ belongs to $B^{cu}=\pi_{x}\circ P_{t_0}\circ \pi_{x_0}(B^{cs})$, so $\pi_{x} \circ \varphi_{\theta(t_0)}(y)\in B^{cu}$ giving the first item.
Let us assume now that $(x_0,\varphi_{t_0}(x_0))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and consider $\sigma\in [t_{box},t_0+1/4]$ such that \begin{itemize} \item[--] $\varphi_{\theta(\sigma)}(y)\in \widehat R$, \item[--] $\varphi_{\theta(s)}(y)\notin \widehat R$ for $s\in [t_{box},t_0]$ satisfying $\theta(s)\leq \theta(\sigma)-1/2$. \end{itemize} From the property stated at paragraph (c) above, we have for any $s\in [t_{box},\sigma]$
$$\|DP_{\theta(s)-\theta(t_{box})}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{\theta(t_{box})}(y))\|\le C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}{\lambda _{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-(\theta(s)-\theta(t_{box}))}.$$ Since $\theta(t_{box})<2t_{box}$, there exists $C_1>0$ independent from $x_0,t_0,y$ such that for any $s\in [0,\sigma]$, \begin{equation}\label{e.hyp1}
\|DP_{\theta(s)}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)\|\le C_1{\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^{-\theta(s)}. \end{equation}
From Local injectivity (recalled above) and since $\operatorname{Diam}(P_{s}\circ \pi_{x_0}(B^{cu}))<\beta_{box}$ for $s\in [t_{box},t_0]$ (by item~\ref{box3} of Theorem~\ref{t.existence-box}), there exists $\varepsilon\in[-1/4,1/4]$ such that $d(\varphi_{\theta(\sigma)+\varepsilon}(y),\varphi_{\sigma}(x_0))<r$. The Global invariance gives $\theta'\in\operatorname{Lip}_{1+\rho}$, such that for each $s\in [t_{box}-1,t_0+1]$ one has $d(\varphi_{\theta'(s)}(y),\varphi_s(x_0))<\delta$ and ${\theta'}(\sigma)=\theta(\sigma)+\varepsilon$.
Lemma~\ref{l.shadowingandhyperbolicity} now gives for each $s\in [t_{box}-1,t_0+1]$,
$$\|DP_{\theta'(s)-\theta'(t_{box})}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{\theta'(t_{box})}(y))\|\leq C'{\lambda'}^{s-t_{box}}\|DP_{s-t_{box}}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{t_{box}}(x_0))\|.$$ Since $(x_0,\varphi_{t_0}(x_0))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, since $\theta'$ is $3/2$-bi-Lipschitz, and since $\lambda'<\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{1/4}$, one gets $C_2>0$ (depending on $t_{box}$, not on $x_0,t_0,y$) such that for any $s\in [t_{box}-1,t_0+1]$, \begin{equation}\label{e.hyp2}
\|DP_{\theta'(s)-\theta'(t_{box})}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(\varphi_{\theta'(t_{box})}(y))\|\leq C_2{\lambda'}^{s-t_{box}}C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-(s-t_{box})}\leq C_2C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}^{-(\theta'(s)-\theta'(t_{box}))/2}. \end{equation} Combining~\eqref{e.hyp1} and~\eqref{e.hyp2}, one deduces that $(y,\varphi_{\theta(t_0)}(y))$ is $(C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ for some constant $C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$, provided $\theta'(t_{box}-1)\leq \theta(\sigma)$ and $\theta'(t_0+1)\geq t(y)$.
Since $\theta'$ is $2$-bi-Lipschitz, one gets $\theta'(t_{box}-1)\leq \theta'(\sigma)-1/2=\theta(\sigma)+\varepsilon-1/2<\theta(\sigma)$. One can apply Proposition~\ref{p.no-shear} to $\varphi_{\theta(\sigma)}(y)$, the reparametrization $\theta'\circ \theta^{-1}$ and the interval $[\theta(\sigma),\theta(t_0)]$. Since $|\theta'(\sigma)-\theta(\sigma)|<2$, one gets $\theta'(t_0)+1/2\geq \theta(t_0)$. Since $\theta'$ is $4/3$-bi-Lipschitz, this gives $\theta'(t_0+1)\geq \theta'(t_0)+3/4\geq \theta(t_0)+1/4\geq t(y)$ and concludes the proof. \end{proof}
\paragraph{f -- The sub-boxes $B_1,\dots,B_k$ and the constants $t_{box}$, $\Delta_{box},C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$.} Finally we apply Theorem~\ref{t.existence-box} to $R,C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\beta_{box}$ and obtain the sub-boxes $B_1,\dots,B_k$ and the constants $t_{box}$, $\Delta_{box}$ that we fix now. Lemma~\ref{l.shadow-box} gives $C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}$.
\subsubsection{Existence of large hyperbolic returns}
Since we have to prove that the Lyapunov exponent of $\mu$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is negative, one can reduce to the case where the following condition holds: \begin{enumerate} \item[(B5)] The Lyapunov exponent of $\mu$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is larger than $-\log(\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$. \end{enumerate} In the following we will say that a point $z\in K$ is called \emph{regular} if :
\begin{itemize}
\item[--] the orbit of $z$ equidistributes towards $\mu$, i.e. $\frac{1}{t}\int_0^t\delta_{\varphi_{s}(z)}ds\to\mu$ as $t\to\pm\infty$.
\item[--] For any iterate $\varphi_t(z)$, if $d(\varphi_t(z),x)<r_0$, then $\pi_x(\varphi_t(z))$ is not contained in the boundary of $R$, nor of any box $B_i$, $1\le i\le k$.
\end{itemize} By Birkhoff ergodic theorem and since the boundaries of the boxes $R$ and $B_i$ have zero measure (for $\mu_x=(\pi_x)_*(\mu)$), the set of regular points has full measure for $\mu$.
\begin{Lemma}\label{l.transition} For any $T_0>0$, there exists a regular point $x_0$ and $t_0>T_0$ such that \begin{itemize} \item[--] both $x_0$ and $\varphi_{t_0}(x_0)$ are in $\widehat R$, \item[--] $(x_0,\varphi_{n}(x_0))$ is $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss for some $n\in [t_0-1,t_0]$. \end{itemize} \end{Lemma} \begin{proof} Let us take a regular point $y$. We have $\omega(x)=\alpha(x)=K$. By assumption the maximal invariant set outside $\widehat R$ is a non-empty compact invariant proper set $K_0$ of $K$. One can assume that $y$ is very close to $K_0$. Thus, backward iterates $\varphi_{t_1}(y)$ and forward iterate $\varphi_{t_2}(y)$ in $\widehat R$ occur for $t_1$ and $t_2$ large: clearly, one can choose $y$ such that $t_2-t_1>T_0+1$. Choosing $t_1,t_2$ close to their infimum values, one furthermore gets that $\varphi_{t}(y)\not\in \widehat R$ for $t\in (t_1+1/4,t_2-1/4)$.
Let $x_0:=\varphi_{t_1}(y)$. Note that $x_0$ is not $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$: since $x_0$ is regular, its forward orbit equidistributes on the measure $\mu$ and this would imply that the Lyapunov exponent of $\mu$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is less than or equal to $-\log(\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$, contradicting the assumption (B5). By Proposition~\ref{l.summability} and the choice of constants in~\ref{ss.nota}(b), there exists a forward iterate $\varphi_n(x_0)\not\in W$, $n\geq 1$, such that $(x_0,\varphi_n(x_0))$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. By definition of $W$, there is $t_0\in [n,n+1]$ such that $\varphi_{t_0}(x_0)\in \widehat R$. By our choice of $t_1,t_2$, one has $t_0\geq t_2-1/2-t_1>T_0$. \end{proof}
\subsubsection{Contraction at returns} Let $(x_0,t_0)$ be given by Lemma~\ref{l.transition} for $T_0>2 t_{box}$. {Consider sub-boxes $B_{i_0}$, $B_{j_0}$ such that} $$\pi_x(x_0)\in \operatorname{Interior}(B_{i_0}),\quad \pi_x(\varphi_{t_0}(x_0))\in \operatorname{Interior}(B_{j_0}).$$ We get a transition $(x_0,t_0)$ between $B_{i_0}$ and $B_{j_0}$. By Theorem~\ref{t.existence-box}, one thus gets a center-stable sub-box $B^{cs}\subset B_{i_0}$ and a center-unstable sub-box $B^{cu}\subset B_{j_0}$ as in the statement of this theorem.
\begin{Lemma}\label{l.contraction} If $t_0$ is large enough, there exists $\lambda_*>1$ (depending on $t_0$) such that for any regular $y\in \widehat B^{cs}$, there is $\tau>t_{box}$ satisfying $\varphi_{\tau}(y)\in \widehat B^{cs}$ and
$$\|DP_{\tau}|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)\|\le \lambda_*^{-\tau}.$$ \end{Lemma}
The proof of this lemma breaks into 5 steps.
\paragraph{Step 1. Definition of the times $\sigma<t(y)\leq \tau$.} For any regular $y\in \widehat B^{cs}$, Lemma~\ref{l.shadow-box} gives a time $t(y)$ such that $\varphi_{t(y)}(y)\in \widehat B^{cu}$ and $(y,\varphi_{t(y)}(y))$ shadows the piece of orbit $(x_0,\varphi_{t_0}(x_0))$.
The forward orbit of $y$ is dense in $K$, hence there is $\tau\geq t(y)$ such that {$$\varphi_{\tau}(y)\in \widehat B^{cs},~\textrm{but}~\varphi_{s}(y)\not\in \widehat B^{cs}~\textrm{for any}~s\in (t(y),\tau-1).$$}
We also introduce a return time $\sigma\in[0,t(y)-1]$ (possibly equal to $0$) such that {$$\varphi_{\sigma}(y)\in \widehat B^{cs},~\textrm{but}~\varphi_{s}(y)\not\in \widehat B^{cs}~\textrm{for any}~s\in (\sigma+1,t(y)-1).$$}
\paragraph{Step 2. Definition of the times $t_1<t_2<\dots<t_\ell$.} We now introduce intermediate times between $\sigma+1$ and $\tau-1$. We first set $$t_1=t(\varphi_{\sigma}(y)).$$
By applying Lemma~\ref{l.shadow-box} twice, the orbit segment $(y,\varphi_{t_1}(y))$ is $({C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^2,\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}')$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. Let $C_2=\max_{0\leq t\leq 2} \|DP_t\|{\lambda'}_{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}^2$. From Lemma~\ref{l.shadow-box}, we get $$\tau\geq t(y)\geq \frac 1 2 t_0-\frac 1 4.$$ If $t_1+2\ge \tau$, then provided $t_0$ has been chosen large enough one gets
$$\|{DP_{\tau}}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(y)\|\leq C_2\;{C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^2{\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}'}^{-\tau}\leq C_2\;{C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^2{\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}'}^{-\frac 1 2 (\frac {t_0}{2}-1/4)}{\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^{-\tau/2}\leq {\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^{-\tau/2}.$$ Hence the conclusion of Lemma~\ref{l.contraction} holds in this case with $\lambda_*={\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}'}^{1/2}$. A similar discussion holds when $\tau\leq t(y)+2$. Thus, without loss of generality, we can assume that: $$t_1+2< \tau \text{ and } t(y)+2<\tau.$$
\begin{Sublemma}\label{l.extistence-tm} There exists a sequence of times $\{t_m\}_{m=2}^\ell$ in $[t_1+1,\tau-1]$ such that:
\begin{itemize}
\item[--] $\varphi_{t_m}(y)\in \widehat R$ and $(\varphi_{\sigma}(y),\varphi_{t_m}(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
(Equivalently $(\varphi_{\sigma}(y), t_m-\sigma)$ is a transition.)
\item[--] $t_m\geq t_{m-1}+1$ and $(\varphi_{t_{m-1}}(y),\varphi_{t_{m}}(y))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
\item[--] $(\varphi_{t_\ell}(y),\varphi_{\tau}(y))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
\end{itemize}
\end{Sublemma} \begin{proof} We define inductively the increasing sequence of integers $\{n_m\}_{m=1}^\ell$ such that:
\begin{itemize}
\item[--] $n_1=0$ and for any $2\le m\le \ell$, the piece of orbit $(\varphi_{t_1}(y),\varphi_{t_1+n_m}(y))$ is a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string, $n_{m}-n_{m-1}\geq 2$ and $\varphi_{t_1+n_m}(y)\notin W$;
\item[--] for any integer $0\leq n\leq \tau(y)-t_1(y)-2$ such that neither $n$, nor $n-1$ belong to $\{n_1,\dots,n_\ell\}$, then either $\varphi_{t_{1}+n}(y)\in W$ or $(\varphi_{t_1}(y),\varphi_{t_{1}+n}(y))$ is not a $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss string.
\end{itemize} By definition of $W$, there exists $t_m\in [t_1+n_m,t_1+n_m+1]$ such that $\varphi_{t_m}(y)\in \widehat R$. Note that we have $t_m\geq t_{m-1}+1$ and $t_\ell+1\leq \tau$.
By our choice of $C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}$, the piece of orbit $(\varphi_{t_1}(y),\varphi_{t_{m}}(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{1/2},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. By Lemma~\ref{l.shadow-box}, $(\varphi_{\sigma}(y),\varphi_{t_1}(y))$ is a $(2T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-Pliss string, hence is also $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}}^{1/2},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. Consequently $(\varphi_{\sigma}(y),\varphi_{t_{m}}(y))$ is $(C_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}},\lambda_{{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}})$-hyperbolic for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
By our choice of $n_m$, any integer $n$ with $n_{m-1}+2\leq n<n_{m}$ either belongs to $W$ or satisfies that $(\varphi_{t_1}(y),\varphi_{t_{1}+n}(y))$ is not $T_{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$-Pliss. Proposition~\ref{l.summability} and the choice of $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$ implies that $(\varphi_{t_1+n_{m-1}}(y),\varphi_{t_1+n_{m}}(y))$ and $(\varphi_{t_{m-1}}(y),\varphi_{t_{m}}(y))$ are $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. This gives the second item. The third item is obtained similarly. \end{proof}
\paragraph{Step 3. Construction of center-unstable boxes associated to the times $t_m$.} By Sublemma~\ref{l.extistence-tm}, $(\varphi_{\sigma}(y), t_m-\sigma)$ is a Markovian transition between boxes in $\{B_1,B_2,\cdots,B_k\}$ for any $1\leq m\leq \ell$. By Theorem~\ref{t.existence-box} it defines a center-stable sub-box $B_m^{cs}$ and a center-unstable sub-box $B_m^{cu}$. Moreover the distortion of $B_{m}^{cu}$ is bounded by the constant $\Delta_{box}$.
\begin{Sublemma}\label{l.box-disjoint}
The interiors of the boxes $B_m^{cu}$, for $t_{box}<m\leq \ell$, are mutually disjoint.
\end{Sublemma}
\begin{proof} Let us first notice that if $m>t_{box}$, the center-stable sub-box $B^{cs}_m$ is contained in $B^{cs}$: indeed, let us consider the transitions $(\varphi_{\sigma}(y),\varphi_{t_1}(y))$ and $(\varphi_{\sigma}(y),\varphi_{t_m}(y))$. We have $t_1-\sigma>t_{box}$ and $t_m-t_1>t_{box}$. Moreover, the boxes associated to the first transition are $B^{cs},B^{cu}$ (as explained after the Lemma~\ref{l.shadow-box}). Theorem~\ref{t.existence-box}, item~\ref{box5}, implies that $B^{cs}_m$ is contained in $B^{cs}=B^{cs}_1$.
Assume by contradiction that the interiors of $B_i^{cu}$ and $B_j^{cu}$, for $i\neq j$ larger than $t_{box}$, intersect. Up to exchange $i$ and $j$, the item~\ref{box4} of Theorem~\ref{t.existence-box} gives $\theta\in \operatorname{Lip}_2$ such that \begin{itemize} \item[--] $d(\varphi_{s}(y),\varphi_{\theta(s)}(y))<r_0/2$, for any $s\in [\sigma,t_i]\cap \theta^{-1}([\sigma,t_j])$,
\item[--]$|\theta(t_i)-t_j|\leq 1/2$ and $\theta(\sigma)\geq \sigma-1$. \end{itemize} \begin{Claim} $\theta(\sigma)>\sigma+2$. \end{Claim} \begin{proof} By Proposition~\ref{p.no-shear}, $\theta(\sigma)\in[\sigma-1,\sigma+2]$
implies $|\theta(t_i)-t_i|<1/2$. This gives
$|t_i-t_j|<1$ and this contradicts the definition of the sequence $(t_m)$ since $t_{m+1}-t_m\geq 1$ for any $m$. \end{proof}
Since $B^{cs}_i\subset B^{cs}$, the image $\pi_{\varphi_{\sigma}(y)}(B_i^{cs})=P_{-(t_i-\sigma)}\circ\pi_{\varphi_{t_i}(y)}(B^{cu}_i)$ by $\pi_x$ is contained in $B^{cs}$. Since $\pi_x\circ \varphi_{t_j}(y)$ belongs to $B^{cu}_j\subset B^{cu}_i$, one gets $$\pi_x\circ P_{-(t_i-\sigma)}\circ\pi_{\varphi_{t_i}(y)}(\varphi_{t_j}(y))\in B^{cs}.$$ Using the Global invariance this gives $$\pi_x\circ P_{-(\theta(t_i)-\theta(\sigma)(y))}\circ \pi_{\varphi_{\theta(t_i)}(y)}(\varphi_{t_j}(y))\in B^{cs}.$$
Since $|t_j-\theta(t_i)|\leq 1/2$, the Local invariance gives
$0_{\varphi_{\theta(t_i)}(y)}= \pi_{\varphi_{\theta(t_i)}(y)}(\varphi_{t_j}(y))$, hence $\pi_{x}\circ \varphi_{\theta(\sigma)}(y)\in B^{cs}$. The local injectivity gives $s$ with $|\theta(\sigma)-s|\leq 1/4$ such that $\varphi_s(y)\in \widehat B^{cs}$.
\begin{Claim} We have $t(y)+1<s<\tau-1$. \end{Claim} \begin{proof} Since $\theta(\sigma)>\sigma+2$, we have $\sigma+1<s$. By definition of $\sigma$, one gets $s\geq t(y)-1$.
Note that $\pi_x(\varphi_s(y))=\pi_x(\varphi_{t(y)}(t))$ when $s\in[t(y)-1,t(y)+1]$, hence $\varphi_{t(y)}(y)\in \widehat B^{cs}$; this gives $\tau=t(y)$ by definition and contradicts the assumption $\tau>t(y)+1$.
Since $\theta(\sigma)\leq \theta(t_i)\leq t_j+1/2$, one gets $s\leq t_j+3/4$. Since $t_\ell+2<\tau$, we have $s<\tau-1$. \end{proof}
We have thus obtained a time $s$ which contradicts the definition of $\tau$. This concludes the proof of Sublemma~\ref{l.box-disjoint}. \end{proof}
\paragraph{Step 4. Summability.} Let $J^{cs}(y)=\cW^{cs}(y)\cap \pi_y(B^{cs})$. In each sub-box $B_j$ of $R$, we choose a $C^1$-curve $\gamma_j$ tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ with endpoints in $\partial^{cu}B_j$. Set
$$L_{\cal B}} \def\cH{{\cal H}} \def\cN{{\cal N}} \def\cT{{\cal T}=\sum_{1\leq j\leq k}|\gamma_j|.$$ It only depends on $R$ and $B_1,\dots,B_k$, but not on the points $x_0$, $y$.
\begin{Sublemma} We have
$$\sum_{i=0}^{[\tau]}|P_i(J^{cs}(y))|\le C_{sum}:=\frac{2{C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^2\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}{\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}-1}\Delta_{box}L_{{\cal B}} \def\cH{{\cal H}} \def\cN{{\cal N}} \def\cT{{\cal T}}(1+t_{box}).$$ \end{Sublemma} \begin{proof}
For each $1\leq m\leq \tau$ we have $|P_{t_m}(J^{cs}(y))|\leq \Delta_{box}L_{{\cal B}} \def\cH{{\cal H}} \def\cN{{\cal N}} \def\cT{{\cal T}}$. Moreover $P_{t_m}(J^{cs}(y))$ is a curve tangent to ${\cal C}} \def\cI{{\cal I}} \def\cO{{\cal O}} \def\cU{{\cal U}^{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and crosses $B^{cu}_m$. Since the interiors of $\{B_m^{cu},\; t_{box}<m\leq \ell\}$ are mutually disjoint (Lemma~\ref{l.box-disjoint}) and are center-unstable sub-boxes of $B_1,\dots,B_k$ which have distortion bounded by $\Delta_{box}$, we have that
$$\sum_{1\leq m\leq \ell}|P_{t_m}(J^{cs}(y))|\le \Delta_{box}L_{{\cal B}} \def\cH{{\cal H}} \def\cN{{\cal N}} \def\cT{{\cal T}}(1+t_{box}).$$ By Sublemma~\ref{l.extistence-tm}, $(\varphi_{t_m}(y),\varphi_{t_{m+1}}(y))$ is $(C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}},\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}})$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. Thus, we have
$$\sum_{t_m\leq i\leq t_{m+1}}|P_{i}(J^{cs}(y))|\le \frac{C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}{\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}-1}|P_{t_m}(J^{cs}(y))|.$$ A similar estimate holds for integers $i$ in $[t_\ell,\tau]$. Hence
$$\sum_{t_1\leq i\leq \tau}|P_i(J^{cs}(y))|\le \frac{C_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}{\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}-1}\Delta_{box}L_{{\cal B}} \def\cH{{\cal H}} \def\cN{{\cal N}} \def\cT{{\cal T}}(1+t_{box}).$$
We have shown previously that $(y,\varphi_{t_1}(y))$ is $({C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^2,\lambda_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}')$-hyperbolic for ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, hence $$\sum_{0\le i<t_1}|P_i(J^{cs}(y))|\le \frac{{C'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}^2\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}}{\lambda'_{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}-1}\Delta_{box}L_{{\cal B}} \def\cH{{\cal H}} \def\cN{{\cal N}} \def\cT{{\cal T}}.$$ The estimate of the sublemma follows from these two last inequalities. \end{proof}
\paragraph{Step 5. End of the proof of Lemma~\ref{l.contraction}.} By item~\ref{box3} of Theorem~\ref{t.existence-box}, for any $0\leq s \leq \tau$ one has $P_s(J^{cs}(y))\subset B(0_{\varphi_s(y)},\beta_S)$. Lemma~\ref{Lem:schwartz} associates to $C_{Sum}$ a constant $C_S>1$, independent from $x_0,t_0$ and gives
$$\|DP_{\tau}|{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)}\|\le C_{S}\frac{|P_{\tau}J^{cs}(y)|}{|J^{cs}(y)|}.$$
By construction $\tau\geq \frac {t_0}{2}-1/4$. Moreover, $|J^{cs}(y)|$ is bounded away from zero independently from $x_0,t_0$. The topological hyperbolicity of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ ensures that $|P_{\tau}J^{cs}(y)|$ is arbitrarily small if $\tau$ is large. As a consequence, if $t_0$ is large enough, for any regular $y\in \widehat B^{cs}$, one gets
$$\|DP_{\tau}|{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)}\|\leq \frac 1 2.$$ By assumption (B3), ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on the maximal invariant set in $K\setminus \widehat B^{cs}$. The time $t(y)$ is bounded uniformly in $y$ and $\varphi_s(y)$ does not meet $\widehat B^{cs}$ for $s\in (t(y),\tau-1)$. Hence there exists $C_B,\lambda_B>1$ (depending on $x_0,t_0$) such that for any regular $y\in \widehat B^{cs}$,
$$\|DP_{\tau}|{{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)}\|\leq C_B\lambda_B^{-\tau}.$$ Choosing $\lambda_*>1$ close to $1$ one gets for any $t>0$, $$\min(1/2, C_B\lambda_B^{-t})\leq \lambda_*^{-t},$$ which gives the estimate of Lemma~\ref{l.contraction}. \qed
\subsubsection{Proof of Proposition~\ref{Pro:measure-contracted} in the non-minimal case}
We can now conclude the proof of Proposition~\ref{Pro:measure-contracted} when the dynamics on $K$ is non-minimal. For any regular point $y\in \widehat B^{cs}$, we have obtained a contracting return $\tau(y)$. This allows to define an increasing sequence of times $\{\tau_n\}_{n\in\NN}$ such that $\tau_0=\tau(y)$ and $\tau_{n+1}=\tau_n+\tau(\varphi_{\tau_n}(y))$. By Lemma~\ref{l.contraction}, we have for any $n\geq 0$
$$\|{DP_{\tau_n}}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(y)\|\le \lambda_*^{-\tau_n}.$$
Since $y$ is regular, $\frac 1 {\tau_n} \log({\|DP_{\tau_n}}{|{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(y)\|)$ converges as $n\to +\infty$ to the Lyapunov exponent of $\mu$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$. Consequently this Lyapunov exponent is smaller or equal to $-\log(\lambda_*)$. Hence it is negative as announced. \qed
\subsection{The minimal case}
In this section, we will continue to prove Proposition~\ref{Pro:measure-contracted}, now assuming that {\bf the dynamics on $K$ is minimal}. We will apply a local version of the result of Pujals and Sambarino \cite{PS1}.
\begin{Theorem}\label{Thm:localized-pujals-sambarino} Assume that $f:~W_1\to W_2$ is a $C^2$ diffeomorphism, where $W_1,W_2\subset \RR^2$ are open sets, and that there is a compact invariant set $\Lambda\subset W_1\cap W_2$ of $f$ such that: \begin{itemize} \item[--] every periodic point in $\Lambda$ is a hyperbolic saddle, \item[--] $\Lambda$ admits a dominated splitting $T_\Lambda\RR^2=E\oplus F$, \item[--] $\Lambda$ does not contain a circle tangent to $E$ or $F$ which is invariant by an iterate of $f$, \end{itemize} then $\Lambda $ is hyperbolic. \end{Theorem}
Note that Pujals-Sambarino stated their theorem for global diffeomorphisms of a compact surface, but their proof gives also the local result above. It is also obtained in \cite{CPS}.
Our goal now is to reduce the minimal case to Theorem~\ref{Thm:localized-pujals-sambarino} by introducing a local surface diffeomorphism and an invariant compact set $\Lambda$.
Let us consider some $r$ small such that the ``No small period" assumption holds for some $\varkappa<1/2$. As before one chooses a point $x\in K\setminus \overline V$, some $\beta_x$, and a box $R\subset B(0_x,\beta_x)$ given by Theorem~\ref{t.existence-box}. We introduce the set $$\widehat R=\{y\in K,\; d(y,x)<r_0/2 \text{ and } \pi_x(y)\in R\}.$$ Assuming that $\beta_x$ has been chosen small enough, the Local injectivity associates to any $y\in \widehat R$, a point $y'\in \widehat R$ such that $d(y',x)<r/2$ and $\pi_x(y)=\pi_x(y')$. Moreover $y'=\varphi_t(y)$ for some $t\in [-1/4,1/4]$.
\paragraph{The set $\Lambda$.} We introduce the following set: $$\Lambda:=\pi_x(\widehat R)=\{u\in R,\; \exists y\in K,\; d(x,y)<r_0/2 \text{ and } u=\pi_x(y)\}.$$ Note that, one can choose the points $y$ in the definition of $\Lambda$ to be $r/2$-close to $x$ and in particular to satisfy $d(x,y)\leq r_0/3$.
\begin{Lemma} $\Lambda$ is compact and contained in the interior of $R$. \end{Lemma} \begin{proof} Indeed, let us consider $\{u_n\}_{n\in\NN}$ in $\Lambda$ such that $\lim_{n\to\infty}u_n=u$. We take $y_n\in K$ that is $r_0/3$-close to $x$ such that $\pi_x(y_n)=u_n$. Taking a subsequence if necessary, we assume that $y=\lim_{n\to\infty}y_n$. This point is $r_0/3$-close to $x$ and by continuity of the identification $\pi_x(y)=u$. Hence $u\in \Lambda$, proving that $\Lambda$ is compact.
Since $K$ is not periodic and is minimal, $K$ does not contain periodic orbits. By property~\ref{i.markov-boundary} of Theorem~\ref{t.existence-box}, one deduces that for any point $y$ which is $r_0$-close to $x$, the projection $\pi_x(y)$ is disjoint from the boundary $\partial R$. Hence $\Lambda=\pi_x(\widehat R)$ is contained in the interior of $R$. \end{proof}
\paragraph{The return map $f$ on $\Lambda$.} For any $u\in \Lambda$, one defines $f(u)$ as follows: consider $y\in \widehat R$ such that $\pi_x(y)=u$ and choose the smallest $t\ge 1$ such that $d(\varphi_t(y),x)\leq r_0/3$ and $\pi_x(\varphi_t(y))\in \Lambda$ (such a $t$ exists because $K$ is minimal). We then define $f(u)=\pi_x(\varphi_t(y))$.
\begin{Lemma}\label{l.well-def} $f$ is well defined. \end{Lemma} \begin{proof} We have to check that the definition of $f(u)$ does not depend on the choice of $y$. So we consider $y,y'\in \widehat R$ such that $\pi_x(y)=\pi_x(y')$ and the minimal times $t,t'$ as in the previous definition. By the Local injectivity, there exists $s_0\in [-1/4,1/4]$ such that $y_0:=\varphi_{s_0}(y)$ is $r/2$-close to $x$ and $\pi_x(y)=\pi_x(y_0)$. One builds similarly $s'_0$ and $y'_0$. Then $y_0,y_0'$ are $r_0$ close to each other and satisfy $\pi_x(y_0)=\pi_x(y'_0)$
so that by the Local injectivity, $y'_0=\varphi_{s}(y_0)$ for some $s\in [-1/4,1/4]$. In particular $y'=\varphi_{s+s_0-s'_0}(y)$ with $|s+s_0-s'_0|\leq 3/4$.
Using the Local injectivity, there exists $\tau\in [-1/4,1/4]$ such that $\varphi_{t+\tau}(y)$ is $r/2$-close to $x$ and satisfies $\pi_x(\varphi_{t+\tau}(y))=\pi_x(\varphi_t(y))$. Since $y_0:=\varphi_{s_0}(y)$ and $\varphi_{t+\tau}(y)$ are both $r/2$-close to $x$, the ``No small period" assumption implies that
$|t+\tau-s_0|$ is either larger or equal to $2$, or smaller than $1/2$. Since by definition $t\geq 1$ one has $t+\tau-s_0\geq 2$.
Thus, $\varphi_t(y)$ is the image of $y'$ at the time $t-(s_0+s+s'_0)=(t+\tau-s_0)-(s'_0+\tau+s_0)$, which is larger than $2-3/4>1$. By minimality in the definition of $t$, $\varphi_t(y)$ is a forward iterate of $\varphi_{t'}(y')$. In a similar way $\varphi_{t'}(y')$ is a forward iterate of $\varphi_{t}(y)$. Hence these two points coincide. \end{proof}
The next lemma shows that the orbits under $f$ correspond to (the projection by $\pi_x$ of) orbits under $\varphi$ restricted to $\widehat R$. \begin{Lemma}\label{l.orbit-f} For any $y\in \widehat R$, let $t=\min\{s\geq 1,\; d(\varphi_s(y),x)\leq r_0/3 \text{ and }\varphi_s(y)\in \widehat R\}$. Then for any $s\in [0,t]$ such that $\varphi_s(y)\in \widehat R$, we have $s\notin (3/4,3/2)$. Moreover: \begin{itemize} \item[--] if $s\leq 3/4$, $\pi_x(\varphi_s(y))=\pi_x(y)$, \item[--] if $s\geq 3/2$, $\pi_x(\varphi_s(y))=\pi_x(\varphi_t(y))$ (which coincides with $f(\pi_x(y))$) and $s\geq t-1/4$.
\end{itemize} \end{Lemma} \begin{proof}
The proof of Lemma~\ref{l.well-def} showed that if $y,y'\in \widehat R$ have the same projection by $\pi_x$, then $y'=\varphi_s(y)$ for some $|s|\leq 3/4$.
On the other hand if $y,y'\in \widehat R$ belong to the same orbit (i.e. $y'=\varphi_s(y)$) but have different projection by $\pi_x$, then $|s|>3/2$. Indeed, by the Local injectivity, there exists $y_0=\varphi_{s_0}(y)$ and $y'_0=\varphi_{s'_0}(y')$
which are $r/2$-close to $x$ such that $|s_0|+|s'_0|\leq 1/2$. Since $y_0,y'_0$ have different projections by $\pi_x$, the Local invariance implies that $|s-s_0+s'_0|\ge2$. This gives $|s|> 3/2$.
These two properties imply that for any $y,y'\in \widehat R$ with $y'=\varphi_s(y)$, then $s\notin (3/4,3/2)$
and these points have the same projection by $\pi_x$ if $|s|\leq 3/4$.
Let us assume that $3/2\leq s\leq t$. One considers by the Local injectivity
$s'$ such that $|s-s'|\leq 1/4$, $\varphi_{s'}(y)$ is $r/2$-close to $x$ and $\pi_x(\varphi_{s'}(y))=\pi_x(\varphi_{s}(y))$. Then $s'\geq 1$ and by definition of $t$, one gets $s'\geq t$. Consequently $s\geq t-1/4$ and $\pi_x(\varphi_s(y))=\pi_x(\varphi_t(y))$. \end{proof}
\begin{Lemma}\label{l.continuous} The map $f$ is continuous. \end{Lemma} \begin{proof} Fix $u\in\Lambda$. There is $y\in \widehat R$ such that $d(x,y)< r/2$ and $u=\pi_x(y)$. For any $u'\in \Lambda$ close to $u$, there exists $y'\in \widehat R$ with the same properties and such that $\pi_y(y')$ is arbitrarily close to $0_y$. So by the Local injectivity and the ``No small period" assumption, one can choose $y'$ arbitrarily close to $y$. Let $t,t'$ be the times associated to $y,y'$ as in Lemma~\ref{l.orbit-f}.
By continuity of the flow, $\varphi_t(y')$ is $r_0$-close to $x$ and has a projection by $\pi_x$ close to $\pi_x(\varphi_t(y))\in \Lambda$. Since $\Lambda$ is compact and contained in the interior of $R$, $\pi_x(\varphi_t(y'))$ belongs to $R$ (hence to $\Lambda)$. We claim that it coincides with $f(u')$ which will conclude the proof.
Let us assume by contradiction that $\pi_x(\varphi_t(y'))\neq \pi_x(\varphi_{t'}(y'))$. Lemma~\ref{l.orbit-f} implies that $t\geq t'+3/2$. Then $\varphi_{t'}(y)$ is $r_0/2$-close to $x$ and projects by $\pi_x$ to $R$. We get $\varphi_{t'}(y)\in \widehat R$ with $1\leq t'\leq t-3/2$, contradicting Lemma~\ref{l.orbit-f}. \end{proof}
By repeating the above construction for negative times, we will obtain another map. Then Lemma~\ref{l.orbit-f} shows that it is the inverse of $f$. Since $\varphi$ is minimal, this gives:
\begin{Corollary} $f$ is a homeomorphism and induces a minimal dynamics on $K$. \end{Corollary} \noindent \paragraph{Extension of $f$ as a local diffeomorphism.} For any $u\in \Lambda$ we choose $y$ and $t$ as in the definition of $f$. One gets a local $C^2$-diffeomorphism $f_u$ from a (uniform) neighborhood of $u$ to a (uniform) neighborhood of $f(u)$ defined by $\pi_xP_t\pi_y$. (The uniformity comes from the fact that $t$ is uniformly bounded. The Local invariance shows it does not depend on $y$.)
\begin{Lemma} There exists a local diffeomorphism on a neighborhood of $\Lambda$ which extends $f$ and each $f_u$. \end{Lemma} \begin{proof} For $u',u\in \Lambda$ that are close, we have to show that the diffeomorphisms $f_u,f_{u'}$ matches on uniform neighborhood of $u$ and $u'$. Let us consider $y,y'$ and $t,t'$ defining the local diffeomorphisms, such that $y,y'$ are $r/2$-close to $x$. Note that from the proof of Lemma~\ref{l.continuous}, $y,y'$ (resp. $t,t'$) can be chosen arbitrarily close if $u,u'$ are close.
Take $z \in \cN_x$ in the intersection of the domains close to $\pi_x(u)$ and $\pi_x(u')$. Its projection by $\pi_y$ and $\pi_{y'}$ gives $v\in\cN_y$ and $v'\in \cN_{y'}$ close to $0_y$ and $0_{y'}$ whose orbits under $P$ remain close to the zero section during the time $t$ (resp. $t'$). By Global invariance, there exists an increasing homeomorphism $\theta$ of $\RR$ close to the identity such that $$f_u(z )=\pi_xP_t(v)=\pi_xP_{\theta(t)}(v')=\pi_xP_{t'}(v')=f_{u'}(z ).$$
One deduces that the maps $f_u$ define a $C^2$ map on a neighborhood of $\Lambda$ which extends $f$. Since the same construction can be applied with the local diffeomorphisms $f_u^{-1}$, one concludes that $f$ is a diffeomorphism. \end{proof}
\noindent \paragraph{Extensions of the bundles $E,F$.} \begin{Lemma} The tangent bundle over $\Lambda$ admits a splitting $E\oplus F$ which is invariant and dominated by $f$. Moreover $E$ is uniformly contracted by $f$ on $\Lambda$ if and only if ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by the flow $(P_t)$ on $K$. \end{Lemma} \begin{proof} At each point $u\in \Lambda$ we define the spaces $E(u),F(u)$ as the image by $D\pi_x(0_y)$ of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y),{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(y)$ where $y\in \widehat R$ and $\pi_x(y)=u$. These spaces are well defined: if $y'\in\widehat R$ also satisfies $\pi_x(y')=u$, then $y'=\varphi_t(y)$ for some $t\in [-1,1]$; the Local injectivity and the invariance of the bundles ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ implies that $D\pi_x(0_y).{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)=D\pi_x(0_{y'}).{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y')$. The same holds for ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. The continuity of the families $\pi_{y,x}$ and of the bundles ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W},{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ over the $0$-section of $\cN$ implies that $E,F$ are continuous over $\Lambda$.
Let us consider $u'=f(u)$ and two points $y,y'\in \widehat R$ that are $r_0/2$-close to $x$ such that $\pi_x(y)=u$ and $\pi_{x}(y')=u'$. Then, there exits $t>0$ such that $\varphi_t(y)=y'$ so that $DP_t(0_y).{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y')$. Consequently, we obtain the invariance of $E$ by $Df$: $$Df(u).E(u)=D\pi_{x}(0_{y'})\circ DP_t(0_y)\circ D\pi_y(0_x)\circ D\pi_x(0_y).{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y)=D\pi_{x}(0_{y'}).{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(y')=E(u').$$
Note that the splitting $E\oplus F$ on $\Lambda$ is dominated for the dynamics of $Df$ since $Df^N(u)$ coincides for $N$ large with $D\pi_x\circ DP_t\circ D\pi_y$ for some large $t>0$ and some $y\in \widehat R$ satisfying $u=\pi_x(y)$ and since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is dominated for the dynamics of $DP_t$. Since all orbits of $f$ correspond to the orbit under $\varphi$ (by minimality of $K$), the argument proves that $E$ is uniformly contracted by $f$ on $\Lambda$ if and only if ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by the flow $(P_t)$ on $K$. \end{proof}
\noindent \paragraph{End of the proof of Proposition~\ref{Pro:measure-contracted} in the minimal case.} Since the set $K$ is minimal and not a periodic orbit, the set $\Lambda$ does not contain any periodic orbit. Note that $\Lambda$ cannot contain a closed curve tangent to $E$ nor a closed curve tangent to $F$ since $\Lambda$ is contained in $R$ which has arbitrarily small diameter. So Pujals-Sambarino's theorem applies and $\Lambda$ is a hyperbolic set for $f$. This implies that $E$ and ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ are uniformly contracted by $f$ and $(P_t)$ respectively as required.
The proof of Proposition~\ref{Pro:measure-contracted} is now complete. \qed
\subsection{Fibered version of Ma\~n\'e-Pujals-Sambarino's theorem}\label{ss.Dcontracting}
\begin{proof}[Proof of Theorem~\ref{Thm:1Dcontracting}] Let us assume that a local fibered flow $(\cN,P)$ satisfies the assumptions of Theorem~\ref{Thm:1Dcontracting}. We suppose furthermore that $K$ does not contain a normally expanded irrational torus, and that ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over each periodic orbit.
Assume by contradiction that the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is not uniformly contracted. Then, there exists a non-empty invariant compact subset $\widetilde K\subset K$ such that \begin{itemize} \item[--] ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is is not uniformly contracted over $\widetilde K$, \item[--] but ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over any invariant compact proper subset $\widetilde K'\subset \widetilde K$. \end{itemize} The assumptions (A1), (A2), (A3) are satisfied and the Theorem~\ref{Thm:topologicalcontracting} can be applied to $\widetilde K$. By our assumptions, the two first conclusions are not satisfied, hence the bundle ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ over $\widetilde K$ is topologically contracted. Note also that since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is contracted over periodic orbits of $K$, the set $\widetilde K$ is not reduced to a periodic orbit. The properties (B1), (B2), (B3), (B4) are satisfied on $\widetilde K$.
Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is not uniformly contracted over $\widetilde K$ and is one-dimensional, there exists an ergodic measure $\mu$ with support contained in $\widetilde K$ whose Lyapunov exponent along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ non-negative. By domination, the Lyapunov exponent along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is positive. Since ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over any invariant proper compact subset, the support of $\mu$ coincides with $\widetilde K$. Proposition~\ref{Pro:measure-contracted} applies to $\widetilde K$ and $\mu$ and contradicts the fact that the Lyapunov exponent of $\mu$ along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is non-negative. Hence ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over $K$. \end{proof}
\section{Generalized Ma\~n\'e-Pujals-Sambarino theorem for singular flows}\label{s.MPS-theo}
In this section, we will prove Theorem~A' by using Theorem~\ref{t.compactification} and Theorem~\ref{Thm:1Dcontracting}. We consider a manifold $M$ and an invariant compact set $\Lambda$ for a $C^2$ vector field $X$ on $M$ whose singularities are hyperbolic and have simple real eigenvalues (in particular their number is finite). The results trivially holds for isolated singularities (since by assumption they admit a negative Lyapunov exponent). Hence, it is enough to assume that the set of regular orbits is dense in $\Lambda$.
In the last subsection we will prove the easy side of Theorem A'. In all the other subsections, we assume that the linear Poincar\'e flow on $\Lambda\setminus {\rm Sing}(X)$ has a dominated splitting $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and prove the existence of a dominated splitting for the tangent flow.
\subsection{Compactification} One first applies Theorem~\ref{t.compactification} and gets maps $$i\colon \Lambda \setminus {\operatorname{Sing}}(X)\to K:=\widehat \Lambda
\text{ and } I\colon \cN M|_{\Lambda \setminus {\operatorname{Sing}}(X)}\to \cN:=\widehat {\cN M}.$$
The set $K$ is the closure of $\Lambda\setminus {\operatorname{Sing}}(X)$
in the blowup $\widehat M$ of $M$ at each singularity ${\operatorname{Sing}}(X)\cap \Lambda$, so that the map $i$ is the canonical injection of $\cN M|_{\Lambda \setminus {\operatorname{Sing}}(X)}$ in $\cN$, and $I$ is the canonical injection of $\cN M|_{\Lambda \setminus {\operatorname{Sing}}(X)}$ inside the compactification $\cN$. In the following we drop the injections $i$ and $I$.
The set $K$ is endowed with a flow $\widehat \varphi$ which extends the flow $\varphi$ on $\Lambda\setminus {\operatorname{Sing}}(X)$. The rescaled sectional Poincar\'e flow extends as a $C^2$ local fibered flow $\widehat P^*$ in a neighborhood of the $0$-section of $\cN$ over $K$. The fibers of $\cN$ have dimension $\operatorname{dim}(M)-1$.
\subsection{Identifications} We now choose an open set $U\subset K$ such that $K\setminus U$ is an arbitrarily small neighborhood of the compact set $K\setminus (\Lambda\setminus {\operatorname{Sing}}(X))$. For any singularity $\sigma\in \Lambda$, we denote by $d^s,d^u$ its stable and unstable dimensions. {Since it is hyperbolic, $d^s+d^u=\operatorname{dim} M$.} Let us choose a $C^1$ chart at $\sigma$ which identifies $\sigma$ with $0\in \RR^{d^s+d^u}$, and the local stable and unstable manifolds with $\RR^{d^s}\times \{0\}$ and $\{0\}\times \RR^{d^u}$. There exists two (closed) differentiable balls $B^s\subset \RR^{d^s}\times \{0\}$ and $B^u\subset \{0\}\times \RR^{d^u}$ that are transverse to the linear vector field $u\mapsto DX(0).u$ { on $\RR^{d^s+d^u}\setminus \{0\}$}. For $\varepsilon>0$ small, the vector field $X$ is transverse to the boundary of the $\varepsilon$-scaled neighborhood $B_\sigma=\varepsilon.(B^s\times B^u)$ at any point of $\partial B^s\times \operatorname{Interior}(B^u)$ and of $\operatorname{Interior}(B^s)\times \partial B^u$. Note that for $x\in \partial B^s\times \partial B^u$, the image $\varphi_t(x)$ does not belong to $B_\sigma$ for any small $t\neq 0$. The union of the $B_\sigma${s for all singularities} is a neighborhood of $K\setminus (\Lambda\setminus {\operatorname{Sing}}(X))$ and its complement defines the open set $U$. Thus, the ``Transverse boundary" property holds.
Since there is no fixed point of $\widehat \varphi$ in $\overline U$, one can rescale the time (i.e. consider the new flow $t\mapsto {\widehat\varphi}_{t/C}$ for some large $C>1$) so that any periodic orbit which meets $\overline U$ has period larger than $10$. Then the ``No small period" property follows from the continuity of the flow.
If $\overline \beta>0$ is small enough, for any point $x$ in a neighborhood of $\overline U$ the image of $B(0,\overline \beta)\subset \cN_x$ by the exponential map $\exp_x$ is transverse to the vector field $X$. Consequently, for $\varepsilon>0$ small, if $\beta_0\in (0,\overline\beta)$ and $r_0>0$ are much smaller than $\varepsilon$, for any point $y\in M$ such that $d(x,y)<r_0$ and any $u\in B(0,\beta_0)\subset \cN_y$, there exists a unique $s\in (-\varepsilon,\varepsilon)$ satisfying $\varphi_s(\exp_y(u))$ belongs to $\exp_x(B(0,\overline \beta))$. After rescaling, we thus define the identification
$$\pi_{y,x}(u):=\|X(x)\|^{-1}.\exp_x^{-1}\circ \varphi_s\circ \exp_y(\|X(y)\|.u).$$ Since $X$ is $C^2$, the map $\pi_{y,x}$ is $C^2$ also. By the uniqueness of the parameter $s$, we obtain the relation $\pi_{z,x}\circ \pi_{y,z}=\pi_{y,x}$.
The Local injectivity now follows immediately by choosing $t=s$ as in the definition of the identification $\pi_{y,x}$. Let us consider $x,y\in U$, $t\in [-2,2]$ and $u\in B(0_y, \beta_0)$ such that $y$ and $\varphi_t(y)$ are $r_0$-close to $x$. If $r_0$ has been chosen small enough the ``No small period" property implies that $t$ is small. Then the uniqueness of $s$ in the definition of the identification $\pi$ implies $\pi_x\circ \widehat P^*_t(u)=\pi_x(u)$. This gives the Local invariance.
\subsection{Global invariance}
The following two lemmas follow from the fact that
the vector field $X$ is almost constant in the {$\varkappa\|X(y)\|$-neighborhood of $y$} for $\varkappa>0$ small enough,
\begin{Lemma}\label{l.flow} For any $\rho>0$, there exists $\delta>0$ with the following property.
If $y\in \Lambda\setminus {\operatorname{Sing}}(X)$ and if $z=\exp_y(u)$ for some $u\in B(0_y,\delta\|X(y)\|)\subset \cN_y$, then for any $s\in (0,1)$, there exists a unique $s'\in (0,2)$ such that $\varphi_{s'}(z)=\exp_{\varphi_s(y)}\circ P_s(u)$. Moreover $\max(s/s',s'/s)<1+\rho/3$. \end{Lemma}
\begin{Lemma}\label{l.project} For any $\delta,t_0>0$, there exists $\beta>0$ with the following property.
For any $y\in \Lambda\setminus {\operatorname{Sing}}(X)$ and $z\in M$ such that
$d(z,y)\leq 10\beta\|X(y)\|$, there exists a unique $t\in (-t_0,t_0)$
such that $\varphi_t(z)$ belongs to the image by $\exp_y$ of $B(0_y,\delta{\|X(y)\|})\subset \cN_y$. \end{Lemma}
We can now check the last item of the Definition~\ref{d.compatible} for the local fibered flow $\widehat P^*$. Let us fix $\delta,\rho>0$ small: by reducing $\delta$, one can assume that Lemma~\ref{l.flow} above holds. One then chooses $t_0>0$ small such that $d(x,\varphi_t(x))<\delta/3$ for any $x\in M$ and $t\in [-t_0,t_0]$. One fixes $r>0$ and $\beta\in (0,\delta)$ small such that: \begin{itemize} \item[a--] for any $y,y'$ and $u\in \cN_y$, $u'\in \cN_{y'}$ as in the statement of Global invariance, then
$\varphi_t\exp_y({\|X(y)\|}u)=\exp_{y'}({\|X(y')\|}u')$ for some $t\in [-t_0,t_0]$ (arguing as in the proof of Local injectivity), \item[b--] $\delta,\beta$ satisfy the Lemma~\ref{l.project},
\item[c--] for any $x\in M$, {$d(x,\exp_x(w))<\delta/3$ for any $w\in T_xM$ satisfying $\|w\|\le 10\beta\|X(x)\|$.} \end{itemize}
Consider any $y,y'$ , $u\in \cN_y$, $u'\in \cN_{y'}$ and $I,I'$ as in the statement of the Global invariance. Lemma~\ref{l.flow} can be applied to $y$ and $z=\exp_y(\|X(y)\|.u)$: for each $s\in (0,1)$, one defines $\theta_0(s)\in (0,2)$ to be equal to the $s'$ given by Lemma~\ref{l.flow}. The map $\theta_0$ is $(1+\rho/3)$-bi-Lipschitz and increasing. Moreover $\theta_0(0)=0$. Since $\| \widehat P^*_s(u)\|\leq \beta<\delta$ for any $s\in I$, one has $P_s(\|X(y)\|.u)\in B(0,\delta\|X(\varphi_s(x))\|)$ and one can apply inductively Lemma~\ref{l.flow} to the points $\varphi_s(y)$ and $P_s(\|X(y)\|.u)$, which defines $\theta_0$ on $I$. One gets:
$$\forall s\in I,~~~\exp_{\varphi_s(y)}\circ P_s(\|X(y)\|.u)=\varphi_{\theta_0(s)}(z).$$
The same argument for $y'$ and $z'=\exp_{y'}(\|X(y')\|.u')$ defines a map $\theta_0'\colon I'\to \RR$.
Let us now consider $s\in I\cap \theta_0^{-1}\circ \theta_0'(I')$.
{By (a),} since $\pi_y(u')=u$, there exists $t\in [-t_0,t_0]$ such that $\varphi_t(z)=z'$. By the definition of $\theta_0$, the points $\varphi_s(y)$ and $\varphi_{\theta_0(s)}(z)$ are $\delta/3$-close. Since $|t|\leq t_0$, the points $\varphi_{\theta_0(s)}(z)$ and $\varphi_{\theta_0(s)+t}(z)= \varphi_{\theta_0(s)}(z')$ are $\delta/3$-close. Since $\theta_0(s)\in \theta'_0(I')$ and using (c) above, the points $\varphi_{\theta_0(s)}(z')$ and $\varphi_{(\theta_0')^{-1}\circ\theta_0(s)}(y')$ are $\delta/3$-close. Consequently, the points $\varphi_s(y)$ and $\varphi_{\theta(s)}(y')$ are $\delta$-close, where $\theta=(\theta_0')^{-1}\circ\theta_0$. Note that $\theta$ is bi-Lipschitz for the constant $(1+\rho/3)^2<1+\rho$ and satisfies $\theta(0)=0$. This proves the first part of the Global invariance.
Finally we take $v\in \cN_y$, $v'=\pi_{y'}(v)$ in $\cN_{y'}$ such that
$\|\widehat P^*_{s}(v)\|<\beta$ for each $s\in I\cap \theta^{-1}(I')$. Set $\zeta=\exp_y(\|X(y)\|.v)$. {By Lemma~\ref{l.project},} there exists a unique $t'\in (-t_0,t_0)$ such that {$\varphi_{t'}(\zeta)=\exp_{y'}(w')$ for some $w'\in B(0,\delta)\subset \cN_{y'}$.} By definition of $\pi_{y,y'}$, it coincides with
$\exp_{y'}(\|X(y')\|.v')$.
Arguing as above, there exists $\theta_1$ such that
$\exp_{\varphi_s(y)}\circ P_s(\|X(y)\|.v)=\varphi_{\theta_1(s)}(\zeta)$ and $\theta_1(0)=0$.
$d(\varphi_{\theta_1(s)}(\zeta), \varphi_s(y))$ is smaller than $\|P_s(\|X(y)\|.v)\|$ and $\beta\|X(\varphi_s(y))\|$. Similarly, $$d(\varphi_{\theta_0(s)}(z),\varphi_s(y))\leq
\beta\|X(\varphi_s(y))\|.$$
$$d(\varphi_{\theta'_0(s')}(z'),\varphi_{s'}(y'))\leq
\beta\|X(\varphi_{s'}(y'))\|.$$
Since $z'=\varphi_t(z)$, to each $s\in I\cap \theta^{-1}(I')$
one associates $s'$ such that $\theta_0(s)=\theta'_0(s')+t$, one gets
$$d(\varphi_{\theta_0(s)}(z),\varphi_{s'}(y'))\leq
\beta\|X(\varphi_{s'}(y'))\|.$$
If $\beta$ has been chosen small enough, one deduces that $\beta\|X(\varphi_{s}(y))\|$ and $\beta\|X(\varphi_{\theta_0(s)}(z))\|$
are smaller than $2\beta \|X(\varphi_{s'}(y'))\|$. Hence, \begin{equation}\label{e.bound-projection} d(\varphi_{\theta_1(s)}(\zeta),\varphi_{s'}(y'))\leq
5\beta\|X(\varphi_{s'}(y'))\|.
\end{equation} By Lemma~\ref{l.project},
one can find $\sigma(s)\in (-t_0,t_0)$
such that $\varphi_{\theta_1(s)+\sigma(s)}(\zeta)$ belongs to the image by $\exp_{\varphi_{s'}(y')}$ of
$B(0,\delta)\subset \cN_{\varphi_{s'}(y')}$. In the case $s'=0$, since $t$ and $\sigma$ are small,
{the definition of $\pi_y$ gives} $\varphi_{\theta_1(s)+\sigma(s)}(\zeta)=\exp_{y'}(\|X(y')\|.v')$. Applying Lemma~\ref{l.flow} {inductively}, one has that $\varphi_{\theta_1(s)+\sigma(s)}(\zeta)=\exp_{\varphi_{s'}(y')}(P_{s'}(\|X(y')\|.v'))$ for any $s\in I\cap \theta^{-1}(I')$. By~\eqref{e.bound-projection} and Lemma~\ref{l.project}, one deduces $\|P_{s'}(\|X(y')\|.v')\|\leq \delta\|X(\varphi_{s'}(y'))\|$, that is $\|\widehat P^*(v')\|\leq \delta$ as wanted.
We have obtained
$$\varphi_{\sigma(s)}\circ \exp_{\varphi_s(y)}\circ P_s(\|X(y)\|.v)=\exp_{\varphi_{s'}(y')}(P_{s'}(\|X(y')\|.v')).$$ When $\varphi_s(y)$ and $\varphi_{s'}(y')$ are in a neighborhood of $\overline U$, one deduces by definition of identification,
$$\pi_{\varphi_{s}(y)}\circ P_{s'}(\|X(y')\|.v')= P_s(\|X(y)\|.v).$$
By definition of $\theta$ and $s'$, one notices that $\theta(s)$ and $s'$ are close. Hence $$\pi_{\varphi_{s'}(y')}\circ P_{\theta(s)}(\|X(y')\|.v')=P_{s'}(\|X(y')\|.v').$$ This gives $\pi_{\varphi_s(y)}\circ \widehat P^*_{\theta(s)}(v')=\widehat P^*_s(v)$ and completes the proof of the Global invariance.
\subsection{Dominated splitting} We have assumed that the linear Poincar\'e flow $\psi$ on $\Lambda\setminus {\operatorname{Sing}}(X)$
admits a dominated splitting, denoted by $\cN M|_{\Lambda\setminus {\operatorname{Sing}}(X)}= {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$. It extends as a dominated splitting $\cN=\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus \widehat {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ over $K$ for the extended rescaled linear Poincar\'e flow $\widehat \psi^*$ (hence for $\widehat P^*$). Indeed: \begin{itemize} \item[--] dominated splittings are invariant under rescaling, hence ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is a dominated splitting for $\psi^*$ (and $\widehat \psi^*$) over $\Lambda\setminus {\operatorname{Sing}}(X)$; \item[--] for continuous linear cocycles, dominated splittings extend to the closure. \end{itemize}
The existence of a dominated splitting for the tangent flow on $\Lambda$ can be restated as the uniform contraction of the bundle $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$.
\begin{Proposition}\label{p.mixed-domination} Under the previous assumptions, these two properties are equivalent: \begin{itemize} \item[I-- ] There exists a dominated splitting $T_\Lambda M=E\oplus F$ for the tangent flow $D\varphi$ with $\operatorname{dim}(E)=\operatorname{dim}(\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W})$ and $X\subset F$; \item[II-- ] $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by $\widehat P^*$ (and $\widehat \psi^*$) over $K$. \end{itemize} \end{Proposition} \begin{proof} Let us prove $I\Rightarrow II$. From Property I, we have a dominated splitting between $E$ and $\RR X$, hence there exists $C>0$ and $\lambda\in (0,1)$ such that for any $t>0$ and any $x\in \Lambda$,
$$\|D{\varphi_t}|{E}(x)\|\leq C\lambda^t \|D{\varphi_t}|{\RR X}(x)\|=
C\lambda^t\frac{\|X(\varphi_t(x))\|}{\|X(x)\|}.$$ Since the angle between $E$ and $X$ is uniformly bounded away from zero, the projection of {$E(z)$ on $X(z)^\perp$} and its inverse are uniformly bounded, hence the ratio between
$\|D{\varphi_t}|{E}(x)\|$ and $\|{\psi_t}|{\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(x)\|$ is bounded. This implies that there exists $C'$ such that:
$$\|\widehat {\psi_t^*}|{\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(x)\|=\frac{\|X(x)\|}{\|X(\varphi_t(x))\|}\|{\psi_t}|{\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}}(x)\|\leq C'\lambda^t.$$ The bundle $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is thus uniformly contracted over $\Lambda\setminus {\operatorname{Sing}}(X)$, hence over {$K$} also. This gives Property {II}. The implication $II\Rightarrow I$ is a restatement of \cite[Lemma 2.13]{GY}. \end{proof}
\subsection{Uniform contraction of $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ near the singular set} {From the assumptions of Theorem~A',} any singularity $\sigma\in \Lambda$ has a dominated splitting $T_\sigma M=E^{ss}\oplus F$, where $E^{ss}$ is uniformly contracted, has the same dimension as ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$, and the associated invariant manifold $W^{ss}(\sigma)$ intersects $\Lambda$ only at $\sigma$. Let $V$ be a small open neighborhood of the {compact set $K\setminus (\Lambda\setminus {\operatorname{Sing}}(X))$}.
\begin{Lemma}\label{l.contraction-V} The bundle $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted on $V$ by $\widehat P^*$. \end{Lemma} \begin{proof} We use the notations and the discussions of section~\ref{ss.blow-up}. For each singularity $\sigma\in \Lambda$, let $\Delta_\sigma$ be the set of unit vectors $u\in {E^{cu}(\sigma)}\subset T_\sigma M$. It is compact and $\widehat \varphi$-invariant. The splitting at $\sigma$ induces a dominated splitting $E^{ss}\oplus {E^{cu}}$ of the extended bundle $\widehat {TM}$ over $\Delta_\sigma$.
For regular orbits near $\Delta_\sigma$, the lines $\RR X$ are close to ${E^{cu}(\sigma)}$, hence have a uniform angle with $E^{ss}$. Consequently, the dominated splitting $E^{ss}\oplus { E^{cu}}$ over $\Delta_\sigma$ projects on the extended normal bundle $\widehat \cN$ over $\Delta_\sigma$ as a splitting ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}'\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}'$ where $\operatorname{dim}({\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}')=\operatorname{dim}(E^{ss})=1$, which is dominated for the linear Poincar\'e flow $\psi$ (and hence for $\psi^*$ also).
The dominated splitting $E^{ss}\oplus { E^{cu}}$ induces a dominated splitting between $E^{ss}$ and the extended line field $\RR \widehat{X_1}$. The proof of Proposition~\ref{p.mixed-domination} above shows that the extended rescaled linear Poincar\'e flow $\widehat \psi^*$ contracts ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}'$ (above $\Delta_\sigma$).
On $K\cap\Delta_\sigma$, the dominated splittings $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus \widehat {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ and ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}'\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}'$ have the same dimensions, hence coincide. Moreover since $W^{ss}(\sigma)\cap \Lambda=\{\sigma\}$, we have $p^{-1}(\sigma)\cap K=\Delta_\sigma\cap K$, {where $p$ denotes the projection $K\to \Lambda$}. This shows that $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by $\widehat P^*$ over $p^{-1}({\operatorname{Sing}}(X)\cap \Lambda)\cap K$, hence on any small neighborhood $V$. \end{proof}
\subsection{Periodic orbits and normally expanded invariant tori} We now check that the two first conclusions of Theorem~\ref{Thm:1Dcontracting} do not hold.
\begin{Lemma}
For any periodic orbit $\cO$ in $K$, the bundle $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}|_{\cO}$ is uniformly contracted. \end{Lemma} \begin{proof} By Lemma~\ref{l.contraction-V}, the bundle $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is contracted over periodic orbits contained in $V$. The other periodic orbits are lifts (by the projection $p\colon \widehat M\to M$) of orbits in $\Lambda\setminus {\operatorname{Sing}}(X)$. { From the assumptions of Theorem~A',} the Lyapunov exponents along ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ are all negative for periodic orbits in $\Lambda\setminus {\operatorname{Sing}}(X)$. This concludes. \end{proof}
\begin{Lemma} There do not exist any normally expanded irrational torus $\cT$ for $(K,\widehat \varphi)$. \end{Lemma} \begin{proof}
We now { use the fact that} $M$ is three-dimensional. Let us assume by contradiction that there exists a normally expanded irrational torus $\cT$ for $(K,\widehat \varphi)$. By Lemma~\ref{l.torus}, the bundle $\widehat {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is uniformly expanded over $\cT$. Since it does not contain fixed point of $\widehat \varphi$, it projects by $p$ in $\Lambda\setminus {\operatorname{Sing}}(X)$. By construction, the dynamics of $\widehat \psi^*$ over $\cT$ and $\psi^*$ over $p(\cT)$ are the same, hence ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is uniformly expanded over $p(\cT)$ by $\psi^*$. Since $p(\cT)\cap {\operatorname{Sing}}(X)=\emptyset$, one deduces that ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is uniformly expanded over $p(\cT)$ by $\psi$. Proposition~\ref{p.mixed-domination} implies that the tangent flow over $p(\cT)$ has a dominated splitting $TM|_{p(\cT)}=E^c\oplus E^{uu}$, with $\operatorname{dim}(E^{uu})=1$.
As a partially hyperbolic set each $x\in p(\cT)$ has a strong unstable manifold $W^{uu}(x)$. Note that $W^{uu}(x)\cap p(\cT)=\{x\}$ (since the dynamics is topologically equivalent to an irrational flow). Then~\cite{BC-whitney} implies that $p(\cT)$ is contained in a two-dimensional submanifold $\Sigma$ transverse to $E^{uu}$ and locally invariant by $\varphi_1$: there exists a neighborhood $U$ of $p(\cT)$ in $\Sigma$ such that $\varphi_1(U)\subset \Sigma$. Since $p(\cT)$ is homeomorphic to ${\mathbb T}^2$, it is open and closed in $\Sigma$, hence coincides with $\Sigma$. This shows that $p(\cT)$ is $C^1$-diffeomorphic to ${\mathbb T}^2$, normally expanded and carries a dynamics topologically equivalent to an irrational flow. This contradicts the assumptions of Theorem~A'. Consequently there do not exist any normally expanded irrational torus $\cT$ for $(K,\widehat \varphi)$. \end{proof}
\subsection{Proof of the domination of the tangent flow} Under the assumptions of Theorem~A', the fibers {of ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ are one-dimensional.} Note that one can choose $U$ after $V$ such that $U\cup V=K$. We have thus shown that $\widehat P^*$ over $K$ satisfies the setting of Theorem~\ref{Thm:1Dcontracting}, that $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted over the periodic orbits and that there is no normally expanded irrational torus. One deduces that $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ is uniformly contracted by $\widehat P^*$ above $K$. Proposition~\ref{p.mixed-domination} then implies that there exists a dominated splitting $T_\Lambda M=E\oplus F$ such that $E$ is one-dimensional (as for $\widehat {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$) and $X(x)\subset F(x)$ for any $x\in \Lambda$.
This shows one side of Theorem A': a domination of the linear Poincar\'e flow implies a domination $TM|_\Lambda=E\oplus F$ of the tangent flow with $\operatorname{dim}(E)=1$.
\subsection{Proof of the domination of the linear Poincar\'e flow} The other direction of Theorem A' is easier.
\begin{Proposition}\label{p.oneside} Under the assumptions of Theorem A', if there exists a dominated splitting
$TM|_{\Lambda}=E\oplus F$ with $\operatorname{dim}(E)=1$ for the tangent flow $D\varphi$, then: \begin{itemize} \item[--] $X(x)\subset F(x)$ for any $x\in \Lambda$, \item[--] the linear Poincar\'e flow on $\Lambda\setminus {\rm Sing}(X)$ is dominated, \item[--] $E$ is uniformly contracted. \end{itemize} \end{Proposition} \begin{proof} We first prove that $X(x)\in F(x)$ for any $x\in \Lambda$. Otherwise, using the domination, there exists a non-empty invariant compact subset $\Lambda'$ such that $X(x)\in E(x)$ for any $x\in \Lambda'$ and there exists a regular orbit in $\Lambda$ which accumulates in the past on $\Lambda'$.
Let $\mu$ be an ergodic measure on $\Lambda'$. If ${\rm supp}(\mu)$ is not a singularity, the domination implies that all the Lyapunov exponents along ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ are positive, hence the measure is hyperbolic and is supported on a periodic orbit that is a source. This contradicts the assumptions of Theorem A'.
If $\mu$ is supported on a singularity $\sigma$, it is by construction limit of regular points $x_n\in \Lambda$ such that $\RR X(x_n)$ converges towards $E(\sigma)$. This implies that one of the separatrices of $W^{ss}(\sigma)$ is contained in $\Lambda$, a contradiction. The first item follows.
Since $X\subset F$ and the angle between $E$ and $F$ is bounded away from zero, the projection of $T_xM=E\oplus F$ to $\cN_x$ along $\RR X(x)$ defines a splitting ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$ into one-dimensional subspaces, for each $x\in \Lambda\setminus {\rm Sing}(X)$. This splitting is continuous and invariant under $\psi$.
Let us consider two non-zero vectors $u\in {\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x)$ and $v\in {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$ and $t>0$. We have
$$\|\psi_{-t}.v\|\leq \|D\varphi_{-t}.v\|.$$ Since $E$ and ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}$ are uniformly transverse to $\RR X$, there is a constant $C>0$ such that
$$\|\psi_{-t}.u\|\geq C^{-1}\|D\varphi_{-t}.u\|.$$ The domination $E\oplus F$ thus implies the domination ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$.
One can then apply Lemma~\ref{l.domina-contraction}, whose proof is contained in the proof of~\cite[Lemma 3.6]{BGY}. \begin{Lemma}\label{l.domina-contraction} Consider a $C^1$ vector field and an invariant compact set $\Lambda$ endowed with a dominated splitting $T_\Lambda M=E\oplus F$ such that $E$ is one-dimensional, $X(x)\in F(x)$ for any $x\in \Lambda$ and $E(\sigma)$ is contracted for each $\sigma\in \Lambda\cap {\operatorname{Sing}}(X)$. Then $E$ is uniformly contracted. \end{Lemma} The bundle $E$ is thus uniformly contracted. This ends the proof of Proposition~\ref{p.oneside} \end{proof} The proof of Theorem A' is now complete.\qed
\section{$C^1$-generic three-dimensional vector fields}\label{s.generic}
In this section we prove Theorem~\ref{Thm-domination}, Corollary~\ref{c.main} and the Main Theorem.
\subsection{Chain-recurrence and genericity}\label{ss.chain}
For $\varepsilon>0$, we84 say that a sequence $x_0,\dots,x_n$ is an $\varepsilon$-\emph{pseudo-orbit} if for each $i=0,\dots,n-1$ there exists $t\geq 1$ such that $d(\varphi_t(x_i),x_{i+1})<\varepsilon$. A non-empty invariant compact set $K$ for a flow $\varphi$ is \emph{chain-transitive} if for any $x,y\in K$ (possibly equal) and any $\varepsilon>0$, there exists a $\varepsilon$-pseudo-orbit $x_0=x,\dots,x_n=y$ with $n\geq 1$.
A \emph{chain-recurrence class} of $\varphi$ is a chain-transitive set which is maximal for the inclusion. The \emph{chain-recurrent set} of $\varphi$ is the union of the chain-recurrence classes. See~\cite{Con}.
We then recall known results on generic vector fields. We say that a property is satisfied by generic vector fields in $\cX^r(M)$ if it holds on a dense G$_\delta$ subset of of $\cX^r(M)$.
\begin{Theorem}\label{Lem:generic} For any manifold $M$, any $r\geq 1$ and any generic vector field $X$ in $\cX^r(M)$, \begin{itemize} \item[--] each {periodic orbit or singularity} is hyperbolic and has simple (maybe complex) eigenvalues,
\item[--] there do not exist any invariant subset $\cT$ which is diffeomorphic to ${\mathbb T}^2$, normally expanded and supports a dynamics topologically equivalent to an irrational flow. \end{itemize} \end{Theorem} \begin{proof} The first part is similar to the proof of the Kupka-Smale property~\cite{kupka,smale}.
The second part can be obtained from a Baire argument by showing that if $X\in \cX^r(M)$ preserves an invariant subset $\cT$ which is diffeomorphic to ${\mathbb T}^2$, normally expanded, and which supports a dynamics topologically equivalent to an irrational flow, then there exists a neighborhood $U$ of $\cT$ and an open set of vector fields $X'$ $C^r$-close to $X$ whose the maximal invariant set in $U$ is $\cT$ and whose dynamics on $\cT$ has a hyperbolic periodic orbit. In order to prove this perturbative statement, one first notice that all the Lyapunov exponents along $\cT$ for $X$ vanish and
$\cT$ is $r$-normally hyperbolic. By~\cite[Theorem 4.1]{hirsch-pugh-shub}, the set $\cT$ is $C^r$-diffeomorphic to ${\mathbb T}^2$ and any $C^r$-perturbation of $X|_{\cT}$ extends to $M$. Since Morse-Smale vector fields are dense in $\cX^r({\mathbb T}^2)$ by~\cite{Pe}, the result follows. \end{proof}
Here are some consequence of the connecting lemma for pseudo-orbits. \begin{Theorem}\label{t.connecting} If $X$ is generic in $\cX^1(M)$, then for any non-trivial chain-recurrence class $C$ containing a hyperbolic singularity $\sigma$ whose unstable space is one-dimensional, $C$ is Lyapunov stable and every separatrix of $W^u(\sigma)$ is dense in $C$.
In particular if $\operatorname{dim}(M)=3$, a chain-transitive set which strictly contains a singularity is a chain-recurrence class. \end{Theorem} \begin{proof} The first part has been shown~\cite[Lemmas 3.14 and 3.19]{GY}: it is a consequence of the version for flows of the connecting lemma proved in~\cite{BC}.
If $\operatorname{dim}(M)=3$, and if $\Lambda$ is a non-trivial chain-transitive set containing a singularity $\sigma$, then $\sigma$ cannot be a sink, nor a source. Let us assume that $\sigma$ has a one-dimensional unstable space (otherwise it has one-dimensional stable space and the proof is similar). From the first part, $\Lambda$ should contain one of the separatrix of $W^u(\sigma)$. {Since every separatrix of $W^u(\sigma)$ is dense in $C(\sigma)$,} $\Lambda$ coincides with the chain-recurrence class of $\sigma$. \end{proof}
In order to obtain the singular hyperbolicity on a chain-transitive set, {it suffices to} check that the tangent flow has a dominated splitting. In the non-singular case, this is proved in~\cite[Lemma 3.1]{BGY} from~\cite{ARH}\footnote{Note that it could also be obtained from Theorem~A' with a Baire argument.}.
\begin{Theorem}\label{t.hyperbolic} If $\operatorname{dim}(M)=3$ and if $X$ is generic in $\cX^1(M)$, then for any chain-transitive set $\Lambda$ such that $\Lambda\cap {\operatorname{Sing}}(X)=\emptyset$, if the linear Poincar\'e flow on $\Lambda$ has a dominated splitting, then $\Lambda$ is hyperbolic. \end{Theorem}
In the singular case, this is \cite[Theorem C]{GY}.
\begin{Theorem}\label{t.GY} If $\operatorname{dim}(M)=3$ and if $X$ is generic in $\cX^1(M)$, then any non-trivial chain-recurrence class whose tangent flow has a dominated splitting and which contains a singularity is singular hyperbolic. \end{Theorem}
Let us summarize some properties satisfied by $C^1$ generic vector fields that are away from homoclinic tangencies.
\begin{Theorem}\label{Thm:Lorenz-like} Consider a generic $X\in \cX^1(M)$, a non-trivial chain-recurrent class $C$ and neighborhoods $\cU$, $U$ of $X$, $C$ such that for any $Y\in \cU$, the maximal invariant set of $Y$ in $U$ does not contain a homoclinic tangency of a hyperbolic periodic (regular) orbit. Then, there exists a dominated splitting on $C\setminus{\rm Sing}(X)$ for the linear Poincar\'e flow. \end{Theorem} \begin{proof} This is a variation of the arguments of~\cite{GY}. We first state a general genericity result.
\begin{Lemma} For any generic $X\in \cX^1(M)$, any non-trivial chain-transitive set $\Lambda$ is the limit for the Hausdorff topology of a sequence of hyperbolic periodic saddles. \end{Lemma} \begin{proof} Any non-trivial chain-transitive set is the limit for the Hausdorff topology of a sequence of hyperbolic periodic orbits $\gamma_n$. This has been shown in~\cite{crovisier-approximation} for diffeomorphisms, but the proof is the same for vector fields. If there exists infinitely many $\gamma_n$ that are saddles, the lemma is proved. One can thus deal with the case all the $\gamma_n$ are sinks (the case of sources is similar).
By~\cite[Lemma 2.23]{GY}, the sinks $\gamma_n$ are not uniformly contracting at the period (see the precise definition there). By~\cite[Lemma 2.6]{GY}, by an arbitrarily small $C^1$-perturbation, one can turn the $\gamma_n$, $n$ large, to saddles. Then by a Baire argument, one concludes that $\Lambda$ is the limit of a sequence of hyperbolic periodic saddles. \end{proof}
\cite[Corollary 2.10]{GY} asserts that if $\Lambda$ is the limit of a sequence of hyperbolic saddles for the Hausdorff topology, then there exists a dominated splitting for the linear Poincar\'e flow on $\Lambda\setminus {\operatorname{Sing}}(X)$, assuming $X$ is not accumulated by vector fields in $\cX^1(M)$ with a homoclinic tangency. The same proof can be localized, assuming that $\Lambda$ is a chain-recurrence class and that there is no homoclinic tangency in a neighborhood of $\Lambda$ for vector fields $C^1$-close to $X$. \end{proof}
We also state the result proved in~\cite{CY2} which asserts that singular hyperbolicity implies robust transitivity for generic vector fields in dimension $3$ (and improve a previous result by Morales and Pacifico~\cite{MP}).
\begin{Theorem}\label{t.robust-transitivity} If $\operatorname{dim}(M)=3$ and if $X\in \cX^1(M)$ is generic, any singular hyperbolic chain-recurrence class is robustly transitive. \end{Theorem}
\subsection{Singularities of Lyapunov stable chain-recurrence classes}
The domination of the linear Poincar\'e flow constrains the local dynamics at singularities.
\begin{Proposition}\label{p:Lorenz-like2} Assume $\operatorname{dim}(M)=3$. Consider a generic $X\in \cX^1(M)$ and a non-trivial chain-recurrence class $C$ containing a singularity { with stable dimension equal to $2$} such that there exists a dominated splitting on $C\setminus{\rm Sing}(X)$ for the linear Poincar\'e flow.
Then, any singularity $\sigma$ in $C$ has { stable dimension equal to $2$}, real simple eigenvalues and satisfies $W^{ss}(\sigma)\cap C=\{\sigma\}$. \end{Proposition}
Note that for any singularity $\sigma$ { with stable dimension equal to $2$} and real simple eigenvalues, any point $x\in W^u(\sigma)$ has a well defined two-dimensional center unstable plane $E^{cu}(x)$ (it is the unique plane at $x$ which converge to the center-unstable plane of $\sigma$ by backward iterations). We will use the next lemma.
\begin{Lemma}\label{l.non-domination} Assume $\operatorname{dim}(M)=3$. Consider any $X\in \cX^1(M)$, any singularity { with stable dimension equal to $2$} and real simple eigenvalues, any $x\in W^u_{loc}(\sigma)\setminus \{\sigma\}$ {satisfying $\omega(x)\cap W^{ss}(\sigma)\setminus\{\sigma\}\neq\emptyset$.} There exists $\alpha>0$ small such that for any neighborhood $\cU$ of $X$ in $\cX^1(M)$ and any $\varepsilon>0$, there is $Y\in \cU$ satisfying: \begin{itemize} \item[--] $X=Y$ on $\{\varphi_{-t}(x), t\geq 0\}\cup \{\sigma\}$, \item[--] there exists $s>0$ such that the flow $\varphi^Y$ associated to $Y$ satisfies $$d(\varphi^Y_s(x),x)<\varepsilon \text{ and } d(D\varphi^Y_s(x).E^{cu}(x), E^{cu}(x))>\alpha.$$ \end{itemize} \end{Lemma} \begin{proof} Up to replace $X$ by a vector field close, one can assume that: \begin{itemize} \item[--] $x\in (W^u(\sigma)\cap W^{ss}(\sigma))\setminus \{\sigma\}$ (using the connecting lemma~\cite{hayashi}). \item[--] There exists a chart on a neighborhood of $\sigma$ which linearizes $X$: in particular, at any point $z$ in the chart one defines the planes $H_z, V_z\subset T_zM$ which are parallel to $E^{ss}(\sigma)\oplus E^u(\sigma)$ and to $E^{cu}({\sigma})$ respectively; the flow along pieces of orbits in the chart preserves the bundle $H$. Moreover $E^{cu}(z)=V_z$ at points $z$ in the local unstable manifold of $\sigma$. \item[--] $E^{cu}(x)$ is not tangent to $T_xW^s(\sigma)$. \end{itemize}
Let $\alpha>0$ smaller than $d(H_\sigma,V_\sigma)$. { Note that if $E\subset TM$ is a plane at a point $z$ in the orbit of $x$ such that $E\neq E^{cu}(z)$ and $X(z)\in E$, then $D\varphi_{t}. E^{cu}(z)$ converge to the bundle $H$ when $t$ goes to $+\infty$. This is a direct consequence of the dominated splitting between $E^s$ and $E^u$.}
After a small perturbation which preserves $\{\sigma\}\cup \{\varphi_t(x),t\in \RR\}$ and $\{D\varphi_{-t}.E^{cu}(x),t>0\}$ and whose support is disjoint from a small neighborhood of $\sigma$, one can thus assume that there exist $t_1>0$ arbitrarily large such that for any $t>t_1$ $$ D\varphi_{t}. E^{cu}(x)=H_{\varphi_{t}(x)}.$$ { Indeed after a small perturbation in a small neighborhood of $\varphi_1(x)$, and which does not change the orbit of $x$, one can assume that the new center-unstable space $E$ at $\varphi_2(x)$ does not coincide with the initial one $E^{cu}_{\varphi_2(x)}$. The property mentioned above then implies that $ D\varphi_{t}. E^{cu}(x)$ converges $H_{\sigma}$ as $t$ goes to $+\infty$. A new perturbation at a large iterate of $x$ then guaranties that $ D\varphi_{t_1}. E^{cu}(x)=H_{\varphi_{t_1}(x)}.$ }
After a small perturbation near $\varphi_{t_1}(x)$, one gets a forward iterate $\varphi_{t_2}(x)$, $t_2>t_1$, arbitrarily close to $x$ such that $D\varphi_{t_2}.E^{cu}(x)=H_{\varphi_{t_2}(x)}$. This implies that for the new vector field $E^{cu}(x)\subset T_xM$ has a large forward iterate arbitrarily close to $H_x\subset T_xM$ as required. \end{proof}
It has the following consequence.
\begin{Corollary}\label{c.non-domination} Assume $\operatorname{dim}(M)=3$. For any generic $X\in \cX^1(M)$ and any chain-recurrence class $C$ containing a singularity $\sigma$ { with stable dimension equal to $2$ and} real simple eigenvalues such that $W^{ss}(\sigma)\cap C\neq \{\sigma\}$, there exists $x\in W^u_{{loc}}(\sigma)\cap C$ and $t_n\to +\infty$ such that \begin{equation}\label{e.non-domination} \varphi_{t_n}(x)\rightarrow x \text{ and } D\varphi_{t_n}(x).E^{cu}(x)\not \rightarrow E^{cu}(x). \end{equation} \end{Corollary} \begin{proof} For $\varepsilon, \delta, \alpha >0$ let us consider the open property: \begin{description} \item[$P(\varepsilon,\delta,\alpha)$:] $\exists x\in W^u_\delta(\sigma),\exists\;s>0,\; d(\varphi_s(x),x)<\varepsilon \text{ and } d(D\varphi_s(x).E^{cu}(x), E^{cu}(x))>\alpha$. \end{description} Consider $X$ is generic and a singularity $\sigma$ { with stable dimension equal to $2$ and} real simple eigenvalues. Lemma~\ref{l.non-domination} gives $\alpha>0$. By genericity, for any integers $N_1,N_2$, one can require that if $P(1/N_1, 1/N_2,\alpha)$ holds for an arbitrarily small perturbation of $X$ and the continuation of $\sigma$, then it holds also for $X, \sigma$.
If $W^{ss}(\sigma)\cap C\neq \{\sigma\}$, then Lemma~\ref{l.non-domination} shows that $P(1/N_1,1/N_2,\alpha)$ holds for any $N_1,N_2$ by small $C^1$-perturbation. Hence $X,\sigma$ satisfies $P(1/N_1,1/N_2,\alpha)$ for any $N_1,N_2$. This gives $x\in W^u(\sigma)\cap C$ and $t_n\to +\infty$ such that~\eqref{e.non-domination} holds. \end{proof}
\begin{proof}[Proof of Proposition~\ref{p:Lorenz-like2}] Since $X$ is generic, one can assume that any singularity is hyperbolic. Since $C$ contains a singularity { with stable dimension equal to $2$}, by Theorem~\ref{t.connecting} it is Lyapunov stable, in particular it contains the unstable manifolds of its singularities. Let $\cN={\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}\oplus {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ be the dominated splitting for the linear Poincar\'e flow on $C\setminus {\operatorname{Sing}}(X)$.
Any singularity $\sigma$ { with stable dimension equal to $2$} has real eigenvalues: indeed, by iterating backwards the dominated splitting of the linear Poincar\'e flow along an unstable orbit of $\sigma$, one deduces that the two stable eigenvalues have different moduli. If one assumes by contradiction that $W^{ss}(\sigma)\cap C(\sigma)\neq \{\sigma\}$, then there exists $x\in W^u(\sigma)\cap C$ and $t_n\to +\infty$ such that~\eqref{e.non-domination} holds. We have ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)=E^{cu}(x)\cap \cN_x$: indeed, for any two planes $E_1,E_2\in T_xM$ containing $X(x)$ and different from $E^{cu}(x)$, the backward iterates converge to $E^{ss}(\sigma)\oplus E^u(\sigma)$, and get arbitrarily close; hence for any two lines ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}_1,{\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}_2\subset \cN_x$ different from $E^{cu}(x)\cap \cN_x$, the backward iterates under the linear Poincar\'e flow $\psi$ get arbitrarily close and this property characterizes the space ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)$. Moreover ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}$ is continuous at non-singular points of $C$ (by uniqueness of the dominated splitting). But this contradicts~\eqref{e.non-domination} {as in Corollary~\ref{c.non-domination}} which can be restated as: $$\varphi_{t_n}(x)\rightarrow x \text{ and } \psi_{t_n}(x).{\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x)\not \rightarrow {\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x).$$ We thus have $W^{ss}(\sigma')\cap C(\sigma)= \{\sigma\}$.
Let us assume now by contradiction that $C$ contains a singularity { with stable dimension equal to $1$}. One can apply the previous discussions to $-X$: the class contains the stable manifold of its singularities. This contradicts the fact that $W^{ss}(\sigma)\cap C= \{\sigma\}$. Consequently any singularity in $C$ has { stable dimension equal to $2$}. This ends the proof. \end{proof}
\subsection{Dominated splitting on singular classes}\label{Sec:dom-spl-sing-class} Now we can prove Theorem~\ref{Thm-domination}. Let us consider $X$ in the residual set of vector fields satisfying Theorems~\ref{Lem:generic}, \ref{t.connecting}, \ref{t.hyperbolic}, \ref{Thm:Lorenz-like}, Proposition~\ref{p:Lorenz-like2} and the following property:
\emph{If the chain-recurrence class $C(\sigma)$ of a hyperbolic singularity has no dominated splitting for the tangent flow, then for the vector fields $Y$ that are $C^1$-close to $X$, the tangent flow on the chain-recurrence class $C(\sigma_Y)$ of the continuation of $X$ has no dominated splitting.}
This property can be deduced from the semi-continuity of $Y\mapsto C(\sigma_Y)$ and the fact that if the tangent flow on an invariant compact set $\Lambda_0$ for $Y_0$ is dominated, it is still the case for any compact set $\Lambda$ Hausdorff close to $\Lambda_0$ and vector fields $Y$ $C^1$-close to $Y_0$.
Let $\Lambda$ be a chain-transitive set of $X$ such that the tangent flow on $\Lambda$ has a dominated splitting $E\oplus F$. If $\Lambda$ is non-singular, let us consider the two disjoint invariant compact sets $K_E:=\{x, X(x)\in E\}$ and $K_F:=\{x, X(x)\in F\}$. For $\varepsilon>0$ let $A:=\{x, d(X(x),E)\geq \varepsilon\}$. By domination, the set $A$ is sent into its interior by large forward iterates. The chain-transitivity implies that $\Lambda=A$ or $A=\emptyset$. Since the orbit of any point $x\notin K_E\cup K_F$ accumulates to $K_F$ in the future and to $K_E$ in the past, this gives $\Lambda=K_E$ or $\Lambda=K_F$. Without loss of generality we assume the first case. Then $\operatorname{dim}(E)=2$, since otherwise $F$ is uniformly expanded by the domination and $\Lambda$ is a source. Thus for any $x\in \Lambda$, ${\cal E}} \def\cK{{\cal K}} \def\cQ{{\cal Q}} \def\cW{{\cal W}(x):=E(x)\cap \cN_x$ and ${\cal F}} \def\cL{{\cal L}} \def\cR{{\cal R}} \def\cX{{\cal X}(x):=(F(x)\oplus \RR X(x))\cap \cN_x$ are two one-dimensional lines which define two bundles invariant and dominated under $\psi$.
In the remaining case, $\Lambda$ contains a singularity $\sigma$. By Theorem~\ref{t.connecting}, $\Lambda$ is a chain-recurrence class. The existence of a dominated splitting is an open property: there are neighborhoods $\cU$, $U$ of $X$, $\Lambda$ such that for any $Y\in \cU$, the maximal invariant set of $Y$ in $U$ has a dominated splitting; in particular, it does not contain a homoclinic tangency of a periodic orbit. Hence the assumptions of Theorem~\ref{Thm:Lorenz-like} hold and the linear Poincar\'e flow on $\Lambda\setminus {\operatorname{Sing}}(X)$ is dominated. This proves one half of Theorem~\ref{Thm-domination}.
We now assume $\Lambda$ is a chain-transitive set of $X$ such that the linear Poincar\'e flow on $\Lambda\setminus {\operatorname{Sing}}(X)$ has a dominated splitting. If $\Lambda$ does not contain any singularity, it is a hyperbolic set by Theorem~\ref{t.hyperbolic}: hence it has a dominated splitting also. Thus we assume that $\Lambda$ contains a singularity $\sigma$ { with stable dimension equal to $2$} (the case of dimension $1$ is similar). By Theorem~\ref{t.connecting}, $\Lambda$ is a chain-recurrence class and is Lyapunov stable.
By Proposition~\ref{p:Lorenz-like2}, any singularity $\widetilde \sigma\in \Lambda$ has { stable dimension equal to $2$}, real simple eigenvalues and satisfies $W^{ss}(\widetilde \sigma)\cap \Lambda=\{\widetilde \sigma\}$. Note that these properties still hold and the linear Poincar\'e flow is still dominated for the vector fields $C^1$-close and the chain-recurrence class of the continuation of $\sigma$: one uses that the chain-recurrence class of the continuation of $\sigma$ vary semi-continuously with the vector field $X$, that the set of singularities is finite and Proposition~\ref{p.robustness-DS}. We consider a vector field $Y$ that is $C^1$-close to $X$, and that is $C^2$,whose regular periodic orbits are hyperbolic and with no normally expanded invariant torus whose dynamics is topologically equivalent to an irrational flow (see Theorem~\ref{Lem:generic} in $\cX^3(M)$). In particular the periodic orbits in the chain-recurrence class $C(\sigma_Y)$ of the continuation of $\sigma$ for $Y$ are neither sources nor sinks and have a negative Lyapunov exponent. One can thus apply Theorem~A': there exists a dominated splitting for the tangent flow on $C(\sigma_Y)$. By our choice of the generic vector field $X$, this is also the case for the chain-recurrence class of $\sigma$ for $X$, which is the set $\Lambda$.
The Theorem~\ref{Thm-domination} is proved. \qed
\subsection{Dichotomy for three-dimensional vector fields}
We now complete the proofs of the Main Theorem and of Corollary~\ref{c.main}.
\begin{proof}[Proof of the Main Theorem] Let us consider a vector field $X$ in the intersection of the residual sets provided by Theorems~\ref{Thm-domination}, \ref{Lem:generic}, \ref{t.hyperbolic}, \ref{t.GY} and \ref{Thm:Lorenz-like}. Let us assume that it can not be accumulated in $\cX^1(M)$ by vector fields with homoclinic tangencies.
Let $C$ be a chain-recurrence class of $X$. By Theorem~\ref{Lem:generic}, if $C$ is an isolated singularity or a regular periodic orbit, it is hyperbolic. If $C$ is non-trivial, by Theorem~\ref{Thm:Lorenz-like} the linear Poincar\'e flow on $C\setminus {\operatorname{Sing}}(X)$ is dominated. Using Theorem~\ref{t.hyperbolic} if $C\cap {\operatorname{Sing}}(X)=\emptyset$, the class $C$ is hyperbolic. In the remaining case, $C$ is non-trivial, contains a singularity, the tangent flow on $C$ is dominated (by Theorem~\ref{Thm-domination}). Hence $C$ is singular hyperbolic (by Theorem~\ref{t.GY} ).
Since any chain-recurrence class of $X$ is singular hyperbolic, $X$ is singular hyperbolic. \end{proof}
\begin{proof}[Proof of Corollary~\ref{c.main}] Let $\cO$ be the set of $C^1$ vector fields on $M$ whose chain-recurrence classes are robustly transitive. We then introduce the dense set $\cU=\cO\cup(\cX^{1}(M)\setminus \overline{\cO})$.
We claim that $\cO$ (and thus $\cU$) is open. Indeed for $X\in \cO$, each chain-recurrence class is isolated in the chain-recurrent set: let us consider a class $C$; by semi-continuity of the chain-recurrence classes for the Hausdorff topology, if $C'$ is another class having a point close to $C$, it is contained in a small neighborhood of $C$, hence coincide with $C$ by definition of the robust transitivity. This implies that $X$ has only finitely many chain-recurrence classes $C_1,\dots,C_k$. By robust transitivity, each of them admits a neighborhood $U_1,\dots,U_k$ so that for any $Y$ close to $X$ in $\cX^1(M)$, the maximal invariant set in each $U_i$ is robustly transitive. By semi-continuity of the chain-recurrence classes, each class of $Y$ has to be contained in one of the $U_i$, hence is robustly transitive, as required.
Let $\cG$ be the dense G$_\delta$ set of vector fields in $\cX^1(M)$ such that Theorem~\ref{t.robust-transitivity} holds. Let us consider any $X\in \cU$ that can not be approximated by vector fields exhibiting a homoclinic tangency. By the Main Theorem, there exists $X'$ arbitrarily close to $X$ in $\cX^1(M)$ which is singular hyperbolic. Since singular hyperbolicity is an open property and $\cG$ is dense, one can also require that $X'\in \cG$, hence each chain-recurrence class of $X$ is robustly transitive. We have thus shown that $X\in \overline \cO$. By definition of $\cU$ this gives $X\in \cO$ and the Corollary follows. \end{proof}
\small
\vskip 20pt
\begin{tabular}{l l l} \emph{\normalsize Sylvain Crovisier} & \quad\quad \quad & \emph{\normalsize Dawei Yang}
\\
Laboratoire de Math\'ematiques d'Orsay && School of Mathematical Sciences\\ CNRS - Universit\'e Paris-Sud && Soochow University\\ Orsay 91405, France && Suzhou, 215006, P.R. China\\ \texttt{[email protected]} && \texttt{[email protected], [email protected]} \end{tabular}
\end{document} | arXiv |
\begin{document}
\author{Michele Fornea} \email{[email protected]} \address{Columbia University, New York, USA.} \author{Zhaorong Jin} \email{[email protected]} \address{Princeton University, Princeton, USA.}
\classification{11F33, 11F41, 11F67, 11F80, 11G35, 11G40.}
\title{Hirzebruch--Zagier classes and rational elliptic curves over quintic fields}
\begin{abstract}
Conditionally on a conjecture on the \'etale cohomology of Hilbert modular surfaces and some minor technical assumptions, we establish new instances of the equivariant BSD-conjecture in rank $0$ with applications to the arithmetic of rational elliptic curves over quintic fields. The key ingredients are a refinement of twisted triple product \emph{$p$-adic $L$-functions}, the construction of a compatible collection of \emph{Hirzebruch--Zagier cycles} and an \emph{explicit reciprocity law} relating the two. \end{abstract} \maketitle
\vspace*{6pt}\tableofcontents
\section{Introduction} The most general result towards the BSD-conjecture was established by Shouwu Zhang and his school \cite{Heights} as a major generalization of the methods of Gross-Zagier \cite{GZformula} and Kolyvagin \cite{Koly}. The result states that if $E_{/F}$ is a modular elliptic curve over a totally real field $F$ such that either $E_{/F}$ has at least one prime of multiplicative reduction or $[F:\mathbb{Q}]$ is odd, then \[
r_\mathrm{an}(E/F)\in\{0,1\}\implies r_\mathrm{an}(E/F)=r_\mathrm{alg}(E/F). \] It is important not to forget that the modularity of $E_{/F}$ is currently the only known way to access the analytic properties of the $L$-function $L(E/F,s)$. Because of this, it becomes natural to expect that cycles on Shimura varieties will play a role in any strategy to establish the BSD-conjecture.
The three pillars of Gross-Zagier and Kolyvagin's approach are: $(i)$ the existence of a non-constant map $X_{/F}\to E_{/F}$ from a Shimura curve to the elliptic curve, $(ii)$ the existence of CM points on $X_{/F}$ with their significance for Selmer groups, and $(iii)$ formulas for the derivative of certain base-change $L$-functions of $E_{/F}$ in terms of the height of images of CM points, called Heegner points. These three items are at the same time the strengths and the limitations of the most effective strategy developed so far to prove instances of the BSD-conjecture. Firstly, the strong form of geometric modularity in $(i)$ can only be realized for certain elliptic curves over totally real fields, hence the first pillar topples down right away when considering elliptic curves defined over general number fields. However, a lot can be proved using congruences for general rank zero elliptic curves over totally real fields, as it was shown by Longo \cite{Longo} and Nekovar \cite{LevelRaisingNekovar}. Secondly, suppose one fixed an elliptic curve over a totally real field $F$ and took a finite extension $K/F$; what could be said about the BSD-conjecture for $E_{/K}$? In this case, even though a modular parametrization could still be available, one would lack a systematic way to produce points over extensions of a general $K$. Indeed, Heegner points are defined over certain dihedral extensions of $F$, and therefore miss all non-solvable extensions. Finally, what if one contented themselves with tackling the BSD-conjecture over totally real fields? In this case all the pillars could still be standing, but $(ii)$ and $(iii)$ would have nothing to say about higher rank situations. The striking feature of CM points is their explicit relation to \emph{first} derivative of $L$-functions; thus, as soon as the rank is greater than or equal to two, they become torsion.
In recent years the $p$-adic approach to the BSD-conjecture of Coates and Wiles \cite{CoatesWiles} has been revitalized by Bertolini, Darmon and Rotger (\cite{BDR}, \cite{DR2}). In their works the focus is to explore the arithmetic of rational elliptic curves over field extensions that are not contained in ring class fields of quadratic imaginary fields. The present paper is part of this new line of inquiry and studies the arithmetic of rational elliptic curves over non-solvable quintic fields.
\subsubsection{The equivariant BSD-conjecture.}
Let $F$ be a totally real number field and $K/F$ a finite Galois extension. For any elliptic curve $E_{/F}$, the Galois group $G(K/F)$ naturally acts on the $\mathbb{C}$-vector space $E(K)\otimes\mathbb{C}$ generated by the group of $K$-rational points. Since complex representations of finite groups are semisimple, the representation $E(K)\otimes\mathbb{C}$ decomposes into a direct sum of $\varrho$-isotypic components $E(K)^\varrho=\mathrm{Hom}_{G(K/F)}(\varrho,E(K)\otimes\mathbb{C})$, indexed by irreducible representations $\varrho\in\mathrm{Irr}\big( G(K/F)\big)$, each with its multiplicity. It is natural to define the algebraic rank of $E$ with respect to some $\varrho$ as \[ r_\mathrm{alg}(E,\varrho):=\dim_\mathbb{C}E(K)^\varrho. \] On the analytic side, for any $\varrho\in\mathrm{Irr}\big( G(K/F)\big)$ one can define a twisted $L$-function $L(E,\varrho,s)$ as the $L$-function associated to the Galois representation $\varrho\otimes\mathrm{V}_p(E)$ of the absolute Galois group of $F$. If $L(E,\varrho,s)$ admitted meromorphic continuation to the whole complex plane, then the analytic rank of $E$ with respect to some $\varrho$ could be defined as \[ r_\mathrm{an}(E,\varrho):=\mathrm{ord}_{s=1}L(E,\varrho,s). \] The Artin formalism of $L$-functions can be used to show that the BSD-conjecture for an elliptic curve $E_{/F}$ base-changed to $K$ should be equivalent to the equality of ranks \[ r_\mathrm{an}(E,\varrho)\overset{?}{=}r_\mathrm{alg}(E,\varrho)\quad \text{for all}\quad \varrho\in\mathrm{Irr}\big( G(K/F)\big). \]
The advantage of this point of view resides in the fact that it splits the BSD-conjecture into more manageable pieces. Specifically, when the considered representation $\varrho$ arises from an automorphic form, the right framework to be explored becomes apparent. For example, Bertolini, Darmon and Rotger \cite{BDR} proved new instances of the conjecture in rank $0$ for rational elliptic curves in the case of $\varrho$ an odd, irreducible two dimensional Artin representation. By modularity, both $\varrho$ and $E_{/\mathbb{Q}}$ correspond to some automorphic representation of $\mathrm{GL}_{2,\mathbb{Q}}$, thus it should not come as a total surprise that the main theorem of \cite{BDR} is obtained by a careful analysis of elements of higher Chow groups of a product of modular curves. Another noteworthy result was obtained by Darmon and Rotger \cite{DR2} when $\varrho$ is the tensor product of two odd, irreducible two dimensional Artin representations. In this case, generalized Kato classes -- constructed from diagonal cycles on triple products of modular curves -- are used to establish the first cases of the BSD-conjecture in rank $0$ for rational elliptic curves over $A_5$-quintic extensions of $\mathbb{Q}$.
Bhargava \cite{Bar5} showed that $100\%$ of quintic fields have Galois group isomorphic to $S_5$. To access some of these quintic fields, in this paper we are interested in the case of $\varrho$ the tensor induction of a totally odd, irreducible two dimensional Artin representation of the absolute Galois group of a real quadratic field. Conditionally on a conjecture on the \'etale cohomology of Hilbert modular surfaces (\ref{wishingOhta1}) and some technical assumptions, we prove new instances of the equivariant BSD-conjecture in rank $0$ by analyzing Hirzebruch--Zagier (HZ) cycles on a product of a Hilbert modular surface and a modular curve.
\subsection{Main results} Let $L$ be a real quadratic field and $\varrho:\Gamma_L\to\mathrm{GL}_2(\mathbb{C})$ a totally odd, irreducible two-dimensional Artin representation of the absolute Galois group of $L$. The Asai representation \[ \mathrm{As}(\varrho)=\otimes\mbox{-}\mathrm{Ind}_L^\mathbb{Q}(\varrho) \] is a $4$-dimensional complex representation obtained as the tensor induction of $\varrho$ from $\Gamma_L$ to $\Gamma_\mathbb{Q}$.
We suppose that $\varrho$ has conductor $\mathfrak{Q}$ and that the tensor induction of the determinant $\det(\varrho)$ is the trivial character so that $\mathrm{As}(\varrho)$ is self-dual.
For any rational elliptic curve $E_{/\mathbb{Q}}$ of conductor $N$ prime to $\frak{Q}$, we are interested in understanding when \[ r_\mathrm{an}(E,\mathrm{As}(\varrho))\overset{?}{=}r_\mathrm{alg}(E,\mathrm{As}(\varrho)). \] We rely on the modularity of totally odd Artin representations and rational elliptic curves (\cite{W}, \cite{TW}, \cite{PS}) to establish the meromorphic continuation, functional equation and analyticity at the center of the twisted triple product $L$-function $L\big(E,\mathrm{As}(\varrho),s\big)$.
\begin{thmx} Suppose that $N$ is coprime to $\mathfrak{Q}$, split in $L$, and there exists an ordinary prime $p\nmid 2N\cdot\frak{Q}$ for $E_{/\mathbb{Q}}$ such that
\begin{itemize}
\item[($1$)] $p$ splits in $L$ with narrowly principal factors;
\item[($2$)] there is no totally positive unit in $L$ congruent to $-1$ modulo $p$;
\item[($3$)] the eigenvalues of $\mathrm{Fr}_p$ on $\mathrm{As}(\varrho)$ are all distinct modulo $p$.
\end{itemize} If, additionally, $\varrho$ is residually not solvable and Conjecture \ref{wishingOhta1} holds, then \[ r_\mathrm{an}\big(E,\mathrm{As}(\varrho)\big)=0\quad\implies\quad r_\mathrm{alg}\big(E,\mathrm{As}(\varrho)\big)=0. \] \end{thmx}
\begin{remark} Conditions ($1$),($2$),($3$) on the auxiliary ordinary prime $p$ are minor technical assumptions and they can always be satisfied in our applications (Proposition \ref{choiceofp}).
\noindent We introduce ($1$) in Section \ref{section AJ p-adic} to relate the action of certain Hecke correspondences on different Shimura varieties, while ($2$) appears in Proposition \ref{prop comparison different models} to compare Hilbert modular surfaces with different level structures. Furthermore, we use ($3$) in Proposition \ref{somekindoffil} to obtain a Galois stable filtration in a projective limit of \'etale cohomology groups, and in Theorem \ref{Main Theorem} to ensure that the four global cohomology classes we constructed are linearly independent.
\noindent Regarding the remaining assumptions, we require $\varrho$ to be residually not solvable in Section \ref{geomrealiz} to apply recent results of Caraiani--Tamiozzo \cite{Caraiani-Tamiozzo}. This assumption is satisfied in our applications because we consider Artin representations with projective image isomorphic to $A_5$. The final hypothesis, Conjecture \ref{wishingOhta1}, appears in Section \ref{motivic p-adic L-function} and we refer to Section \ref{ontheconjectures} for a detailed discussion. \end{remark}
\noindent We apply Theorem A to Artin representations constructed in \cite{MicAnalytic}. The outcome is a result concerning the arithmetic of rational elliptic curves over $S_5$-quintic fields.
\begin{corollary}\label{CorolQuintic}
Let $K/\mathbb{Q}$ be a non-totally real $S_5$-quintic extension whose Galois closure contains a real quadratic field $L$. Suppose that $N$ is odd, unramified in $K/\mathbb{Q}$ and split in $L$, and that Conjecture \ref{wishingOhta1} holds, then
\[
r_\mathrm{an}(E/K)=r_\mathrm{an}(E/\mathbb{Q})\quad\implies\quad r_\mathrm{alg}(E/K)=r_\mathrm{alg}(E/\mathbb{Q}).
\] \end{corollary}
\noindent The strategy of proof of these results consists in producing enough global cohomology classes functioning as annihilators, and whose non-triviality is controlled by an automorphic $L$-value. As one expects cycles on Shimura varieties to play a prominent role in any plan to establish cases of the BSD-conjecture, the \'etale Abel--Jacobi map becomes a pivotal tool to convert null-homologous cycle classes into Selmer classes. However, the representation \[ \mathrm{V}_{\varrho,E}:=\mathrm{As}(\varrho)\otimes\mathrm{V}_p(E) \] is not known to appear in the \'etale cohomology of a Shimura variety and even if it was, we are looking for annihilators of Mordell-Weil groups, not Selmer classes. To our rescue comes the idea of \emph{$p$-adic deformation}: Corollary \ref{correctspec} allows us to realize $\mathrm{V}_{\varrho,E}$ as the $p$-adic limit of Galois representations appearing in the \'etale cohomology of Shimura threefolds. Thus, we can obtain the sought-after annihilators as limits of Abel--Jacobi images of HZ-cycles, that need not remain Selmer at $p$. The compatible collection of HZ-cycles also gives rise to a \emph{motivic} $p$-adic $L$-function via Perrin-Riou's machinery. Conjecture \ref{wishingOhta1} ensures that the non-triviality of a value of that function implies the non-triviality of the annihilators. Finally, the last step of the proof's startegy entails an explicit reciprocity law (Theorem \ref{comparison aut-mot}) comparing the motivic with the \emph{automorphic} $p$-adic $L$-function, the latter retaining information about the automorphic $L$-value.
In the remaining of the introduction we present in more detail the main steps of the proof.
\subsection{Overview of the proof} Let $\mathsf{g}_\varrho$ be the Hilbert cuspform of parallel weight one associated to the Artin representation $\varrho:\Gamma_L\to\mathrm{GL}_2(\mathbb{C})$, and let $\mathsf{f}_E$ be the elliptic cuspform associated to $E_{/\mathbb{Q}}$. After choosing a rational prime $p$ and ordinary $p$-stabilizations $\mathsf{g}_\varrho^{\mbox{\tiny $(p)$}}$, $\mathsf{f}_E^{\mbox{\tiny $(p)$}}$, one can find Hida families $\mathscr{G}, \mathscr{F}$ -- over $\boldsymbol{\cal{W}}_{\mathscr{G}}=\mathrm{Spf}(\mathbf{I}_\mathscr{G})^\mathrm{rig}$ and $\boldsymbol{\cal{W}}_{\mathscr{F}}=\mathrm{Spf}(\mathbf{I}_\mathscr{F})^\mathrm{rig}$ respectively -- passing through them. These families are equipped with big Galois representations of the absolute Galois group of $\mathbb{Q}$, $\mathrm{As}(\mathbf{V}_\mathscr{G})$ of rank $4$ and $\mathbf{V}_\mathscr{F}$ of rank $2$, each interpolating the representations of the eigenforms in the families. Specifically, there are arithmetic points $\mathrm{P}_\circ\in\boldsymbol{\cal{W}}_\mathscr{G}$, $\mathrm{Q}_\circ\in\boldsymbol{\cal{W}}_\mathscr{F}$ such that \[ \mathrm{As}(\mathbf{V}_{\mathscr{G}_{\mathrm{P}_\circ}})\cong\mathrm{As}(\varrho)\qquad \text{and}\qquad\mathbf{V}_{\mathscr{F}_{\mathrm{Q}_\circ}}\cong\mathrm{V}_p(E). \] One can show that there exists a twist $\mathbf{V}_{\mathscr{G},\mathscr{F}}^\dagger$ of \[ \mathbf{V}_{\mathscr{G},\mathscr{F}}=\mathrm{As}(\mathbf{V}_{\mathscr{G}})(-1)\otimes \mathbf{V}_{\mathscr{F}} \] interpolating Kummer self-dual representations: for any pair of arithmetic points $\mathrm{P}\in \boldsymbol{\cal{W}}_\mathscr{G}$ and $\mathrm{Q}\in \boldsymbol{\cal{W}}_\mathscr{F}$ the specialization \[ \mathbf{V}_{\mathscr{G}_{\mathrm{P}},\mathscr{F}_{\mathrm{Q}}}^\dagger=\Big( \mathrm{As}(\mathbf{V}_{\mathscr{G}_{\mathrm{P}}})(-1)\otimes \mathbf{V}_{\mathscr{F}_{\mathrm{Q}}}\Big)^\dagger \] is the Kummer self-dual twist of the $8$-dimensional Galois representation attached to $\mathscr{G}_\mathrm{P}$ and $\mathscr{F}_\mathrm{Q}$. In particular, the specialization at the arithmetic points $\mathrm{P}_\circ\in\boldsymbol{\cal{W}}_\mathscr{G}$, $\mathrm{Q}_\circ\in\boldsymbol{\cal{W}}_\mathscr{F}$ is \[ \mathbf{V}_{\mathscr{G}_{\mathrm{P_\circ}},\mathscr{F}_{\mathrm{Q}_\circ}}^\dagger= \mathrm{V}_{\varrho,E}. \]
When the pair $(\mathrm{P},\mathrm{Q})$ is $\mathbb{Q}$-dominated (\cite{BlancoFornea}, Definition 1.3), Ichino's formula (\cite{I}) gives an expression for the central $L$-value \[ L\Big(\mathbf{V}_{\mathscr{G}_\mathrm{P},\mathscr{F}_\mathrm{Q}}^\dagger, c\Big) \] which is suitable for $p$-adic interpolation. It is then possible to construct a rigid meromorphic function $\mathscr{L}_p(\breve{\mathscr{G}},\mathscr{F}):\boldsymbol{\cal{W}}_{\mathscr{G},\mathscr{F}}\to\mathbb{C}_p$ whose values at crystalline $\mathbb{Q}$-dominated pairs satisfy \[ \mathscr{L}_p(\breve{\mathscr{G}},\mathscr{F})(\mathrm{P},\mathrm{Q})\overset{\cdot}{\sim} L^\mathrm{alg}\Big(\mathbf{V}_{\mathscr{G}_\mathrm{P},\mathscr{F}_\mathrm{Q}}^\dagger, c\Big), \] and whose values at certain crystalline points outside the range of interpolation are related to the syntomic Abel--Jacobi image of \emph{generalized Hirzebruch--Zagier cycles} (\cite{BlancoFornea}, Theorem 1.7).
\noindent In the present work we focus our attention on the one-variable Hida family $\mathscr{G}$ interpolating parallel weight Hilbert cuspforms. By restricting the number of variables we are able to refine the construction of $\mathscr{L}_p(\breve{\mathscr{G}},\mathscr{F})$ and obtain a rigid analytic function on the disk $\boldsymbol{\cal{W}}_{\mathscr{G}}$, the \emph{automorphic $p$-adic $L$-function} (Definition \ref{autpadicLfun}) \[ \mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_E):\boldsymbol{\cal{W}}_{\mathscr{G}}\longrightarrow\mathbb{C}_p, \] whose value at the arithmetic point $\mathrm{P}_\circ$ of weight one is equal, up to a non-zero constant, to \[ \mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_E)(\mathrm{P}_\circ)\overset{\cdot}{\sim} L^\mathrm{alg}(E,\mathrm{As}(\varrho), 1). \] Furthermore, its value at \emph{any} arithmetic point $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight $2$ is explicitly given in terms of $p$-adic cuspforms. In other words, we construct a rigid analytic function containing the information about the vanishing or non-vanishing of an automorphic $L$-value at $\mathrm{P}_\circ\in\boldsymbol{\cal{W}}_\mathscr{G}$, and whose values at \emph{every} arithmetic points of weight $2$ have the potential of being related to syntomic Abel--Jacobi images of algebraic cycles.
\subsubsection{Hirzebruch--Zagier classes.} When working with the one-variable ordinary Hida family $\mathscr{G}$ and the single form $\mathsf{f}_E$, the big Galois representation takes a simpler form: there is a $\mathbf{I}_\mathscr{G}^\times$-valued character such that \[ \mathbf{V}_{\mathscr{G},E}^\dagger= \mathrm{As}(\mathbf{V}_{\mathscr{G}})^\dagger(-1)\otimes \mathrm{V}_p(E). \] The realization of this Galois representation in the cohomology of a tower of threefolds with increasing level at $p$ plays a crucial role in the construction of the sought-after annihilators. Suppose $p$ is a rational prime splitting in the real quadratic field $L$, and write $p\cal{O}_L=\frak{p}_1\frak{p}_2$.
\begin{definition}
For any $\alpha\ge1$ and any compact open $K \le \mathrm{GL}_2(\mathbb{A}_{L,f})$ hyperspecial at $p$, we set \[
K_{\diamond,t}(p^\alpha):=\left\{\begin{pmatrix}a&b\\c&d \end{pmatrix}\in K_0(p^\alpha)\Big\lvert\ a_{\mathfrak{p}_1}d_{\mathfrak{p}_1}\equiv a_{\mathfrak{p}_2}d_{\mathfrak{p}_2},\ d_{\mathfrak{p}_1}d_{\mathfrak{p}_2}\equiv 1 \pmod{p^{\alpha}}\right\} \] and denote by $S(K_{\diamond,t}(p^\alpha))$ the corresponding Hilbert modular surface. \end{definition} \noindent The reason for considering these unusual level structures is that for any arithmetic point $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight $2$ and level $p^\alpha$ the interior cohomology admits a Galois equivariant surjection \[ \mathrm{H}^2_!\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},E_\wp(2)\big)\twoheadrightarrow \mathrm{As}(\mathbf{V}_{\mathscr{G}_{\mathrm{P}}}). \] Inspired by \cite{DR2}, for every $\alpha\ge1$ we produce a null-homologous codimension $2$ cycle, called \emph{Hirzebruch--Zagier cycle}, \[ \Delta_\alpha^\circ\in\mathrm{CH}^2\big(Z_\alpha(K)\big)\big(\mathbb{Q}(\zeta_{p^\alpha})\big)\otimes\mathbb{Z}_p \] on the Shimura threefold $Z_\alpha(K)=S(K_{\diamond,t}(p^\alpha))\times X_0(Np)$. Moreover, the action of $\mathrm{Gal}(\mathbb{Q}(\zeta_{p^\alpha})/\mathbb{Q})$ is such that $\Delta^\circ_\alpha$ corresponds to a null-homologous rational cycle class \[ \Delta_\alpha^\circ\in\mathrm{CH}^2\big(Z^\dagger_\alpha(K)\big)(\mathbb{Q})\otimes\mathbb{Z}_p \]
on a twisted threefold $Z^\dagger_\alpha(K)$ with the following appealing property: for every arithmetic point $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight $2$ and level $p^\alpha$ there are Galois equivariant surjections \[ \mathrm{H}^3_!\big(Z^\dagger_\alpha(K)_{\bar{\mathbb{Q}}},E_\wp(2)\big)\twoheadrightarrow \mathrm{As}(\mathbf{V}_{\mathscr{G}_{\mathrm{P}}})^\dagger(-1)\otimes \mathrm{V}_p(E). \] The ordinary parts of the Abel--Jacobi images \[ \mathrm{AJ}^{\acute{\mathrm{e}}\mathrm{t}}_p(\Delta_\alpha^\circ)\in\mathrm{H}^1\big(\mathbb{Q},\mathrm{H}^3_!\big(Z^\dagger_\alpha(K)_{\bar{\mathbb{Q}}},O(2)\big)\big) \] can be made compatible under the degeneracy maps $\varpi_2:Z^\dagger_{\alpha+1}(K)\to Z^\dagger_\alpha(K)$ and packaged together to form a global big cohomology class (Definition \ref{BigCohomology}) \[ \boldsymbol{\kappa}_{\mathscr{G},E}\in\mathrm{H}^1\big(\mathbb{Q},\boldsymbol{\cal{V}}_{\mathscr{G},E}\big). \] This class retains information about the Abel--Jacobi image of algebraic cycles at arithmetic points of weight two and it can be specialized at the arithmetic point $\mathrm{P}_\circ\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight one. Then, Corollary \ref{correctspec} implies that we obtain a $\mathrm{V}_{\varrho,E}$-valued global class by specialization \[ \kappa_E(\mathsf{g}_\circ^{\mbox{\tiny $(p)$}})\in\mathrm{H}^1\big(\mathbb{Q},\mathrm{V}_{\varrho,E}\big). \] In order to make apparent the relationship between $\boldsymbol{\kappa}_{\mathscr{G},E}$ and the automorphic $p$-adic $L$-function, we use Perrin-Riou's machinery to fabricate the \emph{motivic $p$-adic $L$-function}.
\subsubsection{The motivic $p$-adic $L$-function.} This construction is naturally divided in two steps.
First, the localization at $p$ of the big cohomology class can be projected to a Galois cohomology group \[ \boldsymbol{\kappa}_p:=\mathrm{Im}(\boldsymbol{\kappa}_{\mathscr{G},E}) \in \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{U}}^E_\mathscr{G}(\boldsymbol{\Theta})\big) \] valued in a subquotient $\boldsymbol{\cal{U}}^E_\mathscr{G}(\boldsymbol{\Theta})$ of the Galois module $\boldsymbol{\cal{V}}_{\mathscr{G},E}$ on which $\Gamma_{\mathbb{Q}_p}$ acts through characters. Perrin-Riou's big logarithm (Proposition \ref{prop: big log}) valued in the big Dieudonn\'e module $\mathbb{D}(\boldsymbol{\cal{U}}^E_\mathscr{G})$ gives an element \[ \boldsymbol{\cal{L}}(\boldsymbol{\kappa}_p)\in\mathbb{D}\big(\boldsymbol{\cal{U}}^E_\mathscr{G}\big) \]
interpolating the Bloch--Kato logarithm of the specialization of the class at arithmetic points of weight $\ge2$, and the Bloch--Kato dual exponential at the arithmetic point $\mathrm{P}_\circ\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight one.
The second step entails the definition of a linear map ($\mathbf{I}_\mathscr{G}$-valued assuming Conjecture \ref{wishingOhta1})
\begin{equation}\label{linmap}
\big\langle\ ,\omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\big\rangle: \mathbb{D}\big(\boldsymbol{\cal{U}}^E_\mathscr{G}\big)\longrightarrow \mathbf{I}_\mathscr{G}
\end{equation}
producing rigid-analytic functions out of elements of the Dieudonn\'e module.
Then, the motivic $p$-adic $L$-function is defined by setting (Definition \ref{motpadicLfun}) \[ \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},E):=\big\langle\boldsymbol{\cal{L}}(\boldsymbol{\kappa}_p) ,\ \omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\big\rangle. \] As the notation suggests, the value at \emph{every} arithmetic point $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight two \[ \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},E)(\mathrm{P})\overset{\cdot}{\sim}\big\langle\log_\mathrm{BK}(\boldsymbol{\kappa}_p(P)) ,\ \omega_{\breve{\mathscr{G}_\mathrm{P}}}\otimes\eta_\circ'\big\rangle_\mathrm{dR} \] is computed by the de Rham pairing between the Bloch--Kato logarithm of the specialization of the big class and a de Rham class associated to the cuspforms $\mathscr{G}_\mathrm{P}$ and $\mathsf{f}_E$. Crucially, these quantities are values of \emph{syntomic Abel--Jacobi images} of HZ-cycles.
\subsubsection{Explicit reciprocity law.} The comparison of the two $p$-adic $L$-functions is the bridge between the automorphic and the algebro-geometric worlds. It transfers information about the non-vanishing of an automorphic $L$-value into information on the non-triviality of annihilators of Mordell-Weil groups. It is achieved by an explicit reciprocity law (Theorem \ref{comparison aut-mot}) \[ \boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}(\mathrm{P})\cdot \mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) = \mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) \] for arithmetic points $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight two. The proof relies on an explicit expression of syntomic Abel--Jacobi images of HZ-cycles in terms of $p$-adic modular forms (Theorem \ref{AJ formula}). Then, the key implication is given by the following theorem.
\begin{thmx}
Suppose that $N$ is coprime to $\mathfrak{Q}$, split in $L$, and there exists an ordinary prime $p\nmid 2N\cdot\frak{Q}$ for $E_{/\mathbb{Q}}$ such that
\begin{itemize}
\item[($1$)] $p$ splits in $L$ with narrowly principal factors;
\item[($2$)] there is no totally positive unit in $L$ congruent to $-1$ modulo $p$;
\item[($3$)] the eigenvalues of $\mathrm{Fr}_p$ on $\mathrm{As}(\varrho)$ are all distinct modulo $p$.
\end{itemize}
If, additionally, $\varrho$ is residually not solvable and Conjecture \ref{wishingOhta1} holds, then for any choice of an ordinary $p$-stabilization $\mathsf{g}_\varrho^{\mbox{\tiny $(p)$}}$ of $\mathsf{g}_\varrho$, \[ L(E,\mathrm{As}(\varrho), 1)\not=0 \qquad\implies\qquad\kappa_E(\mathsf{g}_\circ^{\mbox{\tiny $(p)$}})\in\mathrm{H}^1\big(\mathbb{Q},\mathrm{V}_{\varrho,E}\big)\quad \text{not Selmer at $p$}. \] \end{thmx} \noindent The result is used as follows: by assuming that $p$ splits in $L$ and the eigenvalues of $\mathrm{Fr}_p$ on $\mathrm{As}(\varrho)$ are all distinct, the eigenform $\mathsf{g}_\varrho$ has four distinct ordinary $p$-stabilizations. Hence, we obtain four global cohomology classes by repeatedly applying Theorem B, and their images are linearly independent in the singular quotient at $p$ (Theorem \ref{criterion crystalline}). As the self-dual representation $\mathrm{As}(\varrho)$ is four dimensional, these annihilators suffice to prove that the relevant part of the Mordell-Weil group is trivial (Lemma \ref{zerolocalization}).
\subsection{On the conjecture}\label{ontheconjectures} We conclude the introduction by discussing the conjecture on the cohomology of Hilbert modular surfaces that we assume in our work. Consider the Galois module \[ \cal{V}_\alpha:=e_\mathrm{n.o.}\mathrm{H}^2_!\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O(2)\big). \] The $0$-th graded piece $\mathrm{Gr}^0\cal{V}_\alpha$ of the \'etale cohomology, with respect to the ordinary filtration, is an unramified $\Gamma_{\mathbb{Q}_p}$-representation (see Proposition \ref{somekindoffil}), therefore \[ \mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\alpha\big):=\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes\widehat{\mathbb{Z}}_p^\mathrm{ur}\big)^{\Gamma_{\mathbb{Q}_p}} \] is a lattice in the de-Rham cohomology group $\mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes_OE_\wp\big)$. We are interested in comparing two integral structure for the de Rham cohomology of Hilbert modular surfaces: one coming from integral \'etale cohomology, the other arising from ordinary Hilbert modular forms of parallel weight two.
\begin{conjecture}\label{wishingOhta1}
For every large enough prime $p$ and every $\alpha\ge1$ the image of the natural map
\[
S^\mathrm{ord}_{2t_L,t_L}\big(K_{\diamond,t}(p^\alpha);O\big)
\longrightarrow
\mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes_OE_\wp\big)
\]
is contained in the lattice $\mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\alpha\big)$. \end{conjecture}
\noindent This conjecture (Conjecture \ref{wishingOhta} in the body of the article) is a generalization of (\cite{Ohta95}, Proposition 3.3.6). Ohta's proof relies on Jacobians of modular curves, and therefore it cannot be directly generalized to higher dimensional Shimura varieties. It seems reasonable to expect that the insights of Pilloni's higher Hida theory (\cite{HigherHida}) will help make progress on Conjecture \ref{wishingOhta1}. The crucial input of Conjecture \ref{wishingOhta1} in our work is in establishing the integrality of the linear map \eqref{linmap}. In particular, for $\mathrm{P}_\circ\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ the arithmetic point of weight one, it provides the implication \[ \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},E)(\mathrm{P}_\circ)\not=0\qquad\implies\qquad \boldsymbol{\kappa}_p(\mathrm{P}_\circ)\quad \text{non-trivial}. \]
\begin{remark}
The existence of the automorphic $p$-adic $L$-function can be used to meromorphically continue $\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},E)$ to the whole disk $\boldsymbol{\cal{W}}_\mathscr{G}$ without assuming Conjecture \ref{wishingOhta1} -- even though that is insufficient for our arithmetic applications. Indeed, the motivic $p$-adic $L$-function is a priori only an element of the huge ring $\boldsymbol{\Pi} \otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}$; concretely, this means that $\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},E)$ can be evaluated only at arithmetic points of weight $2$ of $\boldsymbol{\cal{W}}_\mathscr{G}$.
However, there is a natural inclusion $\mathbf{I}_\mathscr{G}\hookrightarrow\boldsymbol{\Pi} \otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}$ which in terms of functions corresponds to the restriction of the domain from $\boldsymbol{\cal{W}}_\mathscr{G}$ to the subset of arithmetic points of weight $2$. As Hida families are \'etale at arithmetic points of weight $2$, an element of $\boldsymbol{\Pi} \otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}$ is zero if and only if \emph{all} its specializations are zero. Therefore, Theorem \ref{comparison aut-mot} shows that $\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},E)$ extends to a rigid meromorphic function on $\boldsymbol{\cal{W}}_\mathscr{G}$. \end{remark}
\section{Review of Hilbert cuspforms} In this section $L$ denotes a totally real number field with ring of integers $\cal{O}_L$ and different $\mathfrak{d}_L$. The following algebraic groups play a prominent role in the article \begin{equation} D = \mathrm{Res}_{L/\mathbb{Q}}\big(\mathbb{G}_{m,L}\big) ,\qquad G = \mathrm{Res}_{L/\mathbb{Q}}\big(\mathrm{GL}_{2,L}\big) ,\qquad G^* = G \times_D \mathbb{G}_m. \end{equation} We denote by $\mathrm{I}_L$ the set of field embeddings of $L$ into $\overline{\mathbb{Q}}$, then there is an identification of $L_\infty := L\otimes_\mathbb{Q}\mathbb{R}$ with $\mathbb{R}^{\mathrm{I}_L}$. If $\frak{H}$ denotes the Poincar\'e upper half plane, the identity component $G(\mathbb{R})_+ $ of $G(\mathbb{R})= \mathrm{GL}_2(L_\infty)$ naturally acts on $\frak{H}^{\mathrm{I}_L}$. We denote by $i = \sqrt{-1} \in \frak{H}$ the square root of $-1$ belonging to $\frak{H}$ and $\mathbf{i} = (i,...,i) \in \frak{H}^{\mathrm{I}_L}$.
\begin{definition} Every element $s = \sum_{\tau}s_\tau\cdot[\tau]$ of the free group $\mathbb{Z}[\mathrm{I}_L]$ gives a power map \[ (-)^s:L\otimes_\mathbb{Q}\overline{\mathbb{Q}}_v\longrightarrow\overline{\mathbb{Q}}_v,\qquad \ell\otimes c\mapsto \prod_{\tau} \big(c\cdot\tau(\ell)\big)^{s_\tau} \] for any place $v$ of $\mathbb{Q}$. \end{definition}
\noindent The group $Z_L(1)$ is defined by the following short exact sequence \[\xymatrix{
1\ar[r]&\overline{L^\times \widehat{\cal{O}}_L^{p,\times} L_{\infty,+}^\times}\ar[r]& \mathbb{A}_L^\times \ar[r]&Z_L(1)\ar[r]&1, }\] and the $p$-adic cyclotomic character can be expressed as \begin{equation} \varepsilon_L: Z_L(1) \longrightarrow \mathbb{Z}^\times_p,\qquad y\mapsto y_p^{-t_L}\lvert y^\infty\rvert_{\mathbb{A}_L}^{-1} \end{equation} where $t_L := \sum_{\tau \in \mathrm{I}_L} [\tau]\ \in\ \mathbb{Z}[\mathrm{I}_L]$. Moreover, the canonical isomorphism $\mathbb{Z}_p^\times\cong(1+p\mathbb{Z}_p)\times\mu_{p-1}$ induces the factorization \begin{equation} \varepsilon_L=\eta_L\cdot\theta_L. \end{equation}
\subsection{Adelic Hilbert cuspforms} Let $K\le G(\mathbb{A}_f)$ be a compact open subgroup and $(k,w)\in\mathbb{Z}[\mathrm{I}_L]^2$ an element satisfying $k-2w=m\cdot t_L$, then a holomorphic Hilbert cuspform of weight $(k,w)$ and level $K$ is a function $\mathsf{f}:G(\mathbb{A})\to\mathbb{C}$ that satisfy the following properties: \begin{itemize} \item[$\bullet$] $\mathsf{f}(\alpha x u)=\mathsf{f}(x)j_{k,w}(u_\infty,\mathbf{i})^{-1}$ where $\alpha\in G(\mathbb{Q})$, $u\in K\cdot C_{\infty}^+$ for $C_{\infty}^+$ the stabilizer of $\mathbf{i}$ in $G(\mathbb{R})^+$, and where the automorphy factor is given by
$j_{k,w}\big(\gamma,z\big)=(ad-bc)^{-w}(cz+d)^k$ for $\gamma=\begin{pmatrix}a& b\\ c&d\end{pmatrix} \in G(\mathbb{R})$, $z\in\mathfrak{H}^{\mathrm{I}_L}$; \item[$\bullet$] for every finite adelic point $ x\in G(\mathbb{A}_f)$ the well-defined function $\mathsf{f}_x:\frak{H}^{\mathrm{I}_L}\to\mathbb{C}$ given by $\mathsf{f}_x(z)=\mathsf{f}(xu_\infty)j_{k,w}(u_\infty,\mathbf{i})$ is holomorphic, where for each $z\in\mathfrak{H}^{\mathrm{I}_L}$ one chooses $u_\infty\in G(\mathbb{R})_+$ such that $u_\infty\mathbf{i}=z$. \item[$\bullet$] for all adelic points $x\in G(\mathbb{A})$ and for all additive measures on $L\backslash\mathbb{A}_L$ we have
\[\int_{L\backslash\mathbb{A}_L}\mathsf{f}\bigg(\begin{pmatrix}1&a\\0&1\end{pmatrix}x\bigg)da=0.\] \item[$\bullet$] If the totally real field is the field of rational numbers, $L=\mathbb{Q}$, we need to impose the extra condition that for all finite adelic point $x\in G(\mathbb{A}^\infty)$ the function $\lvert \text{Im}(z)^\frac{k}{2}\mathsf{f}_x(z)\rvert$ is uniformly bounded on $\frak{H}$. \end{itemize} The $\mathbb{C}$-vector space of Hilbert cuspforms of weight $(k,w)$ and level $K$ is denoted by $S_{k,w}(K;\mathbb{C})$. Let $dx$ be the Tamagawa measure on the quotient $[G(\mathbb{A})] := \mathbb{A}_L^\times G(\mathbb{Q}) \backslash G(\mathbb{A})$. For any pair $\mathsf{f}_1, \mathsf{f}_2 \in S_{k,w}(K;\mathbb{C})$ of cuspforms whose weight satisfies $k-2w = m\cdot t_L$, their Petersson inner product is given by \begin{equation} \langle \mathsf{f}_1, \mathsf{f}_2 \rangle := \int_{[G(\mathbb{A})]} \mathsf{f}_1(x) \overline{\mathsf{f}_2(x)} \cdot\lvert\det(x)\rvert^m_{\mathbb{A}_L} dx. \end{equation}
\begin{definition} For any $\cal{O}_L$-ideal $\mathfrak{N}$ we consider the following compact open subgroups of $G(\mathbb{A}_f)$ \begin{itemize} \item[\bfcdot] $V_0(\mathfrak{N})=\bigg\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in G(\widehat{\mathbb{Z}})\bigg\lvert\ c\in\mathfrak{N}\widehat{\cal{O}}_L\bigg\}$, \item[\bfcdot] $V_1(\mathfrak{N})=\bigg\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in V_0(\mathfrak{N})\bigg\lvert\ d\equiv1 \pmod{\mathfrak{N}\widehat{\cal{O}}_L}\bigg\}$, \item[\bfcdot] $V^1(\mathfrak{N})=\bigg\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in V_0(\mathfrak{N})\bigg\lvert\ a\equiv1 \pmod{\mathfrak{N}\widehat{\cal{O}}_L}\bigg\}$, \item[\bfcdot] $V(\mathfrak{N})=V_1(\mathfrak{N})\cap V^1(\mathfrak{N})$. \end{itemize} \end{definition} When $K=V_1(\frak{N})$ for some $\cal{O}_L$-ideal $\frak{N}$ we will write $S_{k,w}(\frak{N};\mathbb{C})$ instead of $S_{k,w}(K;\mathbb{C})$.
\subsubsection{Adelic q-expansion.}
Let $\mathrm{cl}_L^+(\mathfrak{N}) := L^\times_+\backslash \mathbb{A}_{L,f}^\times /\det V(\mathfrak{N})$ be a narrow class group of $L$ of cardinality $h_L^+(\mathfrak{N})$, and fix a set of representatives $\{ a_i\}_i \subset \mathbb{A}_{L,f}^\times$ for $\mathrm{cl}_L^+(\mathfrak{N})$. Then the adelic points of $G$ can be written as a disjoint union \[ G(\mathbb{A}) = \coprod_{i=1}^{h_L^+(\mathfrak{N})} G(\mathbb{Q}) \begin{pmatrix}a_i^{-1}&0\\ 0&1\end{pmatrix} V(\mathfrak{N})G(\mathbb{R})_+ \] using strong approximation. Given a Hilbert cuspform $\mathsf{f}\in S_{k,w}(V(\frak{N});\mathbb{C})$ one can consider the holomorphic function $\mathsf{f}_i:\frak{H}^{\mathrm{I}_L}\to\mathbb{C}$ \[ \mathsf{f}_i(z)=y_\infty^{-w}\mathsf{f}\left(t_i\begin{pmatrix} y_\infty& x_\infty\\ 0&1 \end{pmatrix}\right)=\underset{\xi\in(\frak{a}_i\frak{d}_L^{-1})_+}{\sum}a(\xi,\mathsf{f}_i)e_L(\xi z) \] where $z=x_\infty+\mathbf{i}y_\infty$, $\frak{a}_i=a_i\cal{O}_L$ and $e_L(\xi z)=\text{exp}\big(2\pi i\sum_{\tau\in \mathrm{I}_L}\tau(\xi)z_\tau\big)$ for every index $i$. The Fourier expansions of these functions can be packaged together into a single adelic $q$-expansion:
\noindent fix a finite idele $\mathsf{d}_L\in\mathbb{A}_{L,f}^{\times}$ such that $\mathsf{d}_L\cal{O}_L=\frak{d}_L$. Let $L^\text{Gal}$ be the Galois closure of $L$ in $\overline{\mathbb{Q}}$ and write $\mathcal{V}$ for the ring of integers or a valuation ring of a finite extension $L_0$ of $L^\text{Gal}$ such that for every ideal $\frak{a}$ of $\cal{O}_L$, for all $ \tau\in \mathrm{I}_L$, the ideal $\frak{a}^\tau\mathcal{V}$ is principal. Choose a generator $\{\frak{q}^\tau\}\in\mathcal{V}$ of $\frak{q}^\tau\mathcal{V}$ for each prime ideal $\frak{q}$ of $\cal{O}_L$ and by multiplicativity define $\{\frak{a}^v\}\in\cal{V}$ for each fractional ideal $\frak{a}$ of $L$ and each $v\in\mathbb{Z}[\mathrm{I}_L]$. Then, we set $\{y^v\}:=\{y^v\cal{O}_L\}\in\cal{V}$ for each idele of $L$. Every idele $y$ in $\mathbb{A}^\times_{L,+}:=\mathbb{A}_{L,f}^{\times} L^\times_{\infty,+}$ can be written as $y=\xi a_i^{-1}\mathsf{d}_Lu$ for $\xi\in L_+^\times$ and $u\in\det V(\frak{N})L^\times_{\infty,+}$, then the following functions \[ \mathsf{a}(-,\mathsf{f}):\mathbb{A}^\times_{L,+}\longrightarrow\mathbb{C},\qquad \mathsf{a}_p(-,\mathsf{f}):\mathbb{A}^\times_{L,+}\longrightarrow\overline{\mathbb{Q}}_p \] are defined by \[ \mathsf{a}(y,\mathsf{f}):=a(\xi,\mathsf{f}_i)\{y^{w-t_L}\}\xi^{t_L-w}\lvert a_i\rvert_{\mathbb{A}_L}\qquad\text{and}\qquad \mathsf{a}_p(y,\mathsf{f}):=a(\xi,\mathsf{f}_i)y_p^{w-t_L}\xi^{t_L-w}\varepsilon_L(a_i)^{-1} \] if $y\in\widehat{\cal{O}_L}L^\times_{\infty,+}$ and zero otherwise.
\begin{theorem}{(\cite{pHida}, Theorem 1.1)}\label{thm: adelic q-exp}
Consider the additive character of the ideles $\chi_L:\mathbb{A}_L/L\to\mathbb{C}^\times$ which satisfies $\chi_L(x_\infty)=e_L(x_\infty)$. Each cuspform $\mathsf{f}\in S_{k,w}(V(\frak{N});\mathbb{C})$ has an adelic $q$-expansion of the form
\[
\mathsf{f}\left(\begin{pmatrix}
y & x \\
0 & 1
\end{pmatrix}\right)=\lvert y\rvert_{\mathbb{A}_L}\underset{\xi\in L_+}{\sum}\mathsf{a}(\xi y\mathsf{d}_L,\mathsf{f})\{(\xi y\mathsf{d}_L)^{t_L-w}\}(\xi y_\infty)^{w-t_L}e_L(\mathbf{i}\xi y_\infty)\chi_L(\xi x)
\]
for $y\in\mathbb{A}^\times_{L,+}$, $x\in\mathbb{A}^\times_L$, and the function $\mathsf{a}(-,\mathsf{f}):\mathbb{A}^\times_{L,+}\to \mathbb{C}$ vanishes outside $\widehat{\cal{O}}_LL^\times_{\infty,+}$.
\end{theorem}
\subsubsection{Diagonal restriction.} The degree map $\mathbb{Z}[\mathrm{I}_L]\to\mathbb{Z}$ denoted by $\ell\mapsto \lvert\ell\rvert$ satisfies $\lvert t_L\rvert=[L:\mathbb{Q}]$. For any positive integer $N$ the natural inclusion $\zeta:\text{GL}_2(\mathbb{A})\hookrightarrow\text{GL}_2(\mathbb{A}_L)$ defines by composition a \emph{diagonal restriction} map \[ \zeta^*:S_{k,w}(V(N\cal{O}_L);\mathbb{C})\to S_{\lvert k\rvert, \lvert w\rvert}(V(N);\mathbb{C}) \] from Hilbert cuspforms over $L$ to elliptic cuspforms. \begin{lemma}\label{q-exp diagonal}
Let $\mathsf{g}\in S_{\ell,x}(V(N\cal{O}_L);\overline{\mathbb{Q}})$ be a Hilbert cuspform over $L$, then for $y \in \widehat{\mathbb{Z}}\cdot \mathbb{R}^\times_{+}$ written as $y=\xi a_i^{-1}u$ for $\xi\in \mathbb{Q}^\times_+$ and $u\in\det(V(N))\mathbb{R}^\times_{+}$, we have
\[
\mathsf{a}_p(y,\zeta^*\mathsf{g})=y_p^{\lvert x\rvert-1}\xi^{1-\lvert x\rvert}\varepsilon_\mathbb{Q}(a_i)^{-1}\sum_{\mathrm{Tr}_{L/\mathbb{Q}}(\eta)=\xi}\mathsf{a}_p(y_\eta,\mathsf{g})(y_\eta)_p^{t_L-x}\eta^{x-t_L}
\]
where $\eta\in L^\times_+$ and $y_\eta=\eta a_i^{-1}\mathsf{d}_Lu$. \end{lemma} \begin{proof}
A direct computation. \end{proof} \begin{remark}
When $p$ is unramified in $L/\mathbb{Q}$ one sees that $y_p=(\xi u)_p$, $(y_\eta)_p=(\eta u)_p$ and that the formula becomes
\begin{equation}
\mathsf{a}_p(y,\zeta^*\mathsf{g})=u_p\varepsilon_\mathbb{Q}(a_i)^{-1}\sum_{\mathrm{Tr}_{L/\mathbb{Q}}(\eta)=\xi}\mathsf{a}_p(y_\eta,\mathsf{g}).
\end{equation} \end{remark}
\subsection{Hecke Theory} Let $K \le G(\mathbb{A}_f)$ be an open compact subgroup satisfying $V(\mathfrak{N})\le K\le V_0(\mathfrak{N}) $. Suppose $\cal{V}$ is the valuation ring corresponding to the fixed embedding $\iota_p: L^\mathrm{Gal} \hookrightarrow \overline{\mathbb{Q}}_p$, then we assume $\{\frak{q}\} = 1$ whenever the ideal $\frak{q}$ is prime to $p\cal{O}_L$. For every $g \in G(\mathbb{A})$, one can consider the following double coset operator $[KgK]$. By decomposing the double coset into a disjoint union \[ K g K = \coprod_i \gamma_i K, \] its action on Hilbert cuspforms of level $K$ is given by \begin{equation} \big([KgK]\mathsf{f}\big) (x) = \sum_i \mathsf{f}(x\gamma_i). \end{equation}
\begin{definition} For every prime ideal $\mathfrak{q}\le\cal{O}_L$ and a choice of uniformizer $\varpi_\mathfrak{q}$ of $\cal{O}_{L,\mathfrak{q}}$, the Hecke operators at $\frak{q}$ acting on $S_{k,w}(K;\mathbb{C})$ are defined as \[ T(\varpi_\mathfrak{q}) = \{\varpi_\mathfrak{q}^{w-t_L}\} \Big[ K\begin{pmatrix} \varpi_\mathfrak{q} & 0 \\ 0 & 1 \end{pmatrix} K \Big]. \] For every invertible element
$a \in \cal{O}_{L,\mathfrak{N}}^\times = \Pi_{\mathfrak{q} | \mathfrak{N}} \cal{O}_{L,\mathfrak{q}}^\times$ there a Hecke operator \[ T(a,1) = \Big[ K \begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix} K \Big]. \] For any element $z \in Z_G(\mathbb{A}_f)$ in the center of $G(\mathbb{A}_f)$, the associated diamond operator $\langle z \rangle$ acts through the rule $(\langle z \rangle\mathsf{f})(x) = \mathsf{f}(xz)$. \end{definition}
\noindent It turns out that if the ideal $\mathfrak{q}$ is coprime to the level $\mathfrak{N}$, then $T_0(\varpi_\mathfrak{q})$ and $\langle \varpi_\mathfrak{q} \rangle$ are independent of the particular choice of uniformizer $\varpi_\mathfrak{q}$, thus we simply denote them by $T(\mathfrak{q})$ and $\langle \mathfrak{q} \rangle$. However, if $\mathfrak{q}\mid \mathfrak{N}$, then $T_0(\varpi_\mathfrak{q})$ does depend on $\varpi_\mathfrak{q}$, and we denote it $U_0(\varpi_\mathfrak{q})$. Any $y \in \widehat{\cal{O}_L}\cap\mathbb{A}_L^\times$ can be written as \[ y = au \prod_\mathfrak{q} \varpi_\mathfrak{q}^{e(\mathfrak{q})}\qquad\text{for}\qquad a \in \cal{O}_{L,\mathfrak{N}}^\times,\ u \in \det V(\mathfrak{N}). \]
Write $\mathfrak{n}$ for the ideal $\big(\prod_{\mathfrak{q} \nmid \mathfrak{N}} \varpi_\mathfrak{q}^{e(\mathfrak{q})}\big) \cal{O}_L$, then we define the Hecke operators associated to the adele $y$ by \begin{equation} T(y) = T(a,1) T(\mathfrak{n}) \prod_{\mathfrak{q}\mid \mathfrak{N}} U(\varpi_\mathfrak{q}^{e(\mathfrak{q})}),\qquad\text{and}\qquad T_0(y) = \{y^{w-t_L}\}T(y). \end{equation}
\begin{definition} A cuspform $\mathsf{f}\in S_{k,w}(K;\mathbb{C})$ is said to be an eigenform if it is an eigenvector for all the Hecke operators $T_0(y)$, and it is normalized if $\mathsf{a}(1,\mathsf{f}) = 1$. \end{definition}
\noindent For finite ideles $b \in \mathbb{A}_{L,f}^\times$ there are other operators $V(b)$ on cuspforms defined by
\begin{equation}
(V(b)\mathsf{f})(x) = \mathrm{N}_{L/\mathbb{Q}}(b\cal{O}_L)^{-1}\mathsf{f}\left(x\begin{pmatrix} b^{-1} & 0 \\ 0 & 1 \end{pmatrix}\right).
\end{equation} These operators are right inverses of the $U$-operators
\begin{equation}
U(\varpi_\mathfrak{q})\circ V(\varpi_\mathfrak{q}) = 1.
\end{equation}
\subsection{Hida families}\label{sect Hida families} Let $O$ be a valuation ring in $\overline{\mathbb{Q}}_p$ finite flat over $\mathbb{Z}_p$ and containing $\iota_p(\cal{V})$. For $\mathfrak{N}$ an $\cal{O}_L$-ideal prime to $p$ and compact open subgroups satisfying $V_1(\frak{N})\le K\le V_0(\mathfrak{N})$ we set $K(p^\alpha) = K \cap V(p^\alpha)$ and $K(p^\infty) = \cap_{\alpha\ge1} K(p^\alpha)$. The projective limit of $p$-adic Hecke algebras \[ \mathbf{h}_L(K;O):=\underset{\leftarrow,\alpha}{\lim}\ \mathsf{h}_{k,w}(K(p^\alpha);O) \qquad \text{acts on} \qquad \underset{\rightarrow,\alpha}{\lim}\ S_{k,w}(K(p^\alpha);O) \]
through the Hecke operators $\mathbf{T}(y) = \varprojlim_\alpha T(y) y_p^{w-t_L}$ and it is independent of the weight $(k,w)$. Since $\mathbf{h}_L(K;O)$ is a compact ring, it can be written as a direct sum of algebras \[ \mathbf{h}_L(K;O) = \mathbf{h}^{\text{n.o.}}_L(K;O) \oplus \mathbf{h}_L^{\text{ss}}(K;O) \] such that $\mathbf{T}(\varpi_p)$ is a unit in $\mathbf{h}_L^{\text{n.o.}}(K;O)$ and topologically nilpotent in $\mathbf{h}_L^{\text{ss}}(K;O)$. We denote by \[ e_{\text{n.o.}} = \underset{n \to \infty}{\lim} \mathbf{T}(\varpi_p)^{n!} \] the idempotent corresponding to the nearly ordinary part $\mathbf{h}^{\text{n.o.}}_L(K;O)$. For any $\alpha\ge1$ we set \begin{equation} Z^\alpha_L(K) := \mathbb{A}_L^\times/L^\times (\mathbb{A}_{L,f} \cap K(p^\alpha)) L_{\infty,+}^\times\quad\text{and}\quad \mathbb{G}_L^\alpha(K):= Z^\alpha_L(K)\times(\cal{O}_L/p^\alpha\cal{O}_L)^\times. \end{equation} If we denote the projective limits by \begin{equation} Z_L(K):= \underset{\leftarrow,\alpha}{\lim}\ Z^\alpha_L(K),\qquad \mathbb{G}_L(K) :=\underset{\leftarrow,\alpha}{\lim}\ \mathbb{G}^\alpha_L(K), \end{equation} then $\mathbb{G}_L(K) = Z_L(K)\times\cal{O}_{L,p}^\times$ and there is a group homomorphism \begin{equation} \mathbb{G}_L(K)\to \mathbf{h}_L(K;O)^\times,\qquad (z,a)\mapsto \langle z,a\rangle:=\langle z\rangle T(a^{-1},1) \end{equation} that endows $\mathbf{h}_L(K;O)$ with a structure of $O\llbracket\mathbb{G}_L(K)\rrbracket$-algebra.
\noindent Let $\mathrm{cl}_L^+(\mathfrak{N}p)$ be the strict ray class group of modulus $\mathfrak{N}p$ and $\overline{\cal{E}}^+_{\mathfrak{N}p}$ the closure in $\cal{O}_{L,p}^\times$ of the totally positive units of $\cal{O}_L$ congruent to $1$ $\pmod{\mathfrak{N}p}$. There is a short exact sequence \[ \xymatrix{ 1\ar[r] & \overline{\cal{E}}^+_{\mathfrak{N}p}\backslash\big(1+p\cal{O}_{L,p}\big) \ar[r]& Z_L(K) \ar[r] & \mathrm{cl}_L^+(\mathfrak{N}p) \ar[r] & 1, } \] that splits when $p$ is large enough because then the group $\mathrm{cl}_L^+(\mathfrak{N}p)$ has order prime to $p$. We denote the canonical decomposition by \begin{equation}\label{GaloisDecomposition}
Z_L(K) \overset{\sim}{\to} \overline{\cal{E}}^+_{\mathfrak{N}p}\backslash\big(1+p\cal{O}_{L,p}\big) \times \mathrm{cl}_L^+(\mathfrak{N}p),\qquad z\mapsto \big(\xi_z,\bar{z}\big). \end{equation} If we set $\boldsymbol{\mathfrak{I}}_L = \overline{\cal{E}}^+_{\mathfrak{N}p}\backslash\big(1+p\cal{O}_{L,p}\big) \times (1+p \cal{O}_{L,p})$, then the short exact sequence \[\xymatrix{ 1\ar[r]& \boldsymbol{\mathfrak{I}}_L \ar[r]& \mathbb{G}_L(K)\ar[r]& \mathrm{Cl}_L^+(\mathfrak{N}p)\times (\cal{O}_L/p)^\times\ar[r]& 1 }\]
splits canonically when $p$ is large enough. The group $\boldsymbol{\mathfrak{I}}_L$ is a finitely generated $\mathbb{Z}_p$-module of $\mathbb{Z}_p$-rank $[L:\mathbb{Q}]+1+\delta$, where $\delta$ is Leopoldt's defect for $L$. Let $\mathbf{W}$ be the torsion-free part of $\boldsymbol{\mathfrak{I}}_L$ and denote by $\boldsymbol{\Lambda}_L = O\llbracket \mathbf{W} \rrbracket$ the associated completed group ring.
\begin{theorem}{(\cite{nearlyHida}, Theorem 2.4)} The nearly ordinary Hecke algebra $\mathbf{h}_L^{\text{n.o.}}(K;O)$ is finite and torsion-free over $\boldsymbol{\Lambda}_L$. \end{theorem}
\noindent The completed group ring $O\llbracket\mathbb{G}_L(\mathfrak{N})\rrbracket$ naturally decomposes as the direct sum $\bigoplus_\chi \boldsymbol{\Lambda}_{L,\chi}$ ranging over all the characters of the torsion subgroup $\mathbb{G}_L(\mathfrak{N})_\mathrm{tor} = \mathfrak{I}_{L,\mathrm{tor}}\times \mathrm{cl}_L^+(\mathfrak{N}p)\times (\cal{O}_L/p)^\times$. It induces a decomposition of the nearly ordinary Hecke algebra $ \mathbf{h}_L^{\text{n.o.}}(K;O) = \bigoplus_{\chi}\mathbf{h}_L^{\text{n.o.}}(K;O)_\chi$.
\begin{definition}\label{def I-adic cuspforms}
Let $\chi: \mathbb{G}_L(K)_\mathrm{tor} \rightarrow O^\times$ be a character. For any $\boldsymbol{\Lambda}_{L,\chi}$-algebra $\mathbf{I}$, the space of nearly ordinary $\mathbf{I}$-adic cuspforms of tame level $K$ and character $\chi$ is
\[
\bar{\mathbf{S}}_L^\mathrm{n.o.}(K,\chi;\mathbf{I}) :=
\mathrm{Hom}_{\boldsymbol{\Lambda}_\chi\mbox{-}\mathrm{mod}} \big( \mathbf{h}_L^\mathrm{n.o.}(K(p^\infty);O)_\chi, \mathbf{I}\big).
\]
When an $\mathbf{I}$-adic cuspforms is also a $\boldsymbol{\Lambda}_\chi$-algebra homomorphism, we call it a Hida family. \end{definition}
Let $\psi: \mathrm{cl}_L^+(\mathfrak{N}p^\alpha) \rightarrow O^\times$, $\psi': (\cal{O}_L/p^\alpha)^\times \rightarrow O^\times$ be a pair of characters and $(k,w)$ a weight satisfying $k-2w = mt_L$. The group homomorphism \[ \mathbb{G}_L(K)\to O^\times,\qquad (z,a) \mapsto \psi(z)\psi'(a)\varepsilon_L(z)^ma^{t_L-w} \] determines a $O$-algebra homomorphism $\mathrm{P}_{k,w,\psi,\psi'}:O\llbracket\mathbb{G}_L(K) \rrbracket\rightarrow O$.
\begin{definition} For a $\boldsymbol{\Lambda}_{L,\chi}$-algebra $\mathbf{I}$ the set of \emph{arithmetic points}, denoted by $\cal{A}_\chi(\mathbf{I})$, is the subset of $\mathrm{Hom}_{O\mbox{-}\mathrm{alg}}(\mathbf{I},\overline{\mathbb{Q}}_p)$ consisting of homomorphisms that coincide with some $\mathrm{P}_{k,w,\psi,\psi'}$ when restricted to $\boldsymbol{\Lambda}_{L,\chi}$. \end{definition}
\begin{definition}
Let $(k,w)$ be a weight such that $k-2w=mt_L$. For any pair of characters
$\psi: \mathrm{cl}_L^+(\mathfrak{N}p^\alpha) \rightarrow O^\times$
and
$\psi': (\cal{O}_L/p^\alpha)^\times \rightarrow O^\times$,
one defines
\[
S_{k,w}(K(p^\alpha);\psi,\psi';O)\subseteq S_{k,w}(K(p^\alpha);O)
\]
to be the submodule of cuspforms satisfying
\[
\langle z,a \rangle \mathsf{f} = \varepsilon_L(z)^m\psi(z)\psi'(a)\cdot\mathsf{f} \qquad \forall (z,a) \in \mathbb{G}_L(K).
\]
\end{definition}
\subsection{Twists of cuspforms} First we recall Hida's twists of Hilbert cuspforms by Hecke characters (\cite{pHida}, Section 7F), then we define a twist of cuspforms by local characters and relate it to the Atkin-Lehner involution.
\noindent Let $\Psi:\mathbb{A}_L^\times/L^\times\to\mathbb{C}^\times$ be a Hecke character of conductor $C(\Psi)$ and infinity type $m\cdot t_L$, $m\in\mathbb{Z}$. Since $\Psi$ has algebraic values on finite ideles, Hida defined the map \[\begin{split} -\otimes\Psi: S_{k,w}\big(\mathfrak{N}p^\alpha,\psi,\psi';&O\big)\to S_{k,w+m\cdot t_L}\big(C(\Psi)\mathfrak{N}p^\alpha,\psi\Psi^2,\psi'\Psi_p^{-1};O\big)\\ &\mathsf{f}\mapsto\mathsf{f}\otimes\Psi \end{split}\] where the cuspform $\mathsf{f}\otimes\Psi$ has adelic Fourier coefficients given by \begin{equation} \mathsf{a}_p(y,\mathsf{f}\otimes\Psi)=\Psi(y^\infty)\mathsf{a}_p(y,\mathsf{f})y_p^{m\cdot t_L}. \end{equation} When $\Psi=\lvert-\rvert_{\mathbb{A}_L}^m$ is a integral power of the adelic norm character, one finds that
\[
\mathsf{f}\otimes\lvert-\rvert_{\mathbb{A}_L}^m\in S_{k,w+m\cdot t_L}\big(\mathfrak{N}p^\alpha,\psi,\psi';O\big)
\]
and
\[
\mathsf{a}_p(y,\mathsf{f}\otimes\lvert-\rvert_{\mathbb{A}_L}^m)=\varepsilon_F(y)^{-m}\mathsf{a}_p(y,\mathsf{f}).
\] Note that twisting does not affect the classical Fourier expansion on the identity component of the Hilbert modular surface.
\begin{lemma}\label{twist classical expansion}
Let $\mathsf{g}\in S_{k,w}\big(\mathfrak{N}p^\alpha,\psi,\psi';O\big)$ and $\mathsf{g}_1$ be the first component of the corresponding tuple of classical Hilbert modular forms. Then for any Hecke character $\Psi:\mathbb{A}_L^\times/L^\times\rightarrow \mathbb{C}^\times$, we have
\[
(\mathsf{g}\otimes \Psi)_1 = \Psi(\mathsf{d}_L) \mathsf{g}_1.
\] \end{lemma} \begin{proof}
If $\xi\in (\frak{d}_L)_+$, it follows directly from the definitions that
\[
\mathsf{a}_p(\xi \mathsf{d}_L,\mathsf{g}) = a(\xi,\mathsf{g}_1).
\]
Suppose $\Psi$ has infinity type $m\cdot t_L$, then one can compute that
\[\begin{split}
a(\xi,(\mathsf{g}\otimes\Psi)_1)= \mathsf{a}_p(\xi \mathsf{d}_L,\mathsf{g}\otimes \Psi)
&=
\mathsf{a}_p(\xi \mathsf{d}_L,\mathsf{g})\Psi((\xi \mathsf{d}_L)^\infty) (\xi\mathsf{d}_L)_p^{m\cdot t_L}\\
&=
\Psi(\mathsf{d}_L)\cdot\mathsf{a}_p(\xi \mathsf{d}_L,\mathsf{g})\\
&=
\Psi(\mathsf{d}_L)\cdot a(\xi ,\mathsf{g}_1).
\end{split}\] \end{proof}
\noindent Now let $\mathfrak{N}, \mathfrak{A}$ be integral $\cal{O}_L$-ideals prime to $p$ and let $\chi:\mathrm{Cl}_L^+(\mathfrak{A}p^\alpha)\to O^\times$ be a finite order Hecke character. Hida defined a function \[\begin{split} -_{\lvert\chi}: S_{k,w}\big(\mathfrak{N}p^\alpha,\psi,\psi';&O\big)\to S_{k,w}\big(\mathfrak{N}p^\alpha\mathfrak{A}^2,\psi\chi^2,\psi';O\big)\\
&\mathsf{f}\mapsto\mathsf{f}_{\lvert \chi} \end{split}\] where $\mathsf{f}_{\lvert \chi}$ is a cuspform whose adelic Fourier coefficients are given by \[ \mathsf{a}_p(y,\mathsf{f}_{\lvert \chi}) = \begin{cases}\chi(y)\mathsf{a}_p(y,\mathsf{f}) & \text{if}\ y_{\mathfrak{A}p}\in\cal{O}_{L,\mathfrak{A}p}^\times\\ 0&\text{otherwise}. \end{cases} \] We define a new kind of twist by slightly modifying Hida's work. Let $p>[L:\mathbb{Q}]$ be a rational prime, $\mathfrak{p}\mid p$ an $\cal{O}_L$-prime ideal and $\chi:\cal{O}_{L,\mathfrak{p}}^\times\to O^\times$ a finite order character of conductor $c(\chi)$. \begin{proposition}\label{TwistHMF}
There is a function
\[\begin{split}
-\star\chi: S_{k,w}\big(\mathfrak{N}p^\alpha,\psi,\psi';&O\big)\to S_{k,w}\big(\mathfrak{N}p^{\mathrm{max}\{\alpha,c(\chi)\}},\psi,\psi'\chi^{-1};O\big)\\
&\mathsf{f}\mapsto\mathsf{f}\star\chi
\end{split}\]
where $\mathsf{f}\star\chi$ has adelic Fourier coefficients given by
\[
\mathsf{a}_p(y,\mathsf{f}\star\chi)=\begin{cases}\chi(y_\mathfrak{p})\mathsf{a}_p(y,\mathsf{f})& \text{if}\ y_{\mathfrak{p}}\in\cal{O}_{F,\mathfrak{p}}^\times\\ 0&\text{otherwise}. \end{cases}
\] \end{proposition} \begin{proof}
Define
\[
\mathsf{h}(x):=\sum_{u\in\left(\cal{O}_{L}/\mathfrak{p}^{c(\chi)}\right)^\times}\chi^{-1}(u)\mathsf{f}\left(x\begin{pmatrix}
1& u\varpi_\mathfrak{p}^{-c(\chi)}\mathsf{d}_L\\
0&1
\end{pmatrix}\right).
\]
Looking at the adelic $q$-expansion we see that for all $y\in\widehat{\cal{O}_L}$
\[
\mathsf{a}_p(y,\mathsf{h})=\left(\sum_{u\in\left(\cal{O}_{L}/\mathfrak{p}^{c(\chi)}\right)^\times}\chi^{-1}(u)\chi_L\big(yu\varpi_\mathfrak{p}^{-c(\chi)}\big)\right)\mathsf{a}_p(y,\mathsf{f}).
\]
Then one notices that $\chi_L\big(yu\varpi_\mathfrak{p}^{-c(\chi)}\big)$ is a $p$-power root of unity of order $c(\chi)-\mathrm{val}_\frak{p}(y_\frak{p})$ so that
\[
\sum_{u\in\left(\cal{O}_{L}/\mathfrak{p}^{c(\chi)}\right)^\times}\chi^{-1}(u)\chi_L\big(yu\varpi_\mathfrak{p}^{-c(\chi)}\big)=\begin{cases}
\chi(y_\mathfrak{p})G(\chi)&\text{if}\ y_{\mathfrak{p}}\in\cal{O}_{L,\mathfrak{p}}^\times\\
0&\text{otherwise}
\end{cases}
\]
where $G(\chi)=\sum_{u}\chi^{-1}(u)\chi_L\big(u\varpi_\mathfrak{p}^{-c(\chi)}\big)$ is a Gauss sum. The cuspform
\[
\mathsf{f}\star\chi:=G(\chi)^{-1}\mathsf{h}
\]
has the claimed adelic $q$-expansion and another direct calculation shows that the operators $T(a^{-1},1)$'s acts on it by
\[
\mathsf{a}_p\big(y,(\mathsf{f}\star\chi)_{\lvert T(a^{-1},1)}\big)=\chi^{-1}(a_\mathfrak{p})\psi'(a)\mathsf{a}_p(y,\mathsf{f}\star\chi).
\] \end{proof}
\noindent Let $\mathsf{f}\in S_{k,w}(\mathfrak{N}p^\alpha,\psi,\psi';O)$ be an eigencuspform. For $\mathfrak{p}$ a prime $\cal{O}_L$-ideal, let $\tau_{\mathfrak{p}^\alpha}\in\mathrm{GL}_2(\mathbb{A}_L)$ be defined by \[ (\tau_{\mathfrak{p}^\alpha})_\mathfrak{p}=\begin{pmatrix}
0&-1\\
\varpi_\mathfrak{p}^\alpha&0 \end{pmatrix},\qquad (\tau_{\mathfrak{p}^\alpha})_v=\mathbbm{1}_2\qquad \text{for}\qquad v\not=\mathfrak{p}. \] We have the following operator \[ \mathsf{f}\lvert\tau_{\mathfrak{p}^\alpha}^{-1}(x):=\mathsf{f}(x\tau_{\mathfrak{p}^\alpha}^{-1}). \] Moreover, for any prime ideal $\frak{p}\mid p$, the $\frak{p}$-depletion of a cuspform $\mathsf{f}$ is the cuspform \[ \mathsf{f}^{[\frak{p}]}=\left(1-V(\varpi_\frak{p})\circ U(\varpi_\frak{p})\right)\mathsf{f} \]
whose Fourier coefficient $\mathsf{a}_p(y,\mathsf{f}^{[\frak{p}]})$ equals $\mathsf{a}_p(y,\mathsf{f})$ if $y_\frak{p}\in \cal{O}_{F,\frak{p}}^\times$ and $0$ otherwise. \begin{lemma}\label{Atkin-Lehner}
Let $\mathsf{f}\in S_{2t_L,t_L}(\mathfrak{N}p^\alpha;\psi,\psi';O)$ be an eigenform, then
\[
\mathsf{a}_p(\varpi_\mathfrak{p},\mathsf{f})^\alpha \cdot
\Big(\mathsf{f}\lvert\tau^{-1}_{\mathfrak{p}^\alpha}\Big)^{\mbox{\tiny $[\mathfrak{p}]$}}
=
G(\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2})\cdot\Big(\mathsf{f}\star\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2}\Big)
\]
the equality taking place in $ S_{2t_L,t_L}\big(\mathfrak{N}p^\alpha;\psi,\psi'\cdot\psi_\mathfrak{p}^{-1}(\psi_\mathfrak{p}')^{-2};O\big)$. \end{lemma} \begin{proof}
For $u,v\in\cal{O}^\times_{L,\mathfrak{p}}$ such that $uv\equiv-1 \pmod{\mathfrak{p}^\alpha}$ we have the following identity in $\mathrm{GL}_2(L_\frak{p})$
\begin{equation}\label{matrix identity}
\begin{pmatrix}
0&-1\\
\varpi_\mathfrak{p}^\alpha&0
\end{pmatrix}^{-1}
\begin{pmatrix}
\varpi_\mathfrak{p}^\alpha&u\\
0&1
\end{pmatrix}
=
\begin{pmatrix}
1&v\varpi_\mathfrak{p}^{-\alpha}\\
0&1
\end{pmatrix}
\begin{pmatrix}
v & (1+uv)\varpi_\mathfrak{p}^{-\alpha}\\
-\varpi_\mathfrak{p}^\alpha & -u
\end{pmatrix}.
\end{equation}
Let $\delta_{v,\alpha}=\begin{pmatrix}
1&v\varpi_\mathfrak{p}^{-\alpha}\mathsf{d}_L\\
0&1
\end{pmatrix}\in G(\mathbb{A})$, denote by
$\gamma_{u,\alpha}\in G(\mathbb{A})$ the matrix satisfying
\[
(\gamma_{u,\alpha})_\frak{p}=\begin{pmatrix}
\varpi_\mathfrak{p}^\alpha&u\\
0&1
\end{pmatrix},
\qquad (\gamma_{u,\alpha})_v=\mathbbm{1}_2\qquad \text{for}\qquad v\not=\frak{p}
\]
and by
$\beta_{u,v,\alpha}\in G(\mathbb{A})$ the matrix satisfying
\[
(\beta_{u,v,\alpha})_\frak{p}=\begin{pmatrix}
v & (1+uv)\varpi_\mathfrak{p}^{-\alpha}\\
-\varpi_\mathfrak{p}^\alpha & -u
\end{pmatrix},
\qquad (\beta_{u,v,\alpha})_v=\begin{pmatrix}
1 & -(\mathsf{d}_L)_v\\
0 & 1
\end{pmatrix}\qquad \text{for}\qquad v\not=\frak{p}.
\]
Then equation (\ref{matrix identity}) implies that
\[
(\tau_{\frak{p}^\alpha})^{-1}\cdot\gamma_{u,\alpha}=\delta_{v,\alpha}\cdot\beta_{u,v,\alpha}
\]
Right translation by $\beta_{u,v,\alpha}$ on a Hilbert cuspform
corresponds to the action of the element $\langle v^{-1}, v^{-2}\rangle$ in $\mathbb{G}_L(K)$ and we can write
\begin{equation}\label{AL equation}
\mathsf{f}{\lvert\gamma_{u,\alpha}\lvert \tau_{\mathfrak{p}^\alpha}^{-1}}
=
(\langle v^{-1}, v^{-2}\rangle\mathsf{f}){\lvert\delta_{v,\alpha}}=
\psi(v^{-1})\psi'(v^{-2})\cdot\mathsf{f}{\lvert\delta_{v,\alpha}}.
\end{equation}
On one hand, summing the right hand side of (\ref{AL equation}) over $v\in(\cal{O}_{L}/\mathfrak{p}^\alpha)^\times$ gives
\[
\sum_{v\in(\cal{O}_{L}/\mathfrak{p}^\alpha)^\times}
(\psi(\psi')^{2})^{-1}(v) \mathsf{f}{\lvert\delta_{v,\alpha}}
=
G(\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2})\cdot\big(\mathsf{f}\star\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2}\big).
\]
On the other hand, summing the left hand side of (\ref{AL equation}) over $u\in(\cal{O}_{L}/\mathfrak{p}^\alpha)^\times$ gives
\[\begin{split}
\sum_{u\in(\cal{O}_L/\mathfrak{p}^\alpha)^\times}
\mathsf{f}{\lvert\gamma_{u,\alpha}}{\lvert \tau_{\mathfrak{p}^\alpha}^{-1}}
&=
\Big(\sum_{u\in(\cal{O}_{L}/\mathfrak{p}^\alpha)}
\mathsf{f}{\lvert\gamma_{u,\alpha}}\Big){\lvert \tau_{\mathfrak{p}^\alpha}^{-1}}
-
\Big(\sum_{u'\in(\mathfrak{p}\cal{O}_{L}/\mathfrak{p}^{\alpha})}
\mathsf{f}{\lvert\gamma_{u',\alpha-1}}
\lvert \tau_{\mathfrak{p}^\alpha}^{-1}
\Big)\lvert\mbox{\tiny $\begin{pmatrix}
1&\\
&\varpi_\mathfrak{p}
\end{pmatrix}$} \\
&=
\Big(U(\varpi_{\mathfrak{p}})^\alpha\mathsf{f} \Big){\lvert\tau_{\mathfrak{p}^\alpha}^{-1}}
-
\Big(U(\varpi_{\mathfrak{p}})^{\alpha-1}\mathsf{f}\Big)\lvert \tau_{\mathfrak{p}^\alpha}^{-1}\lvert \mbox{\tiny $\begin{pmatrix}
1&\\
&\varpi_\mathfrak{p}
\end{pmatrix}$},
\end{split}\]
where in the first equality we used the following identity for
$u\in\mathfrak{p}(\cal{O}_L/\mathfrak{p}^\alpha)$:
\[\begin{pmatrix}
0&-1\\
\varpi_\mathfrak{p}^\alpha&0
\end{pmatrix}^{-1}
\begin{pmatrix}
\varpi_\mathfrak{p}^\alpha&u\\
0&1
\end{pmatrix}=\begin{pmatrix}
1&0\\
0&\varpi_{\mathfrak{p}}
\end{pmatrix}\begin{pmatrix}
0&-1\\
\varpi_\mathfrak{p}^\alpha&0
\end{pmatrix}^{-1}
\begin{pmatrix}
\varpi_\mathfrak{p}^{\alpha-1}&u/\varpi_\mathfrak{p}\\
0&1
\end{pmatrix}.\]
Finally, taking the $\mathfrak{p}$-depletion of both expressions we obtain
\[
\mathsf{a}_p(\varpi_\mathfrak{p},\mathsf{f})^\alpha \cdot
\Big(\mathsf{f}\lvert\tau^{-1}_{\mathfrak{p}^\alpha}\Big)^{\mbox{\tiny $[\mathfrak{p}]$}}
=
G(\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2})\cdot\Big(\mathsf{f}\star\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2}\Big)
\]
because
\[
\left[\big(U_{\mathfrak{p}}^{\alpha-1}\mathsf{f}\big)\lvert \tau_{\mathfrak{p}^\alpha}^{-1}\lvert \mbox{\tiny $\begin{pmatrix}
1&\\
&\varpi_\mathfrak{p}
\end{pmatrix}$}\right]^{\mbox{\tiny $[\mathfrak{p}]$}}
= 0
\qquad \text{and}\qquad \Big(\mathsf{f}\star\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2}\Big)^{\mbox{\tiny $[\mathfrak{p}]$}}=\mathsf{f}\star\psi_\mathfrak{p}(\psi'_\mathfrak{p})^{2}.
\] \end{proof}
\section{Automorphic $p$-adic $L$-functions} Let $L/\mathbb{Q}$ be a real quadratic extension, $N$ a positive integer and $\frak{Q}$ an $\cal{O}_L$-ideal. We consider a primitive Hilbert cuspform $\mathsf{g}_\circ\in S_{t_L,t_L}(\mathfrak{Q};\chi_\circ; \overline{\mathbb{Q}})$ over $L$ with associated Artin representation $\varrho:\Gamma_L\to\mathrm{GL}_2(\mathbb{C})$, and a primitive elliptic cuspform $\mathsf{f}_\circ\in S_{2,1}(N; \psi_\circ; \overline{\mathbb{Q}})$ such that the characters $\chi_\circ:\mathrm{Cl}^+_L(\mathfrak{Q})\to O^\times$, $\psi_\circ:\mathrm{Cl}^+_\mathbb{Q}(N)\to O^\times$ satisfy \begin{equation}\label{assumption characters} \chi_{\circ\lvert\mathbb{Q}}\cdot\psi_\circ\equiv 1. \end{equation} Denote by $\Pi=\pi_{\mathsf{g}_\circ}^u\otimes\sigma_{\mathsf{f}_\circ}^u$ the unitary cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A}_{L\times\mathbb{Q}})$ associated to the two cuspforms. The restriction to the ideles $\mathbb{A}_\mathbb{Q}^\times$ of its central character $\omega_\Pi$ is trivial under hypothesis ($\ref{assumption characters}$). Therefore the twisted triple product $L$-function $L(s,\Pi,\mathrm{r})$ admits meromorphic continuation to $\mathbb{C}$, functional equation $L(s,\Pi,\mathrm{r})=\epsilon(s,\Pi,\mathrm{r})L(1-s,\Pi,\mathrm{r})$ and it is holomorphic at the center $s=1/2$ (\cite{BlancoFornea}, Section 3.1). By a direct inspection of Euler products (\cite{MicAnalytic}, Section 5) one deduces that the twisted triple product $L$-function does not vanish at its center if and only if the $L$-function $L\big(\mathsf{f}_\circ,\mathrm{As}(\varrho),s\big)$ associated to the Galois representation $\mathrm{As}(\varrho)\otimes\mathrm{V}_{\mathsf{f}_\circ}$ -- where $\mathrm{V}_{\mathsf{f}_\circ}$ is a $\mathsf{f}_\circ$-isotypic quotient of the \'etale cohomology of a modular curve -- does not vanish at its center: \[ L\Big(\frac{1}{2},\Pi,\mathrm{r}\Big)\not=0\qquad \iff\qquad L\Big(\mathsf{f}_\circ,\mathrm{As}(\varrho),1\Big)\not=0. \] We let $\mu:\mathbb{A}_\mathbb{Q}^\times\to\mathbb{C}^\times$ denote the quadratic character attached to $L/\mathbb{Q}$ by class field theory and assume from now on that \begin{equation}\label{epsilonfactors} \epsilon_\ell\left(\frac{1}{2},\Pi_\ell,\mathrm{r}\right)\cdot\mu_\ell(-1)=+1\qquad\qquad\forall\ \text{prime}\ \ell. \end{equation} \begin{remark} The assumption on local $\epsilon$-factors can be satisfied by requiring $N$ split in $L$ and coprime to $\frak{Q}$ (\cite{epsilonprasad}, Theorems $\text{B},\text{D}$ and Remark 4.1.1). \end{remark}
\begin{theorem}\label{Ichino}
The central $L$-value $L\big(\mathsf{f}_\circ,\mathrm{As}(\varrho),1\big)$ does not vanish if and only if there exists an integer $M$ supported on the prime divisors of $N\cdot\mathrm{N}_{L/\mathbb{Q}}(\frak{Q})\cdot d_{L/\mathbb{Q}}$, and an eigenform for the good Hecke operators $\breve{\mathsf{g}}_\circ\in S_{t_L,t_L}(M\cal{O}_L;O)[\mathsf{g}_\circ]$ such that the Petersson inner product
\[
\Big\langle\zeta^*(\breve{\mathsf{g}}_\circ)\otimes\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}^{-1},\ \mathsf{f}_\circ^*\Big\rangle\not=0
\]
does not vanish. Here $\mathsf{f}_\circ^*\in S_{2,1}(N;\psi_\circ^{-1};\overline{\mathbb{Q}})$ denotes the elliptic eigenform whose Hecke eigenvalues are the complex conjugates of those of $\mathsf{f}_\circ$. \end{theorem} \begin{proof}
This theorem is a refinement of a special case of (\cite{BlancoFornea}, Theorem 3.2 $\&$ Lemma 3.4). One can justify the use of $\mathsf{f}_\circ^*$ instead of some eigenform for the good Hecke operators $\breve{\mathsf{f}}_\circ^*\in S_{2,1}(M;O)[\mathsf{f}_\circ^*]$ by looking at the expression of Ichino's local functionals in terms of matrix coefficients. \end{proof}
\subsection{Construction} Let $p$ be a rational prime split in $L$, coprime to the levels $\mathfrak{Q},N$ and such that $\mathsf{g}_\circ$, $\mathsf{f}_\circ$ are $p$-ordinary.
\begin{definition}
Let $\Gamma$ denote the $p$-adic group $1+p\mathbb{Z}_p$ and
$\boldsymbol{\Lambda} = O\llbracket \Gamma \rrbracket$ its completed group ring. We consider the weight space
$\boldsymbol{\cal{W}}=\mathrm{Spf}(\boldsymbol{\Lambda})^\mathrm{rig}$ whose arithmetic points correspond to continuous homomorphisms of the form
\[
\mathsf{w}_{\ell,\chi_{\mbox{\tiny $\spadesuit$}}}:\boldsymbol{\Lambda} \longrightarrow \overline{\mathbb{Q}}_p ,\qquad [u]\mapsto\chi_{\mbox{\tiny $\spadesuit$}}(u)u^{\ell-2}
\]
for $\ell\in\mathbb{Z}_{\ge1}$ and $\chi_{\mbox{\tiny $\spadesuit$}}:\Gamma\to\bar{\mathbb{Q}}_p^\times$ a finite order character. \end{definition} \noindent Given the character \begin{equation} \boldsymbol{\chi}:\mathrm{Cl}_L^+(\mathfrak{Q}p)\longrightarrow O^\times,\qquad z\mapsto \chi_\circ(z)\theta_L^{-1}(\bar{z}) \end{equation} one can define the surjection \begin{equation} \phi_{\boldsymbol{\chi}}:O\llbracket\mathbb{G}_L(\mathfrak{Q})\rrbracket\twoheadrightarrow \boldsymbol{\Lambda},\qquad [(z,a)]\mapsto\boldsymbol{\chi}(z) [\xi_z^{-t_L}]. \end{equation}
\begin{remark} Any arithmetic point $\mathrm{P}:O\llbracket\mathbb{G}_L(\mathfrak{Q})\rrbracket\to\overline{\mathbb{Q}}_p$ of weight $(\ell t_L,t_L)$ and character $(\chi_\circ\theta_L^{1-\ell}\chi^{-1},\mathbbm{1})$, for $\chi:Z_L(\mathfrak{Q})\to O^\times$ a $p$-power order character factoring through the norm, $\chi=\chi_{\mbox{\tiny $\spadesuit$}}\circ\mathrm{N}_{L/\mathbb{Q}}$, factors through $\phi_{\boldsymbol{\chi}}$: \begin{equation} \mathrm{P}_{\ell t_L,t_L,\chi_\circ\theta_L^{1-\ell}\chi^{-1},\mathbbm{1}}=\mathsf{w}_{\ell,\chi_{\mbox{\tiny $\spadesuit$}}}\circ\phi_{\boldsymbol{\chi}} \end{equation} Furthermore, any arithmetic point $\mathrm{P}:O\llbracket\mathbb{G}_L(\mathfrak{Q})\rrbracket\to\overline{\mathbb{Q}}_p$ factoring through $\phi_{\boldsymbol{\chi}}$ has that form. \end{remark}
\noindent Fix $\mathsf{g}_\circ^{\mbox{\tiny $(p)$}}$ an ordinary $p$-stabilization of $\mathsf{g}_\circ$ and let $\mathscr{G}_\mathrm{n.o.}\in \overline{\mathbf{S}}_L^\mathrm{n.o.}(\mathfrak{Q};\boldsymbol{\chi};\mathbf{I}_{\mathscr{G}_\mathrm{n.o.}})$ be the nearly ordinary Hida family passing through it, that is, there exists an arithmetic point $\mathrm{P}_\circ\in\cal{A}(\mathbf{I}_{\mathscr{G}_{\mathrm{n.o.}}})$ of weight $(t_L,t_L)$ and character $(\chi_\circ,\mathbbm{1})$ such that $\mathscr{G}_\mathrm{n.o.}(\mathrm{P}_\circ)=\mathsf{g}_\circ^{\mbox{\tiny $(p)$}}$. If we set
\begin{equation}\label{OrdinaryFamily}
\mathbf{I}_{\mathscr{G}}:=\mathbf{I}_{\mathscr{G}_\mathrm{n.o.}}\otimes_{\phi_{\boldsymbol{\chi}}} \boldsymbol{\Lambda}
\qquad\text{and}\qquad \mathscr{G}:=(1\otimes\phi_{\boldsymbol{\chi}})\circ\mathscr{G}_\mathrm{n.o.},
\end{equation} then $\mathscr{G}\in \overline{\mathbf{S}}_L^\mathrm{ord}(\mathfrak{Q};\boldsymbol{\chi};\mathbf{I}_{\mathscr{G}})$ is the ordinary Hida family passing through $\mathsf{g}_\circ^{\mbox{\tiny $(p)$}}$ (\cite{Wiles-rep}, Theorem 3).
\subsubsection{The $\Lambda$-adic cuspform.} Write $\cal{P}$ for the set of prime $\cal{O}_L$-ideals dividing $p$. The choice of ordinary $p$-stabilization of $\mathsf{g}_\circ$ determines an ordinary $p$-stabilization $\breve{\mathsf{g}}_\circ^{\mbox{\tiny $(p)$}}$ of any cuspform $\breve{\mathsf{g}}_\circ$ arising from Theorem $\ref{Ichino}$. Let $\breve{\mathscr{G}}$ be the $\mathbf{I}_{\mathscr{G}}$-adic cuspform passing through $\breve{\mathsf{g}}_\circ^{\mbox{\tiny $(p)$}}$ defined as in (\cite{DR} Section 2.6). For a choice a prime $\frak{p}\in\cal{P}$ we define a homomorphism of $\boldsymbol{\Lambda}_{L,\boldsymbol{\chi}}$-modules $d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}:\mathbf{h}_L(M\cal{O}_L;O)\longrightarrow\mathbf{I}_{\mathscr{G}}$ by \[ d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\left(\langle z\rangle\mathbf{T}(y)\right) = \begin{cases} \breve{\mathscr{G}}\left(\langle z\rangle\mathbf{T}(y)\right) \phi_{\boldsymbol{\chi}}\big([ y_\mathfrak{p}, 1]\big) y_\mathfrak{p}^{-1} \qquad &\text{if}\ y_p\in\cal{O}_{L,p}^\times \\ 0\qquad&\text{otherwise}. \end{cases} \]
Using diagonal restriction (\cite{BlancoFornea}, Section 2.3) \[ \zeta: \mathbf{h}_\mathbb{Q}(M;O)\to\mathbf{h}_L(M\cal{O}_L;O),\qquad \zeta([z,a])=[\Delta(z),\Delta(a)]a^{-1}, \] we define the ordinary $\mathbf{I}_{\mathscr{G}}$-adic cuspform \begin{equation} e_\mathrm{ord}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger\in \overline{\mathbf{S}}_\mathbb{Q}^\mathrm{ord}\big(M;\psi_\circ^{-1};\mathbf{I}_{\mathscr{G}}\big) \end{equation} by setting \begin{equation}\label{our rule} \zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^{\dagger}\big(\langle z\rangle\mathbf{T}(y)\big)=\theta_\mathbb{Q}(\bar{y})\cdot\phi_{\boldsymbol{\chi}}\big([\Delta(\xi_z)^{-1}\Delta(\xi_y)^{-1/2},1]\big)\cdot d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\Big(\zeta\left[\langle z\rangle\mathbf{T}(y)\right]\Big), \end{equation} where $\Delta:1+p\mathbb{Z}_p\hookrightarrow1+p\cal{O}_{L,p}$ is the diagonal embedding. \begin{remark} By Definition \ref{def I-adic cuspforms}, proving that $e_\mathrm{ord}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger$ is an ordinary $\mathbf{I}_\mathscr{G}$-adic cuspform amounts to showing that \eqref{our rule} determines a $\boldsymbol{\Lambda}_{\psi_\circ^{-1}}$-linear homomorphism. Then, Proposition \ref{specializations} justifies our claim. \end{remark} \noindent Recall that for each $\mu\in \mathrm{I}_L$ there is a differential operator on $p$-adic cuspforms given on adelic $q$-expansions by $\mathsf{a}_p(y,d_\mu\mathsf{g})=y_p^\mu\mathsf{a}_p(y,\mathsf{g})$. (\cite{pHida}, Section $6\text{G}$). \begin{proposition}\label{specializations}
Let $\mathrm{P}\in \cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ be an arithmetic point of weight $(\ell t_L,t_L)$ and character $(\chi_\circ\theta_L^{1-\ell}\chi^{-1},\mathbbm{1})$. If we define the local character $\chi_\mathfrak{p}\theta^{\ell-1}_{L,\mathfrak{p}}:\ \cal{O}_{L,\mathfrak{p}}^\times\longrightarrow O^\times$ by
$x\mapsto \chi\theta^{\ell-1}_{L}(x)$ and $\mu\in\mathrm{I}_L$ is the embedding inducing $\frak{p}$ then
\[
e_\mathrm{ord}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})=e_\mathrm{ord}\zeta^*\Big[d_\mu^{1-\ell}\big(\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P}\star\chi^{-1}_\mathfrak{p}\theta_{L,\mathfrak{p}}^{1-\ell}\big)\Big]\otimes\theta_\mathbb{Q}^{\ell-1}\chi_{\mbox{\tiny $\spadesuit$}}\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}^{\ell-2}
\]
is a classical cuspform of weight $(2,1)$ and character $(\psi_\circ^{-1},\mathbbm{1})$. \end{proposition} \begin{proof}
The direct computation
\[\begin{split}
\mathrm{P}\circ e_\mathrm{ord}\zeta^*&\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^{\dagger}([z,a])=\theta_\mathbb{Q}(a^{-1})\cdot\mathrm{P}\circ\phi_{\boldsymbol{\chi}}([\Delta(\xi_z)^{-1}\Delta(\xi_a)^{1/2},1])\cdot\mathrm{P}\circ d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\Big([\Delta(z),\Delta(a)]a^{-1}\Big)\\
&=\theta_{\mathbb{Q}}^{-1}(a)\cdot \chi^{2}_{\mbox{\tiny $\spadesuit$}}(z)\eta_\mathbb{Q}^{2(2-\ell)}(z)\chi^{-1}_{\mbox{\tiny $\spadesuit$}}(a)\eta_\mathbb{Q}^{\ell-2}(a)\cdot \mathrm{P}\circ\phi_{\boldsymbol{\chi}}([\Delta(z)\Delta(a)_\mathfrak{p}^{-1},\Delta(a)]) \\ &=\theta_{\mathbb{Q}}^{-1}(a)\cdot \chi^{2}_{\mbox{\tiny $\spadesuit$}}(z)\eta_\mathbb{Q}^{2(2-\ell)}(z)\chi^{-1}_{\mbox{\tiny $\spadesuit$}}(a)\eta_\mathbb{Q}^{\ell-2}(a)\cdot (\chi_\circ\theta_L^{1-\ell}\chi^{-1})(\Delta(z)\Delta(a)_\mathfrak{p}^{-1})\varepsilon_L^{\ell-2}(\Delta(z)\Delta(a)_\mathfrak{p}^{-1})\\
&=\psi^{-1}_\circ(z)
\end{split}\]
shows that $e_\mathrm{ord}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})$ is a cuspform of weight $(2,1)$ and character $(\psi_\circ^{-1},\mathbbm{1})$. Then we compute
\[
\mathsf{a}_p(y,d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}(\mathrm{P}))= y_\mathfrak{p}^{1-\ell}\mathsf{a}_p(y,\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P})\cdot\chi^{-1}(y_\mathfrak{p})\theta_L^{1-\ell}(y_\mathfrak{p})
\] which implies
\[
\zeta^*d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}(\mathrm{P})=\zeta^*\Big[d_\mu^{1-\ell}\big(\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P}\star\chi^{-1}_\mathfrak{p}\theta_{L,\mathfrak{p}}^{1-\ell}\big)\Big].
\]
Finally,
\[\begin{split}
\mathsf{a}_p\Big(y,\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})\Big)&=\theta_{\mathbb{Q}}(y)\cdot \chi_{\mbox{\tiny $\spadesuit$}}(y)\eta_\mathbb{Q}^{2-\ell}(y)\cdot \mathsf{a}_p\Big(y, \zeta^*d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}(\mathrm{P})\Big)\\
&= \theta^{\ell-1}_{\mathbb{Q}}(y) \chi_{\mbox{\tiny $\spadesuit$}}(y)\varepsilon_\mathbb{Q}^{2-\ell}(y)\cdot \mathsf{a}_p\Big(y, \zeta^*d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}(\mathrm{P})\Big)
\end{split}\]
proves the last claim
\[
e_\mathrm{ord}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})=e_\mathrm{ord}\zeta^*\Big[d_\mu^{1-\ell}\big(\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P}\star\chi^{-1}_\mathfrak{p}\theta_{L,\mathfrak{p}}^{1-\ell}\big)\Big]\otimes\theta_\mathbb{Q}^{\ell-1}\chi_{\mbox{\tiny $\spadesuit$}}\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}^{\ell-2}.
\] \end{proof}
\begin{corollary}\label{diagonal restriction family}
We have
\[
e_{\mathrm{ord}}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger\in S^\mathrm{ord}_{2,1}\big(Mp;\psi^{-1}_\circ;O\big)\otimes_O\mathbf{I}_{\mathscr{G}}.
\] \end{corollary} \begin{proof} By Proposition $\ref{specializations}$ \[ e_{\mathrm{ord}}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})\in S^\mathrm{ord}_{2,1}\big(Mp;\psi^{-1}_\circ;O\big) \] for any arithmetic crystalline point $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight $(\ell t_L,t_L)$ and character $(\chi_\circ\theta_L^{1-\ell},\mathbbm{1})$. By the density of such crystalline points, the homomorphism $e_{\mathrm{ord}}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger$ factors through the reduction to weight $(2,1)$, level $Mp$ and character $\psi^{-1}_\circ$ of $\mathbf{h}_\mathbb{Q}^\mathrm{ord}(M;O)$. \end{proof}
\subsubsection{The automorphic $p$-adic $L$-function.} Let $\mathsf{f}_\circ^*\in S_{2,1}(N,\psi_\circ^{-1};\overline{\mathbb{Q}})$ be the elliptic eigenform whose Hecke eigenvalues are the complex conjugates of those of $\mathsf{f}_\circ$, then we can write \[ e_{\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}}e_{\mathrm{ord}}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger=\sum_{d\mid{(M/N)}}\boldsymbol{\lambda}_d\cdot \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}(q^d)\qquad\text{for}\quad\{\boldsymbol{\lambda}_d\}_d\subset\mathbf{I}_\mathscr{G}, \] and define \[ \mathbf{L}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)=\sum_{d\mid{(M/N)}}\boldsymbol{\lambda}_d\cdot \frac{\big\langle\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}(q^d),\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}(q)\big\rangle}{\big\langle\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}},\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\big\rangle}\in\mathbf{I}_\mathscr{G}. \]
\begin{definition}\label{autpadicLfun} Set $\boldsymbol{\cal{W}}_\mathscr{G}=\mathrm{Spf}(\mathbf{I}_\mathscr{G})^\mathrm{rig}$, then the automorphic $p$-adic $L$-function attached to $\big(\breve{\mathscr{G}}, \mathsf{f}_\circ\big)$ is the rigid-analytic function \[ \mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ):\boldsymbol{\cal{W}}_{\mathscr{G}}\longrightarrow\mathbb{C}_p \] determined by $\mathbf{L}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)\in\mathbf{I}_\mathscr{G}$. \end{definition} \noindent For any arithmetic point $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ of weight $(\ell t_L,t_L)$ and character $(\chi_\circ\theta_L^{1-\ell}\chi^{-1},\mathbbm{1})$ we have \[ \mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) =\frac{\Big\langle e_\mathrm{ord}\zeta^*\Big[d_\mu^{1-\ell}\big(\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P}\star\chi^{-1}_\mathfrak{p}\theta_{L,\mathfrak{p}}^{1-\ell}\big)\Big]\otimes\theta_\mathbb{Q}^{\ell-1}\chi_{\mbox{\tiny $\spadesuit$}}\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}^{\ell-2},\ \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}{\Big\langle \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}, \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}. \]
\begin{remark} A different choice of prime $\frak{p}$ above $p$ changes the automorphic $p$-adic $L$-function $\mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)$ by a sign. \end{remark}
\subsection{Weight one specialization}
Consider the $T_0(\varpi_p)$-Hecke polynomial of $\mathsf{f}_\circ^*\in S_{2,1}(N;\psi_\circ^{-1};\overline{\mathbb{Q}})$
\[ 1-\mathsf{a}_p(\varpi_p,\mathsf{f}_\circ^*)X+\psi^{-1}_\circ(p)pX^2=(1-\alpha_{\mathsf{f}_\circ^*}X)(1-\beta_{\mathsf{f}_\circ^*}X)
\]
and suppose $\alpha_{\mathsf{f}_\circ^*}$ denotes the inverse of the root which is a $p$-adic unit. By defining \[ \mathsf{E}(\mathsf{f}_\circ^*):=(1-\beta_{\mathsf{f}_\circ^*}\alpha_{\mathsf{f}_\circ^*}^{-1}) \] we can rewrite the values of the $p$-adic $L$-function at every arithmetic point $\mathrm{P}\in\boldsymbol{\cal{W}}_\mathscr{G}$ as
\begin{equation}\label{eq: second expression p-adic L-function}
\mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) =\frac{1}{\mathsf{E}(\mathsf{f}_\circ^*)}\frac{\left\langle e_\mathrm{ord}\zeta^*\Big[d_\mu^{1-\ell}\big(\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P}\star\chi^{-1}_\mathfrak{p}\theta_{L,\mathfrak{p}}^{1-\ell}\big)\Big]\otimes\theta_\mathbb{Q}^{\ell-1}\chi_{\mbox{\tiny $\spadesuit$}}\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}^{\ell-2},\ \mathsf{f}_\circ^{*}\right\rangle}{\big\langle \mathsf{f}_\circ^{*}, \mathsf{f}_\circ^{*}\big\rangle}. \end{equation} \begin{lemma}\label{specialvalue}
We have
\[
\mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\text{P}_\circ)
=
\frac{\cal{E}_p^\mathrm{sp}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)}
{\mathsf{E}(\mathsf{f}_\circ^*)\cal{E}_{1,p}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)}
\frac{\left\langle \zeta^*(\breve{\mathsf{g}}_\circ)\otimes\lvert-\rvert^{-1}_{\mathbb{A}_\mathbb{Q}}, \mathsf{f}_\circ^{*}\right\rangle}
{\big\langle \mathsf{f}_\circ^{*}, \mathsf{f}_\circ^{*}\big\rangle}
\]
for
\[
\cal{E}^\mathrm{sp}_{p}(\mathsf{g}_\circ,\mathsf{f}^*_\circ)=\underset{\bfcdot,\star\in\{\alpha,\beta\}}{\prod}\left(1-\bfcdot_1\star_2 \beta_{\mathsf{f}_\circ^*}^{-1}\right),\qquad\cal{E}_{1,p}(\mathsf{g}_\circ,\mathsf{f}^*_\circ)=1-\alpha_1\beta_1\alpha_2\beta_2(\beta_{\mathsf{f}^*_\circ})^{-2}.
\]
where $\alpha_i,\beta_i$ are the inverses of the roots of the $T(\mathfrak{p}_i)$-Hecke polynomial for $\mathsf{g}_\circ$, $i=1,2$. \end{lemma} \begin{proof}
The value of the $p$-adic $L$-function at $\mathrm{P}_\circ\in\boldsymbol{\cal{W}}_\mathscr{G}$ can be expressed as
\[
\mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}_\circ) =
\frac{1}{\mathsf{E}(\mathsf{f}_\circ^*)}\frac{\left\langle e_\mathrm{ord}\zeta^*\big(\breve{\mathsf{g}}^{\mbox{\tiny $[\cal{P}]$}}_\mathrm{P}\big),\ \mathsf{f}_\circ^{*}\otimes\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}\right\rangle}{\big\langle \mathsf{f}_\circ^{*}, \mathsf{f}_\circ^{*}\big\rangle}.
\]
The $T_0(\varpi_p)$-Hecke polynomial for the eigenform $\mathsf{f}_\circ^{*}\otimes\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}\in S_{2,2}(N;\psi_\circ^{-1};\overline{\mathbb{Q}})$ is
\[
1-\mathsf{a}_p(\varpi_p,\mathsf{f}_\circ^*)p^{-1}X+\psi_\circ^{-1}(p)p^{-1}X^2=(1-\alpha_{\mathsf{f}^*_\circ\otimes\lvert-\rvert}X)(1-\beta_{\mathsf{f}^*_\circ\otimes\lvert-\rvert}X).
\]
Therefore the inverse of the root which is a $p$-adic unit is
\[
\alpha_{\mathsf{f}^*_\circ\otimes\lvert-\rvert}=\beta_{\mathsf{f}^*_\circ}\cdot p^{-1}.
\]
The result follows applying (\cite{BlancoFornea}, Lemma 3.11):
\[\begin{split}
\mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}_\circ) &=
\frac{1}{\mathsf{E}(\mathsf{f}_\circ^*)}\frac{\underset{\bfcdot,\star\in\{\alpha,\beta\}}{\prod}\left(1-\bfcdot_1\star_2 \big(\alpha_{\mathsf{f}_\circ^*\otimes\lvert-\rvert}\cdot p\big)^{-1}\right)}{1-\alpha_1\beta_1\alpha_2\beta_2\big(\alpha_{\mathsf{f}^*_\circ\otimes\lvert-\rvert}\cdot p\big)^{-2}}
\frac{\Big\langle e_\mathrm{ord}\zeta^*\big(\breve{\mathsf{g}}_\mathrm{P}\big),\ \mathsf{f}_\circ^{*}\otimes\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}\Big\rangle}{\big\langle \mathsf{f}_\circ^{*}, \mathsf{f}_\circ^{*}\big\rangle}\\
&=
\frac{\cal{E}_p^\mathrm{sp}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)}
{\mathsf{E}(\mathsf{f}_\circ^*)\cal{E}_{1,p}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)} \frac{\Big\langle e_\mathrm{ord}\zeta^*\big(\breve{\mathsf{g}}_\mathrm{P}\big),\ \mathsf{f}_\circ^{*}\otimes\lvert-\rvert_{\mathbb{A}_\mathbb{Q}}\Big\rangle}{\big\langle \mathsf{f}_\circ^{*}, \mathsf{f}_\circ^{*}\big\rangle}\\
&=
\frac{\cal{E}_p^\mathrm{sp}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)}
{\mathsf{E}(\mathsf{f}_\circ^*)\cal{E}_{1,p}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)}
\frac{\left\langle \zeta^*(\breve{\mathsf{g}}_\circ)\otimes\lvert-\rvert^{-1}_{\mathbb{A}_\mathbb{Q}}, \mathsf{f}_\circ^{*}\right\rangle}
{\big\langle \mathsf{f}_\circ^{*}, \mathsf{f}_\circ^{*}\big\rangle}.
\end{split}\] \end{proof}
\begin{corollary}\label{firststep}
\[
L\Big(\mathsf{f}_\circ,\mathrm{As}(\varrho),1\Big)\not=0\qquad\iff\qquad\mathscr{L}^\mathrm{aut}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\text{P}_\circ)\not=0.
\] \end{corollary} \begin{proof}
It follows from Theorem \ref{Ichino} and Lemma $\ref{specialvalue}$. The $p$-adic valuations of $\mathsf{E}(\mathsf{f}_\circ^*)$, $\cal{E}_{1,p}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)$, and $\cal{E}_p^\mathrm{sp}(\mathsf{g}_\circ,\mathsf{f}_\circ^*)$ show they are not zero. \end{proof}
\section{Review of Hilbert modular varieties}\label{review HMV}
Recall the algebraic groups \[ D = \mathrm{Res}_{L/\mathbb{Q}}\big(\mathbb{G}_{m,L}\big) ,\qquad G = \mathrm{Res}_{L/\mathbb{Q}}\big(\mathrm{GL}_{2,L}\big) ,\qquad G^* = G \times_D \mathbb{G}_{m,\mathbb{Q}}. \] Given $K \le G(\widehat{\mathbb{Z}})$ a compact open subgroup we define \begin{equation} K^*= K\cap G^*(\mathbb{A}_f),\qquad K'=K \cap \mathrm{GL}_2(\mathbb{A}_f) \end{equation}
with associated Shimura varieties \begin{equation}
\begin{split} &S(K)(\mathbb{C}) := G(\mathbb{Q})_+ \backslash \mathfrak{H}^2 \times G(\mathbb{A}_f)/K,\\ & S^*(K^*)(\mathbb{C}) := G^*(\mathbb{Q})_+ \backslash \mathfrak{H}^2 \times G^*(\mathbb{A}_f)/K^*,\\ & Y(K')(\mathbb{C}) := \mathrm{GL}_2(\mathbb{Q})_+ \backslash \mathfrak{H} \times \mathrm{GL}_2(\mathbb{A}_f)/K'. \end{split} \end{equation} If $K$ is sufficiently small, then $S(K)(\mathbb{C})$, $S^*(K^*)(\mathbb{C})$ and $Y(K')(\mathbb{C})$ have smooth canonical models $S(K)$, $S^*(K^*)$ and $Y(K')$ defined over $\mathbb{Q}$.
\subsubsection{Special level subgroups.} Let $p$ be a rational prime split in $L$, $p\cal{O}_L = \mathfrak{p}_1\mathfrak{p}_2$ and fix isomorphisms $\cal{O}_{L,\mathfrak{p}_1} \simeq \mathbb{Z}_p$, $\cal{O}_{L,\mathfrak{p}_2} \simeq \mathbb{Z}_p$ to identify elements of $\cal{O}_{L,p} $ with pairs $(a_{\frak{p}_1},a_{\frak{p}_2})\in\mathbb{Z}_p\times\mathbb{Z}_p$.
\begin{definition}\label{LevelSubgroups1} For any $\alpha\ge1$ and any compact open $K \subseteq G(\mathbb{A}_f)$ hyperspecial at $p$ we set \[ K_\diamond(p^\alpha):=\left\{\begin{pmatrix}a&b\\c&d \end{pmatrix}\in K_0(p^\alpha)\Big\lvert\ a_{\mathfrak{p}_1}\equiv a_{\mathfrak{p}_2},\ d_{\mathfrak{p}_1}\equiv d_{\mathfrak{p}_2}\pmod{p^{\alpha}}\right\},\quad K_{\diamond,1}(p^\alpha):=K_\diamond(p^\alpha)\cap V_1(p^\alpha) \] \[
K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha):=\left\{\begin{pmatrix}a&b\\c&d \end{pmatrix}\in K_0(p^\alpha)\Big\lvert\ a_{\mathfrak{p}_1}\equiv d_{\mathfrak{p}_2},\ d_{\mathfrak{p}_1}\equiv a_{\mathfrak{p}_2}\equiv1\pmod{p^{\alpha}}\right\}, \] and \[
K_{\diamond,t}(p^\alpha):=\left\{\begin{pmatrix}a&b\\c&d \end{pmatrix}\in K_0(p^\alpha)\Big\lvert\ a_{\mathfrak{p}_1}d_{\mathfrak{p}_1}\equiv a_{\mathfrak{p}_2}d_{\mathfrak{p}_2},\ d_{\mathfrak{p}_1}d_{\mathfrak{p}_2}\equiv 1 \pmod{p^{\alpha}}\right\}. \] \end{definition}
\begin{definition}\label{LevelSubgroups2} For any $\alpha\ge1$ and any compact open $K \subseteq G(\mathbb{A}_f)$ hyperspecial at $p$ we set
\[
K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha)
:=
\left\{\begin{pmatrix}a&b\\c&d \end{pmatrix}\in K_0(p^\alpha)\Big\lvert\ a_{\mathfrak{p}_1}d_{\mathfrak{p}_1}\equiv a_{\mathfrak{p}_2}d_{\mathfrak{p}_2},\ d_{\mathfrak{p}_1}a_{\mathfrak{p}_2}\equiv 1 \pmod{p^{\alpha}}\right\}, \] it is the subgroup of $K_0(p^\alpha)$ generated by $K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha)$ and matrices $\gamma$ of the form \[\gamma_v=\mathbbm{1}_2\quad\text{for}\ v\not=p,\qquad\gamma_p=\begin{pmatrix}d^{-1}&\\&d \end{pmatrix}\quad\text{with}\quad d_{\mathfrak{p}_1}\equiv d_{\mathfrak{p}_2} \pmod{p^{\alpha}}. \] \end{definition}
\subsubsection{Geometrically connected components.} The determinant $\det: G\to D$ induces a bijection between the set of geometric connected components of $S(K)$ and $\text{cl}^+_L(K)=L^\times_+\backslash \mathbb{A}_{L,f}^{\times}/\det(K)$ the strict class group of $K$. The natural surjection $\text{cl}^+_L(K)\twoheadrightarrow \text{cl}^+_L$ to the strict ideal class group of $L$ can be used to label the geometrically connected components of $S(K)$ as follows. Fix fractional ideals $\mathfrak{c}_1,\dots,\mathfrak{c}_{h^+_L}$ forming a set of representatives of $\text{cl}^+_L$. For every such ideal $\frak{c}$ choose $[\frak{c}]_K\subseteq G(\mathbb{A}_f)$ a set of diagonal matrices with lower right entry equal to $1$ and whose determinants represent the preimage of the class $[\mathfrak{c}]$ in $\text{cl}^+_L(K)$. By strong approximation there is a decomposition \[ S(K)(\mathbb{C})=G(\mathbb{Q})_+\backslash \mathfrak{H}^{2}\times G(\mathbb{A}_f)/K=\underset{[\mathfrak{c}]\in \text{cl}^+_L(K)}{\coprod} S^\mathfrak{c}(K)(\mathbb{C}), \] \begin{equation}\label{complex uniformization} \text{where}\qquad S^\mathfrak{c}(K)(\mathbb{C})=\underset{g\in[\mathfrak{c}]_K}{\coprod}\Gamma(g,K)\backslash\mathfrak{H}^{2}\qquad\text{for}\qquad \Gamma(g,K)=gKg^{-1}\cap G(\mathbb{Q})_+. \end{equation}
\begin{proposition}\label{prop comparison different models} If $\cal{O}^\times_L$ does not contain a totally positive unit congruent to $-1$ modulo $p$, then the complex uniformizations of $S(K_\diamond(p^\alpha))$ and $S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))$ can be canonically identified: \[ S(K_\diamond(p^\alpha))(\mathbb{C})=S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))(\mathbb{C})\qquad \forall \alpha\ge1. \] \end{proposition} \begin{proof}
First we note that $\mathrm{det}\big(K_\diamond(p^\alpha)\big)$ and $\mathrm{det}\big(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha)\big)$ both equal $(\mathbb{Z}_p^\times+p^\alpha\cal{O}_{L,p})\mathrm{det}\big(K^p\big)$, thus we can choose the same set of matrices $[\frak{c}]_{K_\diamond(p^\alpha)}=[\frak{c}]_{K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha)}$
for any fractional ideal $\frak{c}$. To conclude we claim that for every matrix $g$ in those sets we have
\[
\Gamma(g,K_\diamond(p^\alpha))=\Gamma(g,K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha)).
\]
We only write the argument showing $\Gamma(g,K_\diamond(p^\alpha))\subseteq\Gamma(g,K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))$ because the other inclusion is analogous. Let $\gamma$ be a matrix in $\Gamma(g,K_\diamond(p^\alpha))$, then its entries $a_\gamma, b_\gamma, c_\gamma, d_\gamma\in L$ satisfy $(a_\gamma)_p, (d_\gamma)_p\in\mathbb{Z}_p^\times+p^\alpha\cal{O}_{L,p}$, $(c_\gamma)_p\in p^\alpha\cal{O}_{L,p}$ and its determinant $\det(\gamma)\in\cal{O}^\times_{L,+}$ is a totally positive unit. In order to prove the claimed inclusion we need to show that
\[(a_\gamma d_\gamma)_{\mathfrak{p}_1}\equiv_{p^\alpha}(a_\gamma d_\gamma)_{\mathfrak{p}_2}\qquad\text{and}\qquad (d_\gamma)_{\mathfrak{p}_1}\equiv_{p^\alpha}(a_\gamma)_{\mathfrak{p}_2}.
\] Since $N_{L/\mathbb{Q}}\big(\det(\gamma)\big)=1$, one sees that either $a_\gamma d_\gamma- 1\in p^\alpha\cal{O}_{L}$ or $a_\gamma d_\gamma+ 1\in p^\alpha\cal{O}_{L}$. The second option implies that $\det(\gamma)\equiv_p-1$ that is ruled out by our assumption. Hence we must have $a_\gamma d_\gamma- 1\in p^\alpha\cal{O}_{L}$. \end{proof}
\subsection{Hecke correspondences}
We recall the conventions for Hecke correspondences that are used in the rest of this work. For $K\le G(\widehat{\mathbb{Z}})$ an open compact subgroup and an element $g \in G(\mathbb{A}_f)$ there is a map \begin{equation} \mathfrak{T}_g: S(K) \longrightarrow S(gKg^{-1}), \qquad [x,h]\mapsto[x, hg^{-1}], \end{equation} descending to a morphism of $\mathbb{Q}$-varieties. The double coset $[KgK]$ defines the following correspondence \begin{equation}\label{HeckeCorrespondence} \xymatrix{
S( g^{-1} K g\cap K)
\ar[r]^-{\mathfrak{T}_{g}} \ar[d]^{\mathrm{pr}}
&
S( K \cap g K g^{-1} )
\ar[d]^-{\mathrm{pr}'}
\\
S(K)
&
S(K)
} \end{equation} acting on the cohomology of $S(K)$ via $(\mathrm{pr}')_{*}\circ(\mathfrak{T}_{g})_*\circ (\mathrm{pr})^{*}$. Suppose $V(\mathfrak{N})\le K$ for some $\cal{O}_L$-ideal $\mathfrak{N}$, then for $a,b \in \cal{O}_{L,\mathfrak{N}}^\times$ and $z \in \mathbb{A}_{L,f}^\times$, we can consider correspondences associated to the double cosets \begin{equation} T(a,b) = \left[K\begin{pmatrix} a&0\\0&b \end{pmatrix}K\right],\qquad \langle z \rangle= \left[K\begin{pmatrix} z&0\\0&z \end{pmatrix}K\right]. \end{equation} Since $z\cdot\mathbbm{1}_2$ belongs to the center, the action of $\langle z\rangle$ is that of
$(\mathfrak{T}_{z})_* = (\mathfrak{T}_{z^{-1}})^*$. Moreover, if the matrix $D_{a,b}:=\mbox{\tiny $\begin{pmatrix} a&0\\0&b \end{pmatrix}$}$ normalizes the compact open subgroup $K$,
then $T(a,b)$ acts as
$(\mathfrak{T}_{D_{a,b}})_* = (\mathfrak{T}_{D_{a,b}^{-1}})^*$.
\begin{definition}\label{diamonds on cohomology}
Given an element $(z,a)\in\mathbb{G}_L(K)$ we set \begin{equation} \langle z, a \rangle:= \big(\mathfrak{T}_{z}\big)\circ \big(\mathfrak{T}_{D_{a^{-1},1}}\big). \end{equation} \end{definition}
\subsubsection{The $U_p$-correspondence.}\label{def Up}
Recall that $\varpi_{\mathfrak{p}} \in \mathbb{A}_{L,f}^\times$ is the element whose $\mathfrak{p}$-component is $p$ and every other component is 1, and $\varpi_p = \varpi_{\mathfrak{p}_1}\varpi_{\mathfrak{p}_2}$. Consider the matrix $g_p = \mbox{\tiny $\begin{pmatrix}\varpi_p&0\\0&1\end{pmatrix}$}$ satisfying $g_p^{-1}K(p^\alpha)g_p \cap K(p^\alpha) = K(p^\alpha)\cap K_0(p^{\alpha+1})$. We denote by \[ \pi_1: S(K(p^\alpha)\cap K_0(p^{\alpha+1})) \longrightarrow S(K(p^\alpha)), \qquad \pi_2: S(K(p^\alpha)\cap K_0(p^{\alpha+1})) \longrightarrow S(K(p^\alpha)), \] the two projections $\pi_1=\mathrm{pr}$ and $\pi_2=\mathrm{pr}'\circ\mathfrak{T}_{g_p}$. Then the $U_p$-correspondence and its adjoint $U_p^*$ are given by \begin{equation} U_p = (\pi_{2})_*\circ(\pi_1)^*,\qquad U_p^* = (\pi_{1})_*\circ(\pi_2)^*. \end{equation} Analogously, there is a $U_\mathfrak{p}$-correspondence for each prime $\frak{p}$ above $p$. Let $ g_\mathfrak{p}
=
\mbox{\tiny $\begin{pmatrix}
\varpi_{\mathfrak{p}}&0\\
0&1
\end{pmatrix}$}$, then
$g_\mathfrak{p}^{-1}K(p^\alpha)g_\mathfrak{p}
=
K(p^\alpha) \cap K_0(\mathfrak{p}^{\alpha+1})$, and there are projections
\[
\pi_{1,\mathfrak{p}}, \pi_{2,\mathfrak{p}}: S(K(p^\alpha) \cap K_0(\mathfrak{p}^{\alpha+1}))\longrightarrow S(K(p^\alpha)).
\]
The $U_{\mathfrak{p}}$ operator is defined as $U_\mathfrak{p}=(\pi_{2,\mathfrak{p}})_*\circ(\pi_{1,\mathfrak{p}})^*$
with its adjoint given by $U^*_\mathfrak{p}=(\pi_{1,\mathfrak{p}})_*\circ(\pi_{2,\mathfrak{p}})^*$. The above discussion holds verbatim if we change $K(p^\alpha)$ to any other level subgroup defined in Defintions \ref{LevelSubgroups1} and \ref{LevelSubgroups2}.
\subsubsection{Atkin--Lehner map.} Recall the matrix $\tau_{\mathfrak{p}_2^\alpha} \in G(\mathbb{A}_{f})$ defined by \[ (\tau_{\mathfrak{p}_2^\alpha})_{\mathfrak{p}_2}=\begin{pmatrix}
0&-1\\
\varpi_{\mathfrak{p}_2}^\alpha&0 \end{pmatrix},\qquad (\tau_{\mathfrak{p}^\alpha})_v=\mathbbm{1}_2\quad \text{for}\; v\not=\mathfrak{p}_2 \] which normalizes $K(p^\alpha)$ and the induced the morphism \[ \mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}: S(K(p^\alpha)) \longrightarrow S(K(p^\alpha)) ,\qquad [x,h] \mapsto [x,h\tau_{\mathfrak{p}_2^\alpha}^{-1}]. \] It intertwines diamonds operators as
\begin{equation}\label{AL-diamonds}
\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}} \circ \big\langle z,a \big\rangle
=
\big\langle z\cdot a_{\mathfrak{p}_2}, a\cdot a_{\mathfrak{p}_2}^{-2}\big\rangle\circ \mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}
\end{equation} if $(z,a)\in \mathbb{G}_L(K)$. Furthermore, \begin{equation} \big(\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}\big)^2=\big\langle-\varpi_{\mathfrak{p}_2}^\alpha,1\big\rangle. \end{equation} It is also useful to know how $\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}$ interacts with the Hecke operators at $p$. For $\frak{p}$ equal to either $\frak{p}_1$ or $\frak{p}_2$ there are commutative diagrams
\[\xymatrix{
S(K(p^\alpha)\cap K_0(p^{\alpha+1}))
\ar[d]^{\nu_{1,\mathfrak{p}}} \ar[drr]^{\pi_1}
&&&
S(K(p^\alpha)\cap K_0(p^{\alpha+1}))
\ar[d]^{\nu_{2,\mathfrak{p}}} \ar[drr]^{\pi_2}
& \\
S(K(p^\alpha)\cap K_0(\mathfrak{p}^{\alpha+1}))
\ar[rr]_{\qquad \pi_{1,\mathfrak{p}}}
&&
S(K(p^\alpha)),
&
S(K(p^\alpha)\cap K_0(\mathfrak{p}^{\alpha+1}))
\ar[rr]_{\qquad \pi_{2,\mathfrak{p}}}
&&
S(K(p^\alpha)).
}\]
A direct calculation shows the following relations \begin{equation}\label{Commute1} \pi_{1,\mathfrak{p}_2}\circ \mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha+1}}} = \mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}\circ\pi_{2,\mathfrak{p}_2} ,\qquad \pi_{2,\mathfrak{p}_2}\circ \mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha+1}}} = \langle\varpi_\mathfrak{p}^{-1},1\rangle \circ \mathfrak{T}_{\tau_{\mathfrak{p}^\alpha}} \circ \pi_{1,\mathfrak{p}} \end{equation} and \begin{equation}\label{Commute2} \nu_{1,\mathfrak{p}_1} \circ \mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha+1}}} = \mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}} \circ \nu_{2,\mathfrak{p}_1} ,\qquad \nu_{2,\mathfrak{p}_1}\circ \mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha+1}}} = \langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle \circ \mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}} \circ \nu_{1,\mathfrak{p}_1}. \end{equation} It follows that \begin{equation}\label{U_pAdjoint} (\mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha}}})_*\circ U_{\mathfrak{p}_2}\circ (\mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha}}})^* = U^*_{\mathfrak{p}_2}\circ \langle\varpi_{\mathfrak{p}_2},1\rangle. \end{equation} Furthermore, it is clear that $U_{\mathfrak{p}_1}$ commutes with $(\mathfrak{T}_{\tau_{\mathfrak{p}_2^{\alpha}}})^*$.
\begin{remark}
The level subgroups of Definitions $\ref{LevelSubgroups1}, \ref{LevelSubgroups2}$ are related by the Atkin-Lehner map
\[
\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}: S(K_{\diamond,1}(p^\alpha))\longrightarrow S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha)),\qquad
\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}: S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha)) \longrightarrow
S(K_{\diamond,t}(p^\alpha)).
\]
\end{remark}
\subsection{On different models of Hilbert modular surfaces}
From now on we assume that there is no totally positive units in $\cal{O}^\times_{L}$ congruent to $-1$ modulo $p$ and that the open compact subgroup $K\le G(\widehat{\mathbb{Z}})$ satisfies $\mathrm{det}(K)=\widehat{\cal{O}_L}^\times$. \begin{definition}
Given $a\in(\mathbb{Z}/p^\alpha\mathbb{Z})^\times$ we denote by $\sigma_a\in\mathrm{Gal}(\mathbb{Q}(\zeta_{p^ \alpha})/\mathbb{Q})$ the element corresponding to $a$ by class field theory. Specifically, $\sigma_a(\zeta_{p^\alpha})=(\zeta_{p^\alpha})^{a^{-1}}$.
\end{definition}
\begin{lemma}\label{diffent models}
There are canonical isomorphisms of $\mathbb{Q}$-varieties
\[\xymatrix{
& S(K(p^\alpha))\ar[dl]_{\sim}\ar[dr]^{\sim}& &\\
S(K_{\diamond,1}(p^\alpha)) \times_{\mathbb{Q}}\mathbb{Q}(\zeta_{p^\alpha})&&
S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha)) \times_{\mathbb{Q}}\mathbb{Q}(\zeta_{p^\alpha}).
}\] \end{lemma} \begin{proof}
It is well-known that the canonical projection $S(K(p^\alpha))\rightarrow S(K_{?,1}(p^\alpha))$ induce an isomorphism of $\mathbb{Q}$-varieties
\[
S(K(p^\alpha)) \simeq
S(K_{?,1}(p^\alpha)) \times_{\pi_0(S(K_{?,1}(p^\alpha)))} \pi_0(S(K(p^\alpha)))\qquad \text{for}\ ?=\diamond,\mbox{\tiny $\mathrm{X}$}.
\]
Therefore in order to prove the lemma we have to show that there is an isomorphism of $\mathbb{Q}$-group schemes between $\pi_0(S(K(p^\alpha)))$ and $\pi_0(S(K_{?,1}(p^\alpha))) \times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})$. The component groups of $S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha)), S(K_{\diamond,1}(p^\alpha))$ and $S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha))$ are $0$-dimensional Shimura varieties defined over $\mathbb{Q}$ and the Galois group $\mathrm{Gal}(\mathbb{Q}(\zeta_{p^\alpha})/\mathbb{Q})$ acts on their points via the Artin map and the diagonal embedding of $\mathbb{A}_{\mathbb{Q},f}^\times$ inside $\mathbb{A}_{L,f}^\times$. Moreover, all points of $\pi_0(S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha)))$ and $\pi_0(S(K_{\diamond,1}(p^\alpha)))$ are defined over $\mathbb{Q}$ because
\[
\det(K_{\diamond,1}(p^\alpha))
=\det(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha))
= (\mathbb{Z}^\times_p+p^\alpha\cal{O}_{L,p})\widehat{\cal{O}}_L^{p,\times}.
\] The projection $\pi_0(S(K(p^\alpha))) \rightarrow \pi_0(S(K_{?,1}(p^\alpha)))$ corresponds to the natural quotient map
\[
L^\times_+\backslash\mathbb{A}^\times_{L,f}/(1+p^\alpha\cal{O}_{L,p})\widehat{\cal{O}}_L^{p,\times}
\longrightarrow
L^\times_+\backslash\mathbb{A}^\times_{L,f}/(\mathbb{Z}^\times_p+p^\alpha\cal{O}_{L,p})\widehat{\cal{O}}_L^{p,\times}.
\]
Its kernel can be identified with
\[
\big(\mathbb{Z}^\times_p+p^\alpha\cal{O}_{L,p}\big)\bigg/\Big(\big[\cal{O}_{L,+}^\times\cap\big(\mathbb{Z}^\times_p+p^\alpha\cal{O}_{L,p}\big)\big]\cdot\big(1+p^\alpha\cal{O}_{L,p}\big)\Big)\cong (\mathbb{Z}/p^\alpha\mathbb{Z})^\times
\]
because there is no totally positive unit congruent to $-1$ modulo $p$. Hence the Galois action of $\mathrm{Gal}(\mathbb{Q}(\zeta_{p^\alpha})/\mathbb{Q})$ on the fibers of the projection $\pi_0(S(K(p^\alpha))) \rightarrow \pi_0(S(K_{?,1}(p^\alpha)))$ can be canonically identified with the simply transitive action of $(\mathbb{Z}/p^\alpha\mathbb{Z})^\times$ on itself and we conclude that
\[
\pi_0(S(K(p^\alpha)))\cong\pi_0(S(K_{?,1}(p^\alpha))) \times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})\qquad \text{for}\ ?=\diamond,\mbox{\tiny $\mathrm{X}$}.
\] \end{proof}
\begin{proposition} \label{nu_alpha} There is an isomorphism of $\mathbb{Q}$-varieties \[ \nu_\alpha: S(K_{\diamond}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \overset{\sim}{\longrightarrow} S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \] such that $ \nu_\alpha\circ(1\times\sigma_a) = (\langle 1,(a,a)\rangle\times\sigma_a)\circ\nu_\alpha. $ \end{proposition} \begin{proof} Let $a\in\mathbb{Z}_p^\times$ and consider the action of $\mbox{\tiny $\begin{pmatrix}(a^{-1},a^{-1})&0\\0&1\end{pmatrix}$}\in \mathrm{GL}_2(\cal{O}_{L,p})$ on $S(K(p^\alpha))$. On the one hand, it acts on $S(K_{\diamond,1}(p^\alpha)) \times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})$ as $1\times \sigma_a$ via the isomorphism of Lemma \ref{diffent models}. On the other hand, it acts as $\langle 1,(a,a)\rangle \times \sigma_a$ on $S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha)) \times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})$. In other words, the isomorphism \[ \tilde{\nu}_\alpha:S(K_{\diamond,1}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \overset{\sim}{\longrightarrow} S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \] obtained from Lemma \ref{diffent models} satisfies $ \tilde{\nu}_\alpha\circ(1\times\sigma_a) = (\langle 1,(a,a)\rangle\times\sigma_a)\circ\tilde{\nu}_\alpha. $
\noindent The quotient of $S(K_{\diamond,1}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})$ by the subgroup of matrices $\gamma$ of the form \[\gamma_v=\mathbbm{1}_2\quad\text{for}\ v\not=p,\qquad\gamma_p=\begin{pmatrix}d^{-1}&\\&d \end{pmatrix}\quad\text{with}\quad d_{\mathfrak{p}_1}\equiv d_{\mathfrak{p}_2} \pmod{p^{\alpha}} \] is $ S(K_{\diamond}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) $ because those matrices have determinant $1$. Similarly, the quotient of $S(K_{\mbox{\tiny $\mathrm{X}$},1}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})$ by the same group is $ S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) $ on the target. We denote by $\nu_\alpha$ the resulting isomorphism \[ \nu_\alpha: S(K_{\diamond}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \overset{\sim}{\longrightarrow} S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \] which also satisfies $ \nu_\alpha\circ(1\times\sigma_a) = (\langle 1,(a,a)\rangle\times\sigma_a)\circ\nu_\alpha. $ \end{proof}
\begin{corollary}\label{proj-nu_alpha-commute}
The isomorphism \[ \nu_\alpha: \Big(S(K_\diamond(p^\alpha))\times_\mathbb{Q}\mathbb{Q}(\zeta_{p^\alpha})\Big)(\mathbb{C}) \overset{\sim}{\longrightarrow} \Big(S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))\times_\mathbb{Q}\mathbb{Q}(\zeta_{p^\alpha})\Big)(\mathbb{C}) \] is the identity with respect to the complex uniformizations ($\ref{complex uniformization}$). In particular $\nu_\alpha$ commutes with the projections $\nu_{1,\mathfrak{p}}$ and $\pi_{1,\mathfrak{p}}$ for $\mathfrak{p}\mid p$. \end{corollary}
\subsubsection{Compatibility with Hecke correspondences.} Even though the complex uniformization of $\nu_\alpha$ is the identity, the map induced in cohomology does not commute in general with the Hecke operators. Indeed, even when $g\in G(\mathbb{A}_f)$ normalizes the congruence subgroups $K_\diamond(p^\alpha)$ and $K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha)$, the morphisms $\frak{T}_g:S(K_\diamond(p^\alpha))\to S(K_\diamond(p^\alpha))$ and $\frak{T}_g:S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))\to S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))$ may differ. However, there is an important case when the commutativity holds.
\begin{lemma}\label{U_p-nu_alpha-commute}
The identity
\[
U_p\circ(\nu_\alpha)^*
=
(\nu_\alpha)^*\circ U_p
\]
holds in cohomology. \end{lemma} \begin{proof}
The argument in Proposition $\ref{nu_alpha}$ also yields an isomorphism
\begin{equation}\label{TwistVariety3}
\nu_\alpha:
S\big(K_{\diamond}(p^\alpha)\cap K_0(p^{\alpha+1})\big)\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha})
\overset{\sim}{\longrightarrow}
S\big(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))\cap K_0(p^{\alpha+1})\big)\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}),
\end{equation}
for any $\alpha\ge1$ factoring through a quotient of $S(K(p^\alpha)\cap K_0(p^{\alpha+1}))$. Since the determinant of the matrix $g_p = \mbox{\tiny $\begin{pmatrix}\varpi_p&0\\0&1\end{pmatrix}$}$ lies in $L^\times_+\cdot(1+p^\alpha\cal{O}_{L,p})\widehat{\cal{O}}_L^{p,\times}$, both projections
\[
\pi_{1}, \pi_{2}:
S\big(K(p^\alpha)\cap K_0(p^{\alpha+1})\big)
\longrightarrow
S(K(p^\alpha))
\]
induce the identity map between the component groups. Therefore the projections descend to $\pi_{1}\times 1$, $\pi_{2}\times 1$ on both domain and target of the isomorphism ($\ref{TwistVariety3}$).
In other words, both $\pi_{1}\times 1$ and $\pi_{2}\times1$ commute with $\nu_\alpha$ which implies the claim. \end{proof}
\subsection{Atkin--Lehner correspondence} \begin{definition} Let $\frak{p}$ be a $\cal{O}_L$-prime ideal above $p$ and $\alpha\ge 1$ be a positive integer. The Atkin-Lehner correspondence $w_{\mathfrak{p}^\alpha_2} := (\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}\times1) \circ \nu_{\alpha}$
is an isomorphism of $\mathbb{Q}$-varieties \[ w_{\mathfrak{p}^\alpha} : S(K_{\diamond}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}) \overset{\sim}{\longrightarrow} S(K_{\diamond,t}(p^\alpha))\times_\mathbb{Q} \mathbb{Q}(\zeta_{p^\alpha}). \] \end{definition} \noindent By Lemma $\ref{U_p-nu_alpha-commute}$ and equation (\ref{U_pAdjoint}), the cohomological map $(w_{\frak{p}_2^\alpha})^*$ is intertwined with the Hecke correspondences at $p$ through the rule
\begin{equation}\label{U_pAtkinLehner}
U_p\circ (w_{\mathfrak{p}_2^\alpha})^*
=
(w_{\mathfrak{p}_2^\alpha})^*\circ U_{\mathfrak{p}_1}\circ U_{\mathfrak{p}_2}^*\circ \langle \varpi_{\mathfrak{p}_2},1 \rangle.
\end{equation}
\subsubsection{Galois action.} Consider the Galois character \begin{equation}\label{GalCal}
\delta _\alpha:\Gamma_\mathbb{Q}\longrightarrow O[\mathbb{G}_L^\alpha(K)]^\times \end{equation} obtained by composing the projection $\Gamma_\mathbb{Q}\rightarrow\mathrm{Gal}(\mathbb{Q}(\zeta_{p^ \alpha})/\mathbb{Q})$ with the homomorphism \begin{equation} \sigma_a\mapsto\big\langle (1,a^{-1}),(a,a^{-1})\big\rangle. \end{equation}
\begin{lemma}\label{Cohomology} There is a $\Gamma_\mathbb{Q}$-equivariant isomorphism \[ (w_{\mathfrak{p}^\alpha_2})_*: \mathrm{H}^{\bfcdot}_{\acute{\mathrm{e}}\mathrm{t}}\big(S(K_{\diamond}(p^\alpha))_{\bar{\mathbb{Q}}},O\big) \overset{\sim}{\longrightarrow} \mathrm{H}^{\bfcdot}_{\acute{\mathrm{e}}\mathrm{t}}\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O\big)(\delta _\alpha). \] \end{lemma} \begin{proof} Since $\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}$ is defined over $\mathbb{Q}$ we can use equation ($\ref{AL-diamonds}$) to compute \begin{equation}\label{ALGalois} \begin{split} w_{\mathfrak{p}^\alpha_2}\circ(1\times\sigma_a) &= \mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}} \circ (\langle 1,(a,a)\rangle\times\sigma_a)\circ\nu_\alpha\\ &= (\langle(1,a^{-1}),(a,a^{-1})\rangle\times\sigma_a) \circ w_{\mathfrak{p}^\alpha_2}. \end{split}\end{equation} \end{proof}
\section{Hirzebruch--Zagier classes}
We keep the notations and assumptions made at the beginning of the previous chapter. Let $K=V_1(M\cal{O}_L)$, $K' := \mathrm{GL}_2(\mathbb{A}_\mathbb{Q}) \cap K$ and note that $K'_0(p^\alpha)=\mathrm{GL}_2(\mathbb{A}_\mathbb{Q}) \cap K_\diamond(p^\alpha)$. Thus there is a cartesian diagram of Shimura varieties over $\mathbb{Q}$ \begin{equation}\label{cartesiano}
\xymatrix{ \ar @{} [drr] |{\mbox{\large $\diamond$}}
Y(K'_0(p^{{\alpha+1}}))
\ar[d]^{\pi_1}\ar@{^{(}->}[rr]^{\zeta(\mathfrak{p}_1)}
&&
S(K_\diamond(p^\alpha)\cap U_0(\mathfrak{p}_1^{\alpha+1}))
\ar[d]^{\pi_{1,\mathfrak{p}_1}}
\\
Y(K'_0(p^{\alpha}))
\ar@{^{(}->}[rr]^{\zeta}
&&
S(K_\diamond(p^\alpha)) }\end{equation} since the horizontal arrows are closed embeddings and the vertical ones are finite of degree $p$.
\begin{definition} For $\alpha\ge1$ we set \[ \Delta^\flat_\alpha := \zeta_*\left[Y(K'_0(p^{\alpha}))\right]\in \mathrm{CH}^1\big(S(K_\diamond(p^\alpha)\big)(\mathbb{Q}) \] and \[ \Delta_\alpha^\flat(\mathfrak{p}_1) := \zeta(\mathfrak{p}_1)_*\left[Y(K'_0(p^{\alpha+1}))\right] \] as a codimension one cycle class in $\mathrm{CH}^1\big( S(K_\diamond(p^\alpha)\cap U_0(\mathfrak{p}_1^{\alpha+1}))\big)(\mathbb{Q})$. \end{definition}
\begin{lemma}\label{firstformula} We have \begin{equation} (\pi_{1,\mathfrak{p}_1})^*\Delta_{\alpha}^{\flat}=\Delta^\flat_{\alpha}(\mathfrak{p}_1) \end{equation}
in $\mathrm{CH}^1\big(S(K_\diamond(p^\alpha)\cap U_0(\mathfrak{p}_1^{\alpha+1}))\big)(\mathbb{Q})$. \end{lemma} \begin{proof} Since the diagram ($\ref{cartesiano}$) is cartesian, the push-pull formula \[ (\pi_{1,\mathfrak{p}_1})^*\circ\zeta_* = \zeta(\mathfrak{p}_1)_*\circ (\pi_1)^* \] implies the claim. \end{proof}
\subsubsection{Twisting by Atkin--Lehner.} \begin{definition} We consider the class of the codimension one cycle \[ \Delta_{\alpha}^{\mbox{\tiny $\sharp$}} = (w_{\mathfrak{p}_{2}^{\alpha}})_*\Delta_{\alpha}^{\flat}\in \mathrm{CH}^{1}\big(S(K_{\diamond,t}(p^{\alpha}))\big)(\mathbb{Q}(\zeta_{p^\alpha})) \] defined over $\mathbb{Q}(\zeta_{p^\alpha})$. \end{definition} \noindent There are two natural degeneracy maps $\varpi_1, \varpi_2: S(K_{\diamond,t}(p^{\alpha+1}))\to S(K_{\diamond,t}(p^{\alpha}))$ which are described by the following commutative diagrams
\[\resizebox{\displaywidth}{!}{
\xymatrix{
S(K_{\diamond,t}(p^{\alpha+1}))
\ar[d]^{\mu} \ar[drr]^{\varpi_1}
&&&
S(K_{\diamond,t}(p^{\alpha+1}))
\ar[d]^{\mu} \ar[drr]^{\varpi_2}
& \\
S(K_{\diamond,t}(p^\alpha)\cap K_0(p^{\alpha+1}))
\ar[rr]_{\qquad \pi_{1}}
&&
S(K_{\diamond,t}(p^\alpha)),
&
S(K_{\diamond,t}(p^\alpha)\cap K_0(p^{\alpha+1}))
\ar[rr]_{\qquad \pi_{2}}
&&
S(K_{\diamond,t}(p^\alpha)).
}}\]
\begin{proposition}\label{proposition:sharp-relation} We have \[
(\varpi_{2})_*\Delta_{\alpha+1}^{\mbox{\tiny $\sharp$}}
=
\langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle_*\circ U_{\mathfrak{p}_1}\Delta_\alpha^{\mbox{\tiny $\sharp$}}. \] \end{proposition} \begin{proof} Combining equation (\ref{Commute2}) with Corollary \ref{proj-nu_alpha-commute}, we have the following diagram commutes \begin{equation}\label{diagramsharp} \xymatrix{
Y(K'_0(p^{\alpha+1}))
\ar@{=}[d] \ar@{^{(}->}[rr]^{\zeta}
&&
S(K_\diamond(p^{\alpha+1})) \ar[d]^{\nu_{1,\mathfrak{p}_1}\circ\mu} \ar[rr]^{w_{\mathfrak{p}_2^{\alpha+1}}}
&&
S(K_{\diamond,t}(p^{\alpha+1}))
\ar[d]^{\nu_{2,\mathfrak{p}_1}\circ\mu}
\\
Y(K'_0(p^{\alpha+1}))
\ar@{^{(}->}[rr]^{\zeta(\mathfrak{p}_1)}
&&
S(K_\diamond(p^\alpha)\cap K_0(\mathfrak{p}_1^{\alpha+1}))
\ar[rr]^{\langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle \circ w_{\mathfrak{p}_2^{\alpha}}}
&&
S(K_{\diamond,t}(p^\alpha)\cap K_0(\mathfrak{p}_1^{\alpha+1}))
\ar[d]^{\pi_{2,\mathfrak{p}_1}}
\\
&&
&&
S(K_{\diamond,t}(p^\alpha)).
} \end{equation} By definition, $(\varpi_{2})_*\Delta_{\alpha+1}^{\mbox{\tiny $\sharp$}}$ is the pushforward of the cycle class $\left[Y(K'_0(p^{\alpha+1}))\right]$ along the top arrows and the rightmost vertical arrows. Therefore \[\begin{split}
(\varpi_{2})_*\Delta_{\alpha+1}^{\mbox{\tiny $\sharp$}}
&=
(\pi_{2,\mathfrak{p}_1})_*\circ \langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle_*\circ (w_{\mathfrak{p}_2^\alpha})_*\circ(\zeta(\mathfrak{p}_1))_*\left[Y(K'_0(p^{\alpha+1}))\right]\\
&=
\langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle_*\circ(\pi_{2,\mathfrak{p}_1})_*\circ(w_{\mathfrak{p}_2^\alpha})_*\Delta_{\alpha}^\flat(\mathfrak{p}_1)\\
&=
\langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle_*\circ(\pi_{2,\mathfrak{p}_1})_*\circ(w_{\mathfrak{p}_2^\alpha})_*\circ(\pi_{1,\mathfrak{p}_1})^*\Delta_{\alpha}^{\flat}\\
&=
\langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle_*\circ(\pi_{2,\mathfrak{p}_1})_*(\pi_{1,\mathfrak{p}_1})^*\circ(w_{\mathfrak{p}_2^\alpha})_*\Delta_{\alpha}^{\flat}\\
&=
\langle\varpi_{\mathfrak{p}_2}^{-1},1\rangle_*\circ U_{\mathfrak{p}_1}\Delta_\alpha^{\mbox{\tiny $\sharp$}}, \end{split}\] where the third equality is due to Lemma \ref{firstformula} and the second to last follows from the fact that $w_{\mathfrak{p}_2^\alpha}$ is an isomorphism commuting with $\pi_{1,\mathfrak{p}_1}$ (Corollary \ref{proj-nu_alpha-commute}). \end{proof}
\subsection{Hirzebruch--Zagier cycles} \begin{definition}
Consider the Shimura threefold \begin{equation}
Z_\alpha(K) = S(K_{\diamond,t}(p^\alpha))\times X_0(p) \end{equation} where $X_0(p)$ denotes the compactified modular curve $X(V_1(N)\cap V_0(p))$. Then the Hirzebruch--Zagier cycles of level $\alpha\ge1$ is defined by \begin{equation} \Delta_{\alpha} = (\langle\varpi_{\mathfrak{p}_2}^{\alpha},1\rangle \circ w_{\mathfrak{p}_2^\alpha}\circ\zeta_\alpha,\ \pi_{1,\alpha})_*[Y(K'_0(p^\alpha))]\in\mathrm{CH}^2\big(Z_\alpha(K)\big)(\mathbb{Q}(\zeta_{p^\alpha})). \end{equation} where $\pi_{1,\alpha}: Y(K'_0(p^\alpha))\longrightarrow X_0(p)$ is the natural projection. \end{definition}
\noindent Proposition \ref{proposition:sharp-relation} implies a precise relation between Hirzebruch--Zagier cycles of different levels.
\begin{proposition}\label{proposition:flat-relation} The following identity holds in $\mathrm{CH}^{2}(Z_{\alpha}(K))(\mathbb{Q}(\zeta_{p^\alpha}))$: \[ (\varpi_{2}, \mathrm{id})_{*}\Delta_{\alpha+1} = (U_{\mathfrak{p}_1}, \mathrm{id})\Delta_{\alpha}. \] \end{proposition}
\subsubsection{Galois twisting.} By equation (\ref{ALGalois}), the Galois group $\Gamma_\mathbb{Q}$ acts on $\Delta_\alpha$ through the finite quotient $\mathrm{Gal}(\mathbb{Q}(\zeta_{p^\alpha})/\mathbb{Q})$ as \[ (\sigma_a)_* \Delta_\alpha = (\langle(1,a),(a^{-1},a)\rangle,\mathrm{id})_*\Delta_\alpha. \]
\begin{definition} Let $S^\dagger(K_{\diamond,t}(p^\alpha))$ be the twist of the $\mathbb{Q}$-variety $S(K_{\diamond,t}(p^\alpha))$ by the $1$-cocycle \[ \mathrm{Gal}(\mathbb{Q}(\zeta_{p^\alpha})/\mathbb{Q})\ni\sigma_a \mapsto
\langle(1,a^{-1}),(a,a^{-1})\rangle. \] We define $ Z_\alpha^\dagger(K) = S^\dagger(K_{\diamond,t}(p^\alpha))\times X(K'_0(p)). $ By construction \begin{equation} \mathrm{H}^{\bfcdot}_{\acute{\mathrm{e}}\mathrm{t}}(S^\dagger(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O) \simeq \mathrm{H}^{\bfcdot}_{\acute{\mathrm{e}}\mathrm{t}}(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O)(\delta _\alpha). \end{equation}
and $\Delta_\alpha$ corresponds to a codimension $2$ cycle of $Z_\alpha^\dagger(K)$ defined over $\mathbb{Q}$. \end{definition}
\subsubsection{Null-homologous cycles.} Finally, we apply a suitable correspondence to make Hirzebruch--Zagier cycles null-homologous. We define a correspondence $\varepsilon_{\mathsf{f}_\circ}$ on $X_0(p)$ following \cite{DR2} (after equation (47)). Since $p$ is assumed to be non-Eisenstein for $\mathsf{f}_\circ$, i.e. $\mathsf{f}_\circ$ is not congruent to an Eisenstein series modulo $p$, there exists an auxiliary prime $\ell\nmid Np$ for which $\ell+1-\mathsf{a}_p(\ell, \mathsf{f}_\circ)$ lies in $O^\times$. Then the correspondence \[ \varepsilon_{\mathsf{f}_\circ} = (\ell+1-T(\ell))/(\ell+1-\mathsf{a}_p(\ell, \mathsf{f}_\circ)) \] has coefficients in $O$, it annihilates $\mathrm{H}^0(X_0(p))$ and $\mathrm{H}^2(X_0(p))$ and acts as the identity on the $\mathsf{f}_\circ$-isotypic subspace $\mathrm{H}^1(X_0(p))[\mathsf{f}_\circ]$.
\begin{definition} The modified Hirzebruch--Zagier cycle is given by \[ \Delta_\alpha^\circ := (\mathrm{id}, \varepsilon_{\mathsf{f}_\circ})_*\Delta_\alpha \in \mathrm{CH}^2\big(Z^\dagger_\alpha(K)\big)(\mathbb{Q})\otimes_{\mathbb{Z}} O. \] \end{definition}
\begin{proposition}\label{nullhomo} The cycle class $\Delta_\alpha^\circ$ is null-homologous. \end{proposition} \begin{proof} Consider the smooth compactification $\iota:Z_\alpha^\dagger(K)\hookrightarrow Z_\alpha^\dagger(K)^\mathrm{c}$ obtained by taking the minimal resolution of the Baily-Borel compactification of the Hilbert modular surface, and denote by $\Delta_\alpha^{\circ,\mathrm{c}}$ the closure of $\Delta_\alpha^{\circ}$ in $Z_\alpha^\dagger(K)^\mathrm{c}$. Thanks to the commuting diagram \[\xymatrix{ \mathrm{CH}^2\big(Z^\dagger_\alpha(K)^\mathrm{c}\big)(\mathbb{Q})\otimes_{\mathbb{Z}}O\ar[r]^{\mathrm{cl}_{\acute{\mathrm{e}}\mathrm{t}}}\ar[d]^{\iota^*} & \mathrm{H}_\mathrm{et}^4\big(Z_\alpha^\dagger(K)^\mathrm{c}_{\bar{\mathbb{Q}}},O(2)\big)\ar[d]^{\iota^*}\\ \mathrm{CH}^2\big(Z^\dagger_\alpha(K)\big)(\mathbb{Q})\otimes_{\mathbb{Z}}O\ar[r]^{\mathrm{cl}_{\acute{\mathrm{e}}\mathrm{t}}}& \mathrm{H}_\mathrm{et}^4\big(Z_\alpha^\dagger(K)_{\bar{\mathbb{Q}}},O(2)\big), }\] it suffices to show that $\mathrm{cl}_{\acute{\mathrm{e}}\mathrm{t}}\big(\Delta_\alpha^{\circ,\mathrm{c}}\big)=0$. As the integral cohomology of smooth projective curves is torsion free and the minimal resolution of the Baily-Borel compactification of a Hilbert modular surface is simply connected, the group $\mathrm{H}_\mathrm{et}^4(Z_\alpha^\dagger(K)^\mathrm{c}_{\overline{\mathbb{Q}}},O(2))$ has a K\"unneth decomposition whose every non-zero term is annihilated by the correspondence $(\mathrm{id}, \varepsilon_{\mathsf{f}_\circ})$. \end{proof}
\subsection{Big cohomology classes} For any number field $D$, the $p$-adic \'etale Abel--Jacobi map \[ \mathrm{AJ}_p^{\acute{\mathrm{e}}\mathrm{t}}: \mathrm{CH}^2(Z_\alpha^\dagger(K))_0(D) \longrightarrow \mathrm{H}^1\big(D,\mathrm{H}^3_\mathrm{et}\big(Z_\alpha^\dagger(K)_{\bar{\mathbb{Q}}},O(2)\big)\big) = \mathrm{H}^1\big(D,\mathrm{H}^3_\mathrm{et}\big(Z_\alpha(K)_{\bar{\mathbb{Q}}},O(2)\big)(\delta_\alpha)\big) \] sends null-homologous cycles to Galois cohomology classes. \begin{definition}
We denote by $\mathrm{H}^2_!\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O(1)\big)$ the largest torsion-free quotient of the interior cohomology of $S(K_{\diamond,t}(p^\alpha))$ and set
\[
\boldsymbol{\cal{V}}_\alpha(K)
:=
e_\mathrm{n.o.}\mathrm{H}^2_!\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O(1)\big)(\delta_\alpha)
\otimes
\mathrm{H}^1_{\acute{\mathrm{e}}\mathrm{t}}\big(X_0(p)_{\bar{\mathbb{Q}}},O(1)\big).
\] \end{definition} \noindent Then the modified Hirzebruch--Zagier cycles $\Delta^\circ_\alpha$ give rise to cohomology classes \begin{equation} \kappa^\circ_\alpha := \mathrm{AJ}_p^{\acute{\mathrm{e}}\mathrm{t}}(\Delta^\circ_\alpha) \in \mathrm{H}^1\big(\mathbb{Q},\boldsymbol{\cal{V}}_\alpha(K)\big). \end{equation}
\begin{lemma}\label{compatibility - classes}
For $\alpha\ge1$ we have
\[
(\varpi_{2}, \mathrm{id})_{*}\kappa^\circ_{\alpha+1}
=
(U_{\mathfrak{p}_1}, \mathrm{id})\kappa^\circ_{\alpha}.
\] \end{lemma} \begin{proof}
It follows from Proposition $\ref{proposition:flat-relation}$ and the commutativity of cycle class map and correspondences. \end{proof}
\noindent Since $U_{\mathfrak{p}_1}$ acts invertibly on the nearly ordinary part, we may define \begin{equation}\label{normalizationHZclasses} \kappa_{\alpha}^\mathrm{n.o.} := \big( U_{\mathfrak{p}_{1}}^{-\alpha}, \mathrm{id}\big)\kappa_{\alpha}^{\circ}\in \mathrm{H}^{1}\big(\mathbb{Q},\boldsymbol{\cal{V}}_\alpha(K)\big), \end{equation} then the equality \begin{equation}\label{compttt} (\varpi_{2},\mathrm{id})_*\kappa_{\alpha+1}^\mathrm{n.o.}=\kappa_{\alpha}^\mathrm{n.o.}, \end{equation}
follows directly from the commutativity of $U_{\mathfrak{p}_1}$ with $(\varpi_{{2}})_*$ and Lemma $\ref{compatibility - classes}$.
\begin{definition}\label{BigCohomology} We consider the inverse limit $\boldsymbol{\cal{V}}_\infty(K) := \varprojlim_\alpha \boldsymbol{\cal{V}}_\alpha(K)$ taken with respect to the trace maps $(\varpi_{2})_*$. Then equation ($\ref{compttt}$) allows us to define \[ \boldsymbol{\kappa}_\infty^\mathrm{n.o.}=\underset{\leftarrow, \alpha}{\lim}\ \kappa_{\alpha}^\mathrm{n.o.}\in \mathrm{H}^{1}\big(\mathbb{Q},\boldsymbol{\cal{V}}_\infty(K)\big). \] \end{definition}
\subsection{Specializations of the diagonal restriction of $\Lambda$-adic forms}
Let \[ K'_{\det}(p^\alpha) := K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha) \cap \mathrm{GL}_2(\mathbb{A}_{\mathbb{Q}}) = \left\{\gamma\in K'_0(p^\alpha)\Big\lvert\ \det(\gamma)\equiv 1 \pmod{p^{\alpha}}\right\}, \] then there is a natural embedding $\zeta:Y(K'_{\det}(p^\alpha))\hookrightarrow S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))$ of Shimura varieties fitting in the following commutative diagram \begin{equation}\label{complex points} \xymatrix{ Y(K'_{\det}(p^{\alpha}))(\mathbb{C})\ar@{^{(}->}[r]^\zeta & S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))(\mathbb{C})\ar[r]^{\mathfrak{T}_{\tau_{\mathfrak{p}_2^\alpha}}} & S(K_{\diamond,t}(p^{\alpha}))(\mathbb{C})\\
Y(K'_{0}(p^{\alpha}))(\mathbb{C})
\ar@{^{(}->}[r]^{\zeta} & S(K_\diamond(p^\alpha))(\mathbb{C}) \ar[u]_{\nu_\alpha}\ar[ru]_{w_{\mathfrak{p}_2^\alpha}} &.} \end{equation}
\begin{proposition}\label{analysis comp geometry}
Let $\mathrm{P}\in \cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ be an arithmetic point of weight $(2t_L,t_L)$ and character $(\chi_\circ\theta_L^{-1}\chi^{-1},\mathbbm{1})$ with $\chi$ a of conductor $p^\alpha$. Then
\[
e_{\mathrm{ord}}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})
=
\mathsf{a}_p(\varpi_\mathfrak{p},\breve{\mathsf{g}}_\mathrm{P})^\alpha \cdot G(\theta_{L,\mathfrak{p}}^{-1}\chi_\mathfrak{p}^{-1})^{-1}\cdot
e_\mathrm{ord}\zeta^*\Big[d^{-1}_\mu(w_{\mathfrak{p}_2^\alpha})^*(\breve{\mathsf{g}}_\mathrm{P})^{\mbox{\tiny $[\cal{P}]$}}\Big].
\] \end{proposition} \begin{proof} Combining Proposition \ref{specializations} and Lemma \ref{Atkin-Lehner} we see that, \[ e_{\mathrm{ord}}\zeta^*\big(d_\frak{p}^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P})= \mathsf{a}_p(\varpi_\mathfrak{p},\breve{\mathsf{g}}_\mathrm{P})^\alpha \cdot G(\theta_{L,\mathfrak{p}}^{-1}\chi_\mathfrak{p}^{-1})^{-1} \cdot e_\mathrm{ord}\zeta^*\Big[d_\mu^{-1}(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})^{\mbox{\tiny $[\cal{P}]$}}\Big]\otimes\theta_\mathbb{Q}\chi_{\mbox{\tiny $\spadesuit$}}. \] The cuspform $\breve{\mathsf{g}}_\mathrm{P}$ can be interpreted as a differential form on $S(K_{\diamond,t}(p^\alpha))(\mathbb{C})$, $\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha}=(\mathfrak{T}_{\mathfrak{p}_2^\alpha})^*\breve{\mathsf{g}}$ as a differential on $S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))(\mathbb{C})$ and $(w_{\mathfrak{p}_2^\alpha})^*\breve{\mathsf{g}}_\mathrm{P}=\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})$ as a differential on $S(K_{\diamond}(p^\alpha))(\mathbb{C})$. Since the morphism $\nu_\alpha:S(K_{\diamond}(p^\alpha))(\mathbb{C})\to S(K_{\mbox{\tiny $\mathrm{X}$}}(p^\alpha))(\mathbb{C})$ is the identity with respect to the complex uniformizations (Corollary \ref{proj-nu_alpha-commute}), it preserves classical $q$-expansions. More precisely for all $\xi\in L_+$, every index $i$ we have \[ a\Big(\xi,\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})^{\mbox{\tiny $[\cal{P}]$}}\big)_i\Big) =a\Big(\xi,(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})^{\mbox{\tiny $[\cal{P}]$}}_i\Big). \] Therefore, interpreting $\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})$ and $\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha}$ as $p$-adic modular forms, we see that \[
a\Big(\xi,d^{-1}_\mu\big((w_{\mathfrak{p}_2^\alpha})^*(\breve{\mathsf{g}}_\mathrm{P})^{\mbox{\tiny $[\cal{P}]$}}\big)_i\Big)
=a\Big(\xi,d^{-1}_\mu(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})_i^{\mbox{\tiny $[\cal{P}]$}}\Big). \] The diagonal restriction $\zeta^*\Big[d^{-1}_\mu(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})^{\mbox{\tiny $[\cal{P}]$}}\Big]$ is a $p$-adic elliptic cuspform on $Y(K'_{\det}(p^\alpha))$. Its twist $\zeta^*\Big[d^{-1}_\mu(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})^{\mbox{\tiny $[\cal{P}]$}}\Big]\otimes\theta_\mathbb{Q}\chi_{\mbox{\tiny $\spadesuit$}}$ has character $(\psi_\circ^{-1},\mathbbm{1})$ and so descends to a $p$-adic cuspform on $Y(K'_0(p^\alpha))$ where it can be compared with $\zeta^*\Big[d^{-1}_\mu(w_{\mathfrak{p}_2^\alpha})^*(\breve{\mathsf{g}}_\mathrm{P})^{\mbox{\tiny $[\cal{P}]$}}\Big]$. By Lemma \ref{twist classical expansion}, twisting by $\theta_\mathbb{Q}\chi_{\mbox{\tiny $\spadesuit$}}$ does not change the classical $q$-expansion of elliptic cuspforms on the identity component, thus the equality \[ e_\mathrm{ord}\zeta^*\Big[d^{-1}_\mu(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\mathfrak{p}^\alpha})^{\mbox{\tiny $[\cal{P}]$}}\Big]\otimes\theta_\mathbb{Q}\chi_{\mbox{\tiny $\spadesuit$}}=e_\mathrm{ord}\zeta^*\Big[d^{-1}_\mu(w_{\mathfrak{p}_2^\alpha})^*(\breve{\mathsf{g}}_\mathrm{P})^{\mbox{\tiny $[\cal{P}]$}}\Big] \] follows from the $q$-expansion principle since $Y(K'_0(p^\alpha))$ is geometrically connected. \end{proof}
\section{Review of big Galois representations}
Let $\mathbf{Q}_\mathscr{G}=\mathrm{Frac}(\mathbf{I}_\mathscr{G})$, then the ordinary family $\mathscr{G}$ passing through a choice of ordinary $p$-stabilization $\mathsf{g}_\circ^{\mbox{\tiny $(p)$}}$ has an associated big Galois representation $\boldsymbol{\varrho}_\mathscr{G}:\Gamma_L\to\mathrm{GL}_2(\mathbf{Q}_\mathscr{G})$ acting on $\mathbf{V}_\mathscr{G}=(\mathbf{Q}_\mathscr{G})^{\oplus2}$ (\cite{HidaGalois}, Theorem 1). The representation $\boldsymbol{\varrho}_\mathscr{G}$ is unramified outside $\frak{Q}p$ with determinant \[ \det(\boldsymbol{\varrho}_\mathscr{G})(z)=\phi_{\boldsymbol{\chi}}([z,1])\cdot\varepsilon_L(z)\qquad \forall\ z\in\mathbb{A}_{L,f}, \] and characteristic polynomial at a prime $\frak{q}\nmid\frak{Q}p$ given by \[ \det(1-\boldsymbol{\varrho}_\mathscr{G}(\mathrm{Fr}_\frak{q})X)=1-\mathscr{G}(\mathbf{T}(\frak{q}))X+\phi_{\boldsymbol{\chi}}([\varpi_\frak{q},1])\mathrm{N}_{L/\mathbb{Q}}(\frak{q})X^2. \] Furthermore, fixing a decomposition group $D_\frak{p}$ in $\Gamma_L$ for an $\cal{O}_L$-prime $\frak{p}$ above $p$, there is an unramified character $\boldsymbol{\Psi}_{\mathscr{G},\frak{p}}:D_\frak{p}\to \mathbf{I}_\mathscr{G}^\times$, $\mathrm{Fr}_\frak{p}\mapsto\mathscr{G}(\mathbf{T}(\varpi_\frak{p}))$, such that (\cite{HidaGalois}, Proposition 2.3)
\begin{equation}\label{ordinary-G}(\boldsymbol{\varrho}_{\mathscr{G}})_{\lvert D_\frak{p}}\sim\begin{pmatrix}
\boldsymbol{\Psi}_{\mathscr{G},\frak{p}}^{-1}\cdot\det(\boldsymbol{\varrho}_\mathscr{G})_{\lvert D_\frak{p}} &*\\
0&\boldsymbol{\Psi}_{\mathscr{G},\frak{p}}
\end{pmatrix}.
\end{equation}
\begin{definition} Let \[ \mathrm{As}(\mathbf{V}_\mathscr{G}):=\otimes\mbox{-}\mathrm{Ind}_L^\mathbb{Q}\left(\mathbf{V}_\mathscr{G}\right) \] denote the tensor induction of $\mathbf{V}_\mathscr{G}$ to $\Gamma_\mathbb{Q}$. \end{definition}
\begin{definition}
We define $\boldsymbol{\eta}_\mathbb{Q}:\Gamma_\mathbb{Q}\rightarrow \boldsymbol{\Lambda}^\times$ to be the Galois character associated to the idele character
\[
\mathbb{A}^\times_\mathbb{Q}\ni z\mapsto \big[\eta_\mathbb{Q}(z)\big]=\big[\xi_z^{-1}\big]
\] Then the restriction $(\boldsymbol{\eta}_\mathbb{Q})_{\lvert D_p}$ at the decomposition group at $p$ is the Galois character associated by local class field theory to the homomorphism $\mathbb{Q}_p^\times\to\Gamma\to \boldsymbol{\Lambda}^\times$, $x\mapsto \big[\langle x\rangle^{-1}\big]$.
\end{definition} \noindent A direct computation shows that
\begin{equation} \mathrm{As}\big(\det(\boldsymbol{\varrho}_\mathscr{G})\big)=\psi_\circ^{-1}\cdot\boldsymbol{\eta}_\mathbb{Q}^2\cdot\eta_\mathbb{Q}^2.
\end{equation} Since $p\cal{O}_L=\mathfrak{p}_1\mathfrak{p}_2$ splits in the real quadratic field $L$, the decomposition group $D_p$ is contained in $\Gamma_L$. Hence, the characters $\boldsymbol{\Psi}_{\mathscr{G},\frak{p}}$ can be interpreted as characters of $D_p$ and it makes sense to define \begin{equation} \boldsymbol{\Psi}_{\mathscr{G},p}:=\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}. \end{equation} From equation (\ref{ordinary-G}) we can deduce a concrete description of the action of the decomposition group at $p$ on $\mathrm{As}(\mathbf{V}_\mathscr{G})$.
\begin{proposition} \label{AsaiFil} The restriction of $\mathrm{As}(\mathbf{V}_\mathscr{G})$ to $D_p$ admits a three step $D_p$-stable filtration \[ \mathrm{As}(\mathbf{V}_\mathscr{G}) \supset \mathrm{Fil}^1\mathrm{As}(\mathbf{V}_\mathscr{G}) \supset \mathrm{Fil}^2\mathrm{As}(\mathbf{V}_\mathscr{G}) \supset 0 \] with graded pieces \[\mathrm{Gr}^0\mathrm{As}(\mathbf{V}_\mathscr{G})=\mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}\Big),\qquad \mathrm{Gr}^2\mathrm{As}(\mathbf{V}_\mathscr{G})=\mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\big(\psi_\circ^{-1}\cdot\boldsymbol{\eta}_\mathbb{Q}^2\cdot\eta_\mathbb{Q}^2\big)_{\lvert D_p}\Big),\] \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathrm{Gr}^1\mathrm{As}(\mathbf{V}_\mathscr{G})=\mathbf{Q}_\mathscr{G}\Big(\big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\big)^{-1}\cdot\big(\psi^{-1}_{\circ,\frak{p}_1}\cdot\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\big)_{\lvert D_{p}}\Big) \oplus \mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\cdot\big(\psi^{-1}_{\circ,\frak{p}_2}\cdot\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\big)_{\lvert D_{p}}\Big)}}\] where, for any prime $\frak{p}\mid p$, we let $\psi_{\circ,\frak{p}}^{-1}:D_p\to O^\times$ be the unramified character determined by $\psi_{\circ,\frak{p}}^{-1}(\mathrm{Fr}_p)=\chi_\circ(\mathrm{Fr}_\frak{p})$. In particular, $\psi_{\circ,\frak{p}_1}\cdot\psi_{\circ,\frak{p}_2}=\big(\psi_\circ\big)_{\lvert D_p}$. \end{proposition} \begin{proof} Let $\mathbf{V}_\mathscr{G}^+$ denote the subvector space of $\mathbf{V}_\mathscr{G}$ coming from the upper left corner in equation (\ref{ordinary-G}), and $\mathbf{V}_\mathscr{G}^- = \mathbf{V}_\mathscr{G}/\mathbf{V}_\mathscr{G}^+$ the dimension 1 quotient. We fix $\theta\in\Gamma_\mathbb{Q}\setminus\Gamma_L$ and define \[ \mathrm{Fil}^2\mathrm{As}(\mathbf{V}_\mathscr{G}) := \mathbf{V}_\mathscr{G}^+ \otimes (\mathbf{V}_\mathscr{G}^{+})^{\theta}\quad\text{and}\quad \mathrm{Fil}^1\mathrm{As}(\mathbf{V}_\mathscr{G}) := \mathbf{V}_\mathscr{G}^+ \otimes (\mathbf{V}_\mathscr{G})^{\theta} + \mathbf{V}_\mathscr{G} \otimes (\mathbf{V}_\mathscr{G}^{+})^{\theta}, \] then $\mathrm{Fil}^2\mathrm{As}(\mathbf{V}_\mathscr{G})$ has dimension 1 over $\mathbf{Q}_\mathscr{G}$ while $\mathrm{Fil}^1\mathrm{As}(\mathbf{V}_\mathscr{G})$ has dimension 3. By the description in (\ref{ordinary-G}), $D_p$ acts on $\mathrm{Fil}^2\mathrm{As}(\mathbf{V}_\mathscr{G})$ through the character $\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\big(\psi_\circ^{-1}\cdot\boldsymbol{\eta}_\mathbb{Q}^2\cdot\eta_\mathbb{Q}^2\big)_{\lvert D_p}$, while it acts on the zero-th graded piece $\mathrm{Gr}^0\mathrm{As}(\mathbf{V}_\mathscr{G}) =\mathbf{V}^-_\mathscr{G} \otimes (\mathbf{V}_\mathscr{G}^{-})^{\theta}$ through $\boldsymbol{\Psi}_{\mathscr{G},p}$. Finally, the first graded piece is \[ \mathrm{Gr}^1\mathrm{As}(\mathbf{V}_\mathscr{G})=\mathbf{Q}_\mathscr{G}\left(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}^{-1}\boldsymbol{\Psi}^\theta_{\mathscr{G},\frak{p}_1}\cdot\det(\boldsymbol{\varrho}_\mathscr{G})_{\lvert D_{\frak{p}_1}}\right) \oplus \mathbf{Q}_\mathscr{G}\left(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}(\boldsymbol{\Psi}^{\theta}_{\mathscr{G},\frak{p}_1})^{-1}\cdot\det(\boldsymbol{\varrho}_\mathscr{G})^\theta_{\lvert D_{\frak{p}_1}}\right).\] Using the identification $D_{\frak{p}_1}=D_p$ we see that $\boldsymbol{\Psi}^\theta_{\mathscr{G},\frak{p}_1}=\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}$ and that \[ \det(\boldsymbol{\varrho}_\mathscr{G})_{\lvert D_{\frak{p}_1}}=\big(\psi^{-1}_{\circ,\frak{p}_1}\cdot\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\big)_{\lvert D_{p}},\qquad \det(\boldsymbol{\varrho}_\mathscr{G})_{\lvert D_{\frak{p}_1}}^\theta= \big(\psi^{-1}_{\circ,\frak{p}_2}\cdot\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\big)_{\lvert D_{p}}. \] \end{proof}
\begin{definition}\label{selfdual remark} Let $\mathrm{V}_{\mathsf{f}_\circ}$ denote the representation attached to the elliptic cuspform $\mathsf{f}_\circ$ and let
\[ \mathrm{As}(\mathbf{V}_\mathscr{G})^\dagger:=\mathrm{As}(\mathbf{V}_\mathscr{G})(\theta_\mathbb{Q}\cdot\boldsymbol{\eta}_\mathbb{Q}^{-1}).
\] The big Galois representation
\begin{equation} \mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}:=\mathrm{As}(\mathbf{V}_\mathscr{G})^\dagger(-1)\otimes\mathrm{V}_{\mathsf{f}_\circ}.
\end{equation}
interpolates Kummer self-dual Galois representations. \end{definition}
\noindent The explicit realization of $\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}$ in the cohomology of a tower of Shimura threefolds with increasing level at $p$ plays a crucial role in the understanding of the arithmetic applications of Hirzebruch--Zagier classes. We conclude this section by analyzing the ordinary filtration at $p$.
\begin{lemma}\label{GaloisStructure}
The restriction to a decomposition group at $p$ of the Galois representation $\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}$ is endowed with a $4$-steps $D_p$-stable filtration with graded pieces given by
\[\resizebox{\displaywidth}{!}{\xymatrix{
\mathrm{Gr}^0\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=\mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta_\mathbb{Q}^{-1}\big)_{\lvert D_p}\Big),\qquad \mathrm{Gr}^3\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=\mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot \delta_p(\mathsf{f}_\circ)^{-1} \cdot\big(\boldsymbol{\eta}_\mathbb{Q}\cdot\eta^2_\mathbb{Q}\cdot\theta_\mathbb{Q}\big)_{\lvert D_p}\Big),
}}\]
\[\resizebox{\displaywidth}{!}{\xymatrix{
\mathrm{Gr}^1\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}= \mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}\cdot\delta_p(\mathsf{f}_\circ)^{-1}\cdot\big(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\theta_\mathbb{Q}\cdot\psi_\circ\big)_{\lvert D_p}\Big) \oplus \mathbf{Q}_\mathscr{G}\Big(\big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\big)^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\psi^{-1}_{\circ,\frak{p}_1}\big)_{\lvert D_{p}}\Big) \oplus \mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\psi^{-1}_{\circ,\frak{p}_2}\big)_{\lvert D_p}\Big), }}\] \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathrm{Gr}^2\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}= \mathbf{Q}_\mathscr{G}\Big(\big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\big)^{-1}\cdot\delta_p(\mathsf{f}_\circ)^{-1}\cdot\big(\psi_{\circ,\frak{p}_2}\cdot\varepsilon_{\mathbb{Q}}\big)_{\lvert D_p}\Big) \oplus \mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\cdot\delta_p(\mathsf{f}_\circ)^{-1}\cdot\big(\psi_{\circ,\frak{p}_1}\cdot\varepsilon_{\mathbb{Q}}\big)_{\lvert D_p}\Big) \oplus \mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\cdot \psi_\circ^{-1}\big)_{\lvert D_p}\Big). }}\] \end{lemma} \begin{proof}
As $\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{V}_{\mathsf{f}_\circ}$, its graded pieces are given by \[
\mathrm{Gr}^0\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=\mathrm{Gr}^0\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ},
\]
\[
\mathrm{Gr}^1\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=[\mathrm{Gr}^0\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{Gr}^1\mathrm{V}_{\mathsf{f}_\circ}]\oplus [\mathrm{Gr}^1\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}],
\]
\[
\mathrm{Gr}^2\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=[\mathrm{Gr}^1\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{Gr}^1\mathrm{V}_{\mathsf{f}_\circ}]\oplus [\mathrm{Gr}^2\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}],
\]
\[
\mathrm{Gr}^3\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}=\mathrm{Gr}^2\mathrm{As}(\mathbf{V}_\mathscr{G})(\boldsymbol{\eta}_\mathbb{Q}^{-1}\cdot\eta^{-1}_\mathbb{Q})\otimes\mathrm{Gr}^1\mathrm{V}_{\mathsf{f}_\circ}.
\]
Hence, the statement follows from Proposition $\ref{AsaiFil}$ and a direct computation. \end{proof}
\begin{definition}\label{somedefgal}
We define the direct summand $\mathbf{V}^{\mathrm{f}_\circ}_\mathscr{G}$ of $\mathrm{Gr}^2\mathbf{V}^\dagger_{\mathscr{G},\mathsf{f}_\circ}$ by setting
\[
\mathbf{V}^{\mathrm{f}_\circ}_\mathscr{G}:=\mathbf{Q}_\mathscr{G}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\cdot \psi_\circ^{-1}\big)_{\lvert D_p}\Big).
\]
\end{definition}
\subsection{Geometric realization}\label{geomrealiz}
\begin{definition} For the compact open $K=V_{1}(M\cal{O}_L)$ we define the \emph{anemic} Hecke algebra \[ \widetilde{\mathbf{h}}^\mathrm{n.o.}_L(K;O)\subseteq\mathbf{h}^\mathrm{n.o.}_L(K;O) \] to be the $O$-subalgebra generated by the Hecke operators $\mathbf{T}(y)$ with $y_M=1$. \end{definition} \noindent Given the Hida family $\mathscr{G}:\mathbf{h}^\mathrm{n.o.}_L(K;O)_{\boldsymbol{\chi}}\to \mathbf{I}_\mathscr{G}$ we denote by \begin{equation}\label{heartform} \mathscr{G}_{\mbox{\tiny $\heartsuit$}}:\widetilde{\mathbf{h}}^\mathrm{n.o.}_L(K;O)_{\boldsymbol{\chi}}\rightarrow\mathbf{I}_\mathscr{G} \end{equation} its restriction to the anemic Hecke algebra. It is used to single out the part of Hecke modules most relevant for our applications.
\begin{definition}
We define $\boldsymbol{\cal{V}}_\mathscr{G}(M)$ to be the projective limit of
\[
\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha:= \cal{V}_\alpha \otimes_{\mathscr{G}_{\mbox{\tiny $\heartsuit$}}}\mathbf{I}_\mathscr{G}(\theta_\mathbb{Q}\cdot\boldsymbol{\eta}_\mathbb{Q}^{-1})\qquad \forall\ \alpha\ge1
\]
with respect to the trace maps $(\varpi_2)_*$ where $\cal{V}_\alpha:=e_\mathrm{n.o.}\mathrm{H}^2_!\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O(2)\big)$, and set
\[
\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M):= \boldsymbol{\cal{V}}_\mathscr{G}(M)(-1)\otimes \mathrm{V}_{\mathsf{f}_\circ}(p)
\]
where $\mathrm{V}_{\mathsf{f}_\circ}(p)$ denotes the $\mathsf{f}_\circ$-isotypic quotient of $\mathrm{H}^1_{\acute{\mathrm{e}}\mathrm{t}}\big(X_0(p)_{\bar{\mathbb{Q}}},O(1)\big)$.
\end{definition}
\noindent Let $\boldsymbol{\delta}: \Gamma_\mathbb{Q}\to O\llbracket \mathbb{G}_L(K)\rrbracket^\times$ be the projective limit of the Galois characters $\delta_\alpha$ defined in ($\ref{GalCal}$), then there is a natural surjection \begin{equation}
\xymatrix{ \mathrm{pr}_{\mathscr{G},\mathsf{f}_\circ}:\boldsymbol{\cal{V}}_\infty(M)\ar@{->>}[r]& \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M) }\end{equation} because the Galois character $\mathscr{G}_{\mbox{\tiny $\heartsuit$}}\circ\boldsymbol{\delta}:\Gamma_\mathbb{Q}\to(\mathbf{I}_\mathscr{G})^\times$ satisfies \[\begin{split} \mathscr{G}_{\mbox{\tiny $\heartsuit$}}\circ\boldsymbol{\delta}(\sigma_a) &= \phi_{\boldsymbol{\chi}}\big([(1,a^{-1}),(a,a^{-1})]\big)\\
&=
\theta_\mathbb{Q}(a)\cdot\boldsymbol{\eta}_\mathbb{Q}^{-1}(a). \end{split} \qquad \forall\ a\in\mathbb{Z}_p^\times \]
\begin{theorem}\label{wishingDimitrov}
Suppose that $\varrho$ is residually not solvable, then for all odd primes $p$ the Galois module $\boldsymbol{\cal{V}}_\mathscr{G}(M)$ is finite free over $\mathbf{I}_\mathscr{G}$ and it satisfies exact control
\[
\boldsymbol{\cal{V}}_\mathscr{G}(M)\otimes_{\boldsymbol{\Lambda}}\Lambda_\alpha\cong \boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha\qquad \forall\ \alpha\ge1.
\] \end{theorem} \begin{proof}
This can be proved as (\cite{DimitrovAutSym}, Theorem 3.8 (i),(ii)). The key new input is the recent work of Caraiani--Tamiozzo (\cite{Caraiani-Tamiozzo}, Theorem 7.1.1 $\&$ Corollary 7.1.2). \end{proof}
\begin{corollary}\label{correctspec}
Suppose $p>2$ and $\varrho$ residually not solvable, then the Galois representation $\boldsymbol{\cal{V}}_\mathscr{G}(M)$ is isomorphic to a direct sum of copies of $\mathrm{As}(\mathbf{V}_\mathscr{G})^\dagger$. Moreover, the specialization $\boldsymbol{\cal{V}}_\mathscr{G}(M)(-1)\otimes_{\mathrm{P}_\circ}E_\wp$ is a sum of copies of the Artin representation $\mathrm{As}(\varrho)$. \end{corollary} \begin{proof}
It follows by Theorem \ref{wishingDimitrov} and a comparison of traces (\cite{Mazur}, Section 5). \end{proof}
\begin{proposition}\label{somekindoffil} If the Jordan--Holder factors of the residual representation $\mathrm{As}(\mathbf{V}_\mathscr{G})\otimes_{\mathbf{I}_\mathscr{G}}\overline{\mathbb{F}}_p$ are all distinct, then the $\mathbf{I}_{\mathscr{G}}$-module $\boldsymbol{\cal{V}}_\mathscr{G}(M)$ is endowed with a three step $\Gamma_{\mathbb{Q}_p}$-stable filtration \[ \boldsymbol{\cal{V}}_\mathscr{G}(M) \supset \mathrm{Fil}^1\boldsymbol{\cal{V}}_\mathscr{G}(M) \supset \mathrm{Fil}^2\boldsymbol{\cal{V}}_\mathscr{G}(M) \supset 0. \] Furthermore, there are $\mathbf{I}_\mathscr{G}$-modules $\mathbf{A}, \mathbf{B}, \mathbf{B}', \mathbf{C}$ with trivial $\Gamma_{\mathbb{Q}_p}$-action such that \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathrm{Gr}^0\boldsymbol{\cal{V}}_\mathscr{G}(M)=\mathbf{A}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}\cdot (\theta_\mathbb{Q}\cdot\boldsymbol{\eta}_\mathbb{Q}^{-1})_{\lvert D_p}\Big), \qquad \mathrm{Gr}^2\boldsymbol{\cal{V}}_\mathscr{G}(M)=\mathbf{C}\Big(\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\big(\psi_\circ^{-1}\cdot\varepsilon_\mathbb{Q}\cdot\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\big)_{\lvert D_p}\Big), }}\] and the first graded piece is an extension \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathbf{B}\Big(\big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\big)^{-1}\cdot\big(\psi^{-1}_{\circ,\frak{p}_1}\cdot\varepsilon_\mathbb{Q}\big)_{\lvert D_{p}}\Big)\ar@{^{(}->}[r]& \mathrm{Gr}^1\boldsymbol{\cal{V}}_\mathscr{G}(M)\ar@{->>}[r]& \mathbf{B}'\Big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\cdot\big(\psi^{-1}_{\circ,\frak{p}_2}\cdot\varepsilon_\mathbb{Q}\big)_{\lvert D_{p}}\Big). }}\] \end{proposition} \begin{proof}
By (\cite{BL}, Chapter 3.4 $\&$ \cite{ES-Nekovar}, Theorem 5.20) there is a Galois equivariant injection
\begin{equation}\label{nekBL}
\xymatrix{ \boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha\ar@{^{(}->}[r]&
\bigoplus_\mathrm{P}\Big(\mathrm{As}(\mathrm{V}_{\mathscr{G}_\mathrm{P}})(\theta_\mathbb{Q}\cdot\chi_{\mbox{\tiny $\spadesuit,\mathrm{P}$}})\Big)^{\oplus n}
}\end{equation}
where the sum taken over arithmetic points $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight $(2t_L,t_L)$ and level $p^\alpha$ and where $n$ is the number of divisors of $M/\frak{Q}$.
The right-hand side of ($\ref{nekBL}$) is endowed with a nearly ordinary filtration (Proposition $\ref{AsaiFil}$). Therefore, if we set $\mathbf{I}_{\mathscr{G},\alpha}:=\mathbf{I}_{\mathscr{G}}\otimes\Lambda_\alpha$, the Galois module $\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha$ inherits a three step $\Gamma_{\mathbb{Q}_p}$-stable filtration consisting of $\mathbf{I}_{\mathscr{G},\alpha}$-modules \[ \boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha \supset \mathrm{Fil}^1\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha \supset \mathrm{Fil}^2\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha \supset 0. \] Moreover, there are $\mathbf{I}_{\mathscr{G},\alpha}$-modules $\mathbf{A}_\alpha, \mathbf{B}_\alpha, \mathbf{B}_\alpha', \mathbf{C}_\alpha$ with trivial $\Gamma_{\mathbb{Q}_p}$-action such that \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathrm{Gr}^0\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha=\mathbf{A}_\alpha\Big(\boldsymbol{\Psi}_{\mathscr{G},p}\cdot (\theta_\mathbb{Q}\cdot\boldsymbol{\eta}_\mathbb{Q}^{-1})_{\lvert D_p}\Big), \qquad \mathrm{Gr}^2\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha=\mathbf{C}_\alpha\Big(\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\big(\psi_\circ^{-1}\cdot\varepsilon_\mathbb{Q}\cdot\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\big)_{\lvert D_p}\Big), }}\] \[\resizebox{\displaywidth}{!}{\xymatrix{ 0\ar[r]& \mathbf{B}_\alpha\Big(\big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\big)^{-1}\cdot\big(\psi^{-1}_{\circ,\frak{p}_1}\cdot\varepsilon_\mathbb{Q}\big)_{\lvert D_{p}}\Big)\ar[r]& \mathrm{Gr}^1\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha\ar[r]& \mathbf{B}_\alpha'\Big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\cdot\big(\psi^{-1}_{\circ,\frak{p}_2}\cdot\varepsilon_\mathbb{Q}\big)_{\lvert D_{p}}\Big)\ar[r]&0. }}\] Since the Jordan--Holder factors of the residual representation $\mathrm{As}(\mathbf{V}_\mathscr{G})\otimes_{\mathbf{I}_\mathscr{G}}\overline{\mathbb{F}}_p$ are all distinct, the characters appearing in the graded pieces of $\boldsymbol{\cal{V}}_\mathscr{G}(M)_\alpha$ are all distinct. It follows that the Galois equivariant transition maps $\boldsymbol{\cal{V}}_\mathscr{G}(M)_{\alpha+1}\rightarrow \boldsymbol{\cal{V}}_\mathscr{G}(M)_{\alpha}$ respect the filtration and the claim follows by taking projective limits. \end{proof}
\begin{remark}
Recall the primitive eigenform $\mathsf{g}_\circ\in S_{t_L,t_L}(\mathfrak{Q};\chi_\circ;O)$ and denote by $\alpha_i,\beta_i$ the eigenvalues of $\varrho_{\mathsf{g}_\circ}(\mathrm{Fr}_{\mathfrak{p}_i})$ for $\mathfrak{p}_1,\mathfrak{p}_2$ the $\cal{O}_L$-prime ideals above $p$.
Thanks to Proposition $\ref{AsaiFil}$, the Jordan--Holder factors of the residual representation $\mathrm{As}(\mathbf{V}_\mathscr{G})\otimes_{\mathbf{I}_\mathscr{G}}\overline{\mathbb{F}}_p$ are all distinct if and only if the products $\alpha_1\alpha_2,\ \alpha_1\beta_2,\ \beta_1\alpha_2,\ \beta_1\beta_2$
are all distinct in $\overline{\mathbb{F}}_p$. \end{remark}
\noindent Let $K/\mathbb{Q}$ be a non-totally real $S_5$-quintic extension whose Galois closure contain a real quadratic field $L$. Recall there is a parallel weight one Hilbert eigenform $\mathsf{g}_K$ over $L$ such that $\mathrm{As}(\varrho_{\mathsf{g}_K})\cong\mathrm{Ind}_K^\mathbb{Q}\mathbbm{1}-\mathbbm{1}$ (\cite{MicAnalytic}, Corollary 4.2). \begin{proposition}\label{distinct eigenvalues quintic}
If $p\not=5$ is a rational prime unramified in $K$ whose Frobenius conjugacy class is that of $5$-cycles in $S_5$, then $p$ splits in $L$ and the residual $\Gamma_{\mathbb{Q}_p}$-representation $\left(\mathrm{As}(\varrho_{\mathsf{g}_K})\otimes\overline{\mathbb{F}}_p\right)_{\lvert D_p}$ has distinct Jordan--Holder factors. \end{proposition} \begin{proof}
The representation $\mathrm{As}(\varrho_{\mathsf{g}_K}):\Gamma_\mathbb{Q}\to\mathrm{GL}_4(O)$ factors through the Galois group of the Galois closure of $K$. As a representation of the symmetric group $S_5$ it is isomorphic to the irreducible $4$-dimensional direct summand of the permutation representation of $S_5$ acting on $5$ elements. If $p\not=5$ is a rational prime unramified in $K$ whose Frobenius conjugacy class is that of $5$-cycles in $S_5$, then the decomposition group $D_p$ is cyclic of order $5$ and we can conclude by noting that $\big(\mathrm{As}(\varrho_{\mathsf{g}_K})\otimes\overline{\mathbb{F}}_p\big)(\mathrm{Fr}_p)$ has four distinct eigenvalues given by the non-trivial $5$-th roots of unity. \end{proof}
\subsection{Hodge--Tate numerology} Let $\mathsf{g}$ be a primitive Hilbert cuspform over $L$ of weight $(\ell t_L,t_L)$ and normalize Hodge--Tate weights by stating that the character $\varepsilon_\mathbb{Q}$ has weight $-1$. Then for every $\cal{O}_L$-prime ideal $\mathfrak{p}\mid p$, the restriction $\big(\mathrm{V}_\mathsf{g}\big)_{\lvert D_\mathfrak{p}}$ has a $D_\frak{p}$-stable filtration \[\xymatrix{ 0\ar[r]& \mathrm{V}^+_\frak{p}\ar[r]& \big(\mathrm{V}_\mathsf{g}\big)_{\lvert D_\mathfrak{p}}\ar[r]& \mathrm{V}^-_\frak{p}\ar[r]&0 }\] where $\mathrm{V}^+_\frak{p}$ is a one-dimensional subrepresentation with Hodge--Tate weights equal to $ 1-\ell$ and $\mathrm{V}^-_\frak{p}$ is a one-dimensional quotient with Hodge--Tate weights equal to $0$ (\cite{pHida}, Introduction). Therefore, the twist \[ \mathrm{V}_\mathsf{g}^\dagger:= \mathrm{V}_\mathsf{g}\left(\eta_L^{\frac{2-\ell}{2}}\right) \]
has Hodge--Tate weights at $\frak{p}$ given by $\{ -\frac{\ell}{2}, \frac{\ell-2}{2}\}_{\tau\in\mathrm{I}_{L,\mathfrak{p}}}$. As in Propostion $\ref{AsaiFil}$, when $p\cal{O}_L=\mathfrak{p}_1\mathfrak{p}_2$ splits, the restriction at $p$ of the Asai representation
\[
\mathrm{As}\big(\mathrm{V}^\dagger_\mathsf{g}\big)_{\lvert D_p}=\big(\mathrm{V}^\dagger_\mathsf{g}\big)_{\lvert D_{\mathfrak{p}_1}}\otimes\big(\mathrm{V}^\dagger_\mathsf{g}\big)_{\lvert D_{\mathfrak{p}_2}},
\]
is endowed with a $3$-step $D_p$-stable filtration \[ \mathrm{As}\big(\mathrm{V}^\dagger_\mathsf{g}\big)\supset\mathrm{Fil}^1\mathrm{As}\big(\mathrm{V}^\dagger_\mathsf{g}\big)\supset\mathrm{Fil}^2\mathrm{As}\big(\mathrm{V}^\dagger_\mathsf{g}\big)\supset\{0\} \] whose graded pieces have dimension $1,2$ and $1$ respectively and whose Hodge--Tate weights are given in the following table.
\begin{center}
\begin{tabular}{lc}
\toprule
Graded piece & Hodge--Tate weights \\
\midrule
$\mathrm{Gr}^0\mathrm{As}(\mathrm{V}^\dagger_\mathsf{g})$
& $\ell-2$ \\
\midrule
$\mathrm{Gr}^1\mathrm{As}(\mathrm{V}^\dagger_\mathsf{g})$
& $(-1, -1)$ \\
\midrule
$\mathrm{Gr}^2\mathrm{As}(\mathrm{V}^\dagger_\mathsf{g})$
& $-\ell$ \\
\bottomrule
\end{tabular} \end{center}
\noindent Furthermore, as in Lemma $\ref{GaloisStructure}$, the restriction at $p$ of the $\Gamma_\mathbb{Q}$-representation \[ \mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}=\mathrm{As}\big(\mathrm{V}^\dagger_\mathsf{g}\big)(-1)\otimes\mathrm{V}_{\mathsf{f}_\circ} \] inherits a $4$-step $D_p$-stable filtration \[ \mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}\supset\mathrm{Fil}^1\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}\supset\mathrm{Fil}^2\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}\supset\mathrm{Fil}^3\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}\supset\{0\} \] with graded pieces of dimension $1,3,3$ and $1$ respetively and whose Hodge--Tate weights are presented in the following table.
\begin{center}
\begin{tabular}{lc}
\toprule
Graded piece & Hodge--Tate weights \\
\midrule
$\mathrm{Gr}^0\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}$
& $\ell-1$ \\ \midrule
$\mathrm{Gr}^1\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}$
& $(\ell-2,\ 0,\ 0)$ \\ \midrule
$\mathrm{Gr}^2\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}$
& $(-1,\ -1,\ 1-\ell)$ \\ \midrule
$\mathrm{Gr}^3\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}$
& $-\ell$ \\ \bottomrule
\end{tabular} \end{center}
\begin{corollary}\label{negative HT weights}
The H--T weights of $\mathrm{Fil}^2\mathrm{V}^\dagger_{\mathsf{g},\mathsf{f}_\circ}$ are all strictly negative if and only if $\ell\ge 2$. \end{corollary}
\subsection{Local cohomology classes} From now onward we suppose that the Jordan--Holder factors of the residual representation $\mathrm{As}(\varrho_\circ)\otimes_{O}\overline{\mathbb{F}}_p$ are all distinct. Then, by Proposition \ref{somekindoffil}, the Galois module $\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)$ has a $4$-step $D_p$-stable filtration, and the Galois action on its graded pieces is given by characters appearing in Lemma \ref{GaloisStructure}.
\begin{lemma}\label{fil2 inj} The natural map \[ \mathrm{H}^1(\mathbb{Q}_p,\mathrm{Fil}^2\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)) \longrightarrow \mathrm{H}^1(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)) \] induced by the $D_p$-stable filtration is an injection. \end{lemma} \begin{proof}
Lemma $\ref{GaloisStructure}$ implies that
\[\mathrm{H}^0(\mathbb{Q}_p,\mathrm{Gr}^1\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M))=0,\qquad \mathrm{H}^0(\mathbb{Q}_p,\mathrm{Gr}^0\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M))=0.\] Therefore taking the long exact sequence in Galois cohomology associated with the short exact sequence of $D_p$-modules
\[\xymatrix{
0\ar[r]& \mathrm{Gr}^1\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\ar[r]&\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\ar[r]&\mathrm{Gr}^0\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\ar[r]&0,
}\] we deduce that $\mathrm{H}^0(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2)=0$ and the lemma follows. \end{proof}
\subsubsection{Local properties.} \begin{definition} The $\mathbf{I}_\mathscr{G}$-adic cohomology class attached to the pair $(\mathscr{G}, \mathsf{f}_\circ)$ is the projection \[ \boldsymbol{\kappa}_{\mathscr{G},\mathsf{f}_\circ} := \mathrm{pr}_{\mathscr{G},\mathsf{f}_\circ}(\boldsymbol{\kappa}^\mathrm{n.o.}_\infty) \in \mathrm{H}^1\big(\mathbb{Q},\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\big). \] We denote its restriction at the decomposition group at $p$ by \[ \boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ):=\mathrm{loc}_p(\boldsymbol{\kappa}_{\mathscr{G},\mathsf{f}_\circ}) \in \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\big). \]
\end{definition}
\noindent Consider the following element of the ring $\mathbf{I}_\mathscr{G}$
\begin{equation}
\boldsymbol{\xi}_{\mathscr{G},\mathsf{f}_\circ} :=
\Big(1-\alpha_{\mathsf{f}_\circ}\chi_\circ(\mathfrak{p}_1)\mathscr{G}\big(\mathbf{T}(\varpi_{\mathfrak{p}_1})^{-1}\mathbf{T}(\varpi_{\mathfrak{p}_2})\big)\Big)
\Big(1-\alpha_{\mathsf{f}_\circ}\chi_\circ(\mathfrak{p}_2)\mathscr{G}\big(\mathbf{T}(\varpi_{\mathfrak{p}_1})\mathbf{T}(\varpi_{\mathfrak{p}_2})^{-1}\big)\Big).
\end{equation}
\begin{proposition}\label{classINsel}
We have \[ \boldsymbol{\xi}_{\mathscr{G},\mathsf{f}_\circ} \cdot \boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ) \in \mathrm{H}^1\big(\mathbb{Q}_p,\mathrm{Fil}^2\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\big). \] \end{proposition}
\begin{proof} We follow the argument of (\cite{DR2}, Proposition 2.2). The module \[ \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha=\boldsymbol{\cal{V}}_{\mathscr{G}}(M)_\alpha\otimes\mathrm{V}_{\mathsf{f}_\circ}(p) \] is realized as a quotient of $\mathrm{H}_{\acute{\mathrm{e}}\mathrm{t}}^{3}\big(Z^\dagger_\alpha(K)^c_{\bar{\mathbb{Q}}},O(2)\big)$ for $\iota:Z^\dagger_\alpha(K)\hookrightarrow Z^\dagger_\alpha(K)^c$ the smooth compactification appearing in the proof of Proposition $\ref{nullhomo}$. Let \[ \mathrm{H}^1_f\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big) \subseteq \mathrm{H}^1_g\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big) \subseteq \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big) \] denote the finite and geometric parts of the local Galois cohomology (\cite{Bloch--Kato}, Section 3). The purity conjecture for the monodromy filtration holds for the middle cohomology of Hilbert modular varieties by work of Saito and Skinner (\cite{p-adicHodgeHilbert}, \cite{SkinnerHilbert}), and hence for the middle cohomology of $Z^\dagger_\alpha(K)^c$. Therefore, the image of $\boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ)$ in $\mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big)$ lies in $\mathrm{H}^1_g\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big)=\mathrm{H}^1_f\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big)$ by (\cite{NekovarAbel--Jacobi}, Theorem 3.1).
\noindent Lemma $\ref{GaloisStructure}$ shows that $\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha$ is an ordinary $\Gamma_{\mathbb{Q}_p}$-representation in the sense of (\cite{Weston}, Section 1.1). Therefore, one can deduce that \[ \mathrm{H}^1_f\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big) = \ker \Big(\mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\big) \rightarrow \mathrm{H}^1\big(\mathrm{I}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha/\mathrm{Fil}^2\big)\Big) \] using Corollary $\ref{negative HT weights}$ and the argument in the proof of (\cite{Flach}, Lemma 2). In particular, the class $\boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ)$ has trivial image in the projective limit \[ \underset{\leftarrow, \alpha}{\lim}\ \mathrm{H}^1\Big(\mathrm{I}_p,\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha/\mathrm{Fil}^2\Big) \cong \mathrm{H}^1\Big(\mathrm{I}_p, \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\Big), \]
and consequently
the image of $\boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ)$ in $\mathrm{H}^1\big(\mathbb{Q}_p, \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)$ lies in \[ \mathrm{H}^1\Big(\mathbb{Q}_p^\mathrm{ur}/\mathbb{Q}_p,\big(\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)^{\mathrm{I}_p}\Big)\cong\ker\Big(\mathrm{H}^1\big(\mathbb{Q}_p, \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)\rightarrow \mathrm{H}^1\big(\mathrm{I}_p, \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)\Big). \] Taking into account Lemma $\ref{fil2 inj}$, we are left to show that \[ \boldsymbol{\xi}_{\mathscr{G},\mathsf{f}_\circ}\cdot \mathrm{H}^1\Big(\mathbb{Q}_p^\mathrm{ur}/\mathbb{Q}_p,\big(\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)^{\mathrm{I}_p}\Big)=0. \] By choosing the arithmetic Frobenius $\mathrm{Fr}_p$ we can make the identification $\mathrm{Gal}(\mathbb{Q}_p^\mathrm{ur}/\mathbb{Q}_p)\cong\widehat{\mathbb{Z}}$ and compute that \[ \mathrm{H}^1\Big(\mathbb{Q}_p^\mathrm{ur}/\mathbb{Q}_p,\big(\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)^{\mathrm{I}_p}\Big) \cong \big(\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)^{\mathrm{I}_p}/(\mathrm{Fr}_p-1). \] Considering the short exact sequence \[ 0\rightarrow \mathrm{Gr}^1\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M) \rightarrow \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2 \rightarrow \mathrm{Gr}^0\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M) \rightarrow 0 \] and the vanishing (Lemma \ref{GaloisStructure} $\&$ Proposition \ref{somekindoffil}) \[ \mathrm{H}^0\Big(\mathrm{I}_p, \mathrm{Gr}^0\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\Big)=\underset{\leftarrow,\alpha}{\lim}\ \mathrm{H}^0\Big(\mathrm{I}_p, \mathrm{Gr}^0\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)_\alpha\Big)=0, \] we deduce that $\big(\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\big)^{\mathrm{I}_p} = \big(\mathrm{Gr}^1\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\big)^{\mathrm{I}_p}$ sits in a short exact sequence \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathbf{D}\Big(\big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\big)^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\psi^{-1}_{\circ,\frak{p}_1}\big)_{\lvert D_{p}}\Big)\ar@{^{(}->}[r]& \Big(\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)/\mathrm{Fil}^2\Big)^{\mathrm{I}_p}\ar@{->>}[r]& \mathbf{D}'\Big(\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_1}\boldsymbol{\Psi}_{\mathscr{G},\frak{p}_2}^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\psi^{-1}_{\circ,\frak{p}_2}\big)_{\lvert D_p}\Big) }}\] where $\mathbf{D},\mathbf{D}'$ are $\mathbf{I}_\mathscr{G}$-modules with trivial Galois action. Therefore, if we set \[
\boldsymbol{\xi}_{\mathscr{G},\mathsf{f}_\circ} =
\Big(1-\alpha_{\mathsf{f}_\circ}\chi_\circ(\mathfrak{p}_1)\mathscr{G}\big(\mathbf{T}(\varpi_{\mathfrak{p}_1})^{-1}\mathbf{T}(\varpi_{\mathfrak{p}_2})\big)\Big)
\Big(1-\alpha_{\mathsf{f}_\circ}\chi_\circ(\mathfrak{p}_2)\mathscr{G}\big(\mathbf{T}(\varpi_{\mathfrak{p}_1})\mathbf{T}(\varpi_{\mathfrak{p}_2})^{-1}\big)\Big) \]
the claim follows. \end{proof}
\noindent In light of Proposition $\ref{classINsel}$, from now on we replace the ring $\mathbf{I}_\mathscr{G}$ and the various modules over it with their respective localizations at the multiplicative set generated by $\boldsymbol{\xi}_{\mathscr{G},\mathsf{f}_\circ}$. Observe that the arithmetic specializations of $\boldsymbol{\xi}_{\mathscr{G},\mathsf{f}_\circ}$ never vanish: for any $\mathrm{P}\in\cal{A}(\mathbf{I}_\mathscr{G})$, the specialization $\mathrm{P}\circ\mathscr{G}\big(\mathbf{T}(\varpi_{\mathfrak{p}_1})^{-1}\mathbf{T}(\varpi_{\mathfrak{p}_2})\big)$ is an algebraic integer with complex absolute value 1, whereas $\alpha_{\mathsf{f}_\circ}$ has complex absolute value $p^{1/2}$.
\begin{corollary}\label{placing the class} With the above convention in place, \[ \boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ) \in \mathrm{H}^1\big(\mathbb{Q}_p,\mathrm{Fil}^2\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\big). \] \end{corollary}
\begin{definition}\label{TwistedGradedPiece}
Let $\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}}(M)
:=
\mathrm{Fil}^2 \boldsymbol{\cal{V}}_\mathscr{G}(M) (-1) \otimes\mathrm{Gr}^0\mathrm{V}_{\mathrm{f}_\circ}(p)$ and denote by \[ \boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G}) \in \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}}(M)\big) \] the image of $\boldsymbol{\kappa}_p(\mathscr{G},\mathsf{f}_\circ)$ under the natural surjection $\mathrm{Fil}^2\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M) \twoheadrightarrow \boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}}(M)$. \end{definition} \begin{remark} The local Galois group $\Gamma_{\mathbb{Q}_p}$ acts on $\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}}(M)$ through the character
\[
\boldsymbol{\Psi}_{\mathscr{G},p}^{-1}\cdot\delta_p(\mathsf{f}_\circ)\cdot\big(\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q}\cdot \psi_\circ^{-1}\big)_{\lvert D_p}.
\] \end{remark}
\section{Big pairing}\label{generalizingOhta} \subsection{Algebra interlude} Recall $\Gamma=1+p\mathbb{Z}_p$, $\Gamma_\alpha=\Gamma/\Gamma^{p^\alpha}$, $\Lambda_\alpha=O[\Gamma_\alpha]$ and $E_\wp=\mathrm{Frac}(O)$. \begin{definition}
Let $\boldsymbol{\Pi}_\alpha= (\Lambda_\alpha\otimes_OE_\wp)$ and define
\begin{equation}
\boldsymbol{\Pi} :=\underset{\leftarrow,\alpha}{\lim}\ \boldsymbol{\Pi}_\alpha.
\end{equation} \end{definition}
\noindent Let $(M_\alpha)_\alpha$ be a projective system where each $M_\alpha$ is a $\boldsymbol{\Pi}_\alpha$-module, then the projective limit $\mathbf{M} = \varprojlim_\alpha M_\alpha$ inherits a $\boldsymbol{\Pi}$-module structure. Any finite order character $\chi:\Gamma \rightarrow \mathbb{C}^\times_p$ factors through $\Gamma_\alpha$ for some $\alpha\ge1$, and determines a homomorphism \begin{equation} \chi:\mathbf{M}\longrightarrow M_\alpha\otimes_{\chi}\mathbb{C}_p, \qquad x\mapsto \chi(x). \end{equation}
\begin{lemma}\label{lem: vanishing criterion} Let $\mathbf{M} = \varprojlim_\alpha M_\alpha$ be a projective limit where each $M_\alpha$ is a flat $\boldsymbol{\Pi}_\alpha$-module. Then $x\in\mathbf{M}$ equals zero if and only if $\chi(x) = 0$ for every finite order character $\chi: \Gamma \to \mathbb{C}_p^\times$. \end{lemma} \begin{proof}
Let $x=(x_\alpha)_\alpha\in\mathbf{M}$ be such that $\chi(x) = 0$ for every finite order character $\chi: \Gamma \to \mathbb{C}_p^\times$. By definition, this means that for every $\alpha\ge1$ \[ \chi(x_\alpha)=0\qquad\forall\ \chi:\Gamma_\alpha\to\mathbb{C}_p^\times. \]
Then flatness of $M_\alpha$ and the injectivity of the homomorphism $\oplus_\chi: \boldsymbol{\Pi}_\alpha \hookrightarrow \oplus_\chi\mathbb{C}_p$, where the sum is taken over all the characters of $\Gamma_\alpha$, implies the injectivity of \[ \oplus_\chi : M_\alpha \hookrightarrow \oplus_\chi (M_\alpha\otimes_{\chi}\mathbb{C}_p). \] We deduce $x_\alpha=0$ for every $\alpha\ge1$. \end{proof}
\noindent We are interested in generalizing the vanishing criterion of Lemma $\ref{lem: vanishing criterion}$ to $\boldsymbol{\Pi}$-modules of the form $\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}$ where $\mathbf{I}$ is an algebra which is finitely generated and flat over $\boldsymbol{\Lambda}$. We start with a couple of technical lemmas.
\begin{lemma}\label{lemma surjective} If $A\twoheadrightarrow B$ is a surjective morphism of $\boldsymbol{\Lambda}$-modules then \[ \underset{\leftarrow,\alpha}{\lim}\ \big(A\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha\big)\twoheadrightarrow \underset{\leftarrow,\alpha}{\lim}\ \big(B\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha\big). \] \end{lemma} \begin{proof}
Let $Q=\ker(A\to B)$, and $\kappa_\alpha=\ker(A\otimes\boldsymbol{\Pi}_\alpha\to B\otimes\boldsymbol{\Pi}_\alpha)$ for every $\alpha\ge1$, then there is a commutative diagram
\[\xymatrix{
Q\otimes\boldsymbol{\Pi}_\alpha\ar@{->>}[dr]\ar[drr]\ar@{->>}[dd] & & &\\
& \kappa_\alpha\ar@{^{(}->}[r]\ar@{.>>}[dd] & A\otimes\boldsymbol{\Pi}_\alpha\ar[r]\ar@{->>}[dd] & B\otimes\boldsymbol{\Pi}_\alpha\ar[r]\ar@{->>}[dd]& 0\\
Q\otimes\boldsymbol{\Pi}_{\alpha-1}\ar@{->>}[dr]\ar[drr] & & &\\
& \kappa_{\alpha-1}\ar@{^{(}->}[r] & A\otimes\boldsymbol{\Pi}_{\alpha-1}\ar[r] & B\otimes\boldsymbol{\Pi}_{\alpha-1}\ar[r]& 0\\
}\]
Since tensoring by $\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha$ is right exact, $Q\otimes\boldsymbol{\Pi}_\alpha\twoheadrightarrow\kappa_\alpha$ is surjective for every $\alpha\ge1$. Therefore the transition map $\kappa_\alpha\to\kappa_{\alpha-1}$ is surjective for every $\alpha>1$ and $\underset{\leftarrow,\alpha}{\lim^1}\ \kappa_\alpha=0$ as required. \end{proof}
\begin{lemma}\label{inverse limit} Suppose $\mathbf{I}$ is a finitely generated, flat $\boldsymbol{\Lambda}$-module. Then \[ \mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}\overset{\sim}{\longrightarrow}\underset{\leftarrow,\alpha}{\lim}\ \big(\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha) \] is a projective limit of flat $\boldsymbol{\Pi}_\alpha$-modules. \end{lemma} \begin{proof} Since $\boldsymbol{\Lambda}$ is Noetherian, $\mathbf{I}$ is finitely presented, and thus fits in an exact sequence of the form \begin{equation}\label{presentation} (\boldsymbol{\Lambda})^{\oplus m}\rightarrow (\boldsymbol{\Lambda})^{\oplus n}\rightarrow \mathbf{I}\rightarrow0. \end{equation} Tensoring is right exact, hence we deduce presentations for $\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}$ and $\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha$: \[ (\boldsymbol{\Pi})^{\oplus m}\rightarrow (\boldsymbol{\Pi})^{\oplus n}\rightarrow \mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}\rightarrow0, \qquad (\boldsymbol{\Pi}_\alpha)^{\oplus m}\rightarrow (\boldsymbol{\Pi}_\alpha)^{\oplus n}\rightarrow \mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha\rightarrow0. \] Now it suffices to show that \[ (\boldsymbol{\Pi})^{\oplus m}\rightarrow (\boldsymbol{\Pi})^{\oplus n}\rightarrow \underset{\leftarrow,\alpha}{\lim}\ (\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha)\rightarrow0 \] is exact. The surjectivity of $(\boldsymbol{\Pi})^{\oplus n}\twoheadrightarrow \varprojlim_\alpha\ (\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha)$ follows from Lemma $\ref{lemma surjective}$, so we are left to prove exactness at the middle of the sequence. Let $Q=\ker(\boldsymbol{\Lambda}^{\oplus n}\to\mathbf{I})$, then $Q$ comes equipped with a surjection $\boldsymbol{\Lambda}^{\oplus m}\twoheadrightarrow Q$ because ($\ref{presentation}$) is a presentation. Moreover, \[ Q\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha=\ker\big((\boldsymbol{\Pi}_\alpha)^{\oplus n}\rightarrow \mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha\big) \] because $\mathrm{Tor}_1^{\boldsymbol{\Lambda}}(\mathbf{I},\boldsymbol{\Pi}_\alpha)=0$ as $\mathbf{I}$ is a flat $\boldsymbol{\Lambda}$-module. Therefore, \[ \underset{\leftarrow,\alpha}{\lim}\ (Q\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha)=\ker\big( (\boldsymbol{\Pi})^{\oplus n}\twoheadrightarrow \underset{\leftarrow,\alpha}{\lim}\ (\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha)\big) \] and we are left to show that the induced map $\boldsymbol{\Pi}^{\oplus m}\to \varprojlim_\alpha\ (Q\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}_\alpha)$ is surjective, which it is because of Lemma $\ref{lemma surjective}$. \end{proof}
\noindent Lemma $\ref{inverse limit}$ shows that the vanishing criterion of Lemma $\ref{lem: vanishing criterion}$ applies to any $\boldsymbol{\Pi}$-module of the form $\mathbf{I}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}$ for $\mathbf{I}$ a finitely generated and flat $\boldsymbol{\Lambda}$-module. When comparing the automorphic and motivic $p$-adic $L$-functions in Section $\ref{Sect: Comparison}$ we will be interested in the case $\mathbf{I}=\mathbf{I}_\mathscr{G}$ and we will have information about the specializations at arithmetic points of weight $2$. Observe that if $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ has weight $(2t_L,t_L)$ and level $p^\alpha$ then it induces a map \[ \mathrm{P}:\mathbf{I}_\mathscr{G}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Pi}\longrightarrow\mathbb{C}_p,\qquad x\mapsto \mathrm{P}(x). \] The following vanishing criterion will be of crucial importance.
\begin{theorem}\label{cor: vanishing criterion}
An element $x \in \mathbf{I}_\mathscr{G} \otimes_{\boldsymbol{\Lambda}} \boldsymbol{\Pi}$ equals zero if and only if $\mathrm{P}(x) = 0$ for all arithmetic points $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight $(2t_L,t_L)$. \end{theorem} \begin{proof}
By Lemma $\ref{inverse limit}$, $\mathbf{I}_\mathscr{G} \otimes_{\boldsymbol{\Lambda}} \boldsymbol{\Pi} \overset{\sim}{\to} \varprojlim_\alpha (\mathbf{I}_\mathscr{G} \otimes_{\boldsymbol{\Lambda}} \boldsymbol{\Pi}_\alpha)$, thus an element $x\in \mathbf{I}_\mathscr{G} \otimes_{\boldsymbol{\Lambda}} \boldsymbol{\Pi}$ is equal to zero if and only if its image $x_\alpha\in \mathbf{I}_\mathscr{G} \otimes_{\boldsymbol{\Lambda}} \boldsymbol{\Pi}_\alpha$ is equal to zero for every $\alpha\ge1$. As in the proof of Lemma $\ref{lem: vanishing criterion}$ there is an injection \[ \mathbf{I}_\mathscr{G} \otimes_{\boldsymbol{\Lambda}} \boldsymbol{\Pi}_\alpha\hookrightarrow\bigoplus_{\chi}\ \big(\mathbf{I}_\mathscr{G}\otimes_{\boldsymbol{\Lambda},\chi}\mathbb{C}_p\big) \] because $\mathbf{I}_\mathscr{G}$ is $\boldsymbol{\Lambda}$-flat. Let $\frak{p}_\chi$ be the kernel of $\chi:\boldsymbol{\Lambda}\to\mathbb{C}_p$ and denote by $\boldsymbol{\Lambda}_{\mathfrak{p}_\chi}$ the localization, then $\mathbf{I}_\mathscr{G}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Lambda}_{\mathfrak{p}_\chi}$ is finite \'etale over $\boldsymbol{\Lambda}_{\mathfrak{p}_\chi}$ because Hida families are finite \'etale over arithmetic points of weight $\ge 2t_L$ (\cite{nearlyHida}). Therefore \[ (\mathbf{I}_\mathscr{G}\otimes_{\boldsymbol{\Lambda}}\boldsymbol{\Lambda}_{\mathfrak{p}_\chi})\otimes_{\boldsymbol{\Lambda}_{\mathfrak{p}_\chi}}\mathbb{C}_p\cong \mathbf{I}_\mathscr{G}\otimes_{\boldsymbol{\Lambda},\chi}\mathbb{C}_p \] is a finite product of copies of $\mathbb{C}_p$, indexed by the arithmetic points $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ above $\mathfrak{p}_\chi$. In summary \[ \mathbf{I}_\mathscr{G}\otimes_{\boldsymbol{\Lambda},\chi}\mathbb{C}_p\cong\bigoplus_{\mathrm{P}}\mathbb{C}_p \] the sum over the arithmetic points $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight $(2t_L,t_L)$ and the claim follows. \end{proof}
\subsection{Generalizing Ohta} \begin{definition}\label{def 2 I-adic forms}
For a $\Lambda$-algebra $\mathbf{I}$ we define the space of ordinary $\mathbf{I}$-adic cuspforms of tame level $K$ and character $\boldsymbol{\chi}$ to be
\[
\overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\mathbf{I}):=\mathrm{Hom}_{\boldsymbol{\Lambda}\mbox{-}\mathrm{mod}}\big(\mathbf{h}^\mathrm{n.o.}_L(K;O)\otimes_{\phi_{\boldsymbol{\chi}}}\boldsymbol{\Lambda}, \mathbf{I}\big).
\] \end{definition} \noindent Let $\mathscr{G}\in \overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\boldsymbol{\Lambda})$ then exact control for the nearly ordinary Hecke algebra implies that extending scalars for $\mathscr{G}$ to $\Lambda_\alpha$ produces a $\Lambda_\alpha$-module homomorphism \[ \underline{\mathscr{G}}_\alpha:\mathrm{h}^\mathrm{ord}_{2t_L,t_L}(K_{\diamond,t}(p^\alpha);O)\longrightarrow \Lambda_\alpha. \] Since $\Lambda_\alpha=\bigoplus_{\sigma\in\Gamma_\alpha}O\cdot[\sigma]$, we may write $\underline{\mathscr{G}}_\alpha=\bigoplus_{\sigma \in \Gamma_\alpha}\mathscr{G}_{\alpha,\sigma^{-1}}\cdot[\sigma]$, and the $\Lambda_\alpha$-linearity implies $\mathscr{G}_{\alpha,\sigma^{-1}}(-)=\mathscr{G}_{\alpha,1}([\sigma]-)$. In order to lighten the notation we write $\mathscr{G}_\alpha$ for the Hilbert cuspform \[ \mathscr{G}_\alpha:=\mathscr{G}_{\alpha,1}\in S^\mathrm{ord}_{2t_L,t_L}(K_{\diamond,t}(p^\alpha);O). \] The compatibility \[\xymatrix{ \mathrm{h}^\mathrm{ord}_{2t_L,t_L}(K_{\diamond,t}(p^{\alpha+1});O)\ar[d]\ar[rr]^{\underline{\mathscr{G}}_{\alpha+1}}& &\Lambda_{\alpha+1}\ar[d]\\ \mathrm{h}^\mathrm{ord}_{2t_L,t_L}(K_{\diamond,t}(p^\alpha);O)\ar[rr]^{\underline{\mathscr{G}}_\alpha}& & \Lambda_\alpha }\] translates into \[ \sum_{\sigma\in\ker(\Gamma_{\alpha+1}\to\Gamma_\alpha)}\mathscr{G}_{\alpha+1}([\sigma]-)=\mathscr{G}_\alpha(-), \] or equivalently, \begin{equation}\label{tracecompatible} (\mu)_*\mathscr{G}_{\alpha+1} = (\pi_1)^*\mathscr{G}_{\alpha}. \end{equation} Recall that in Section \ref{ontheconjectures} of the introduction we set \[ \cal{V}_\alpha:=e_\mathrm{n.o.}\mathrm{H}^2_!\big(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O(2)\big). \] Its $0$-th graded piece $\mathrm{Gr}^0\cal{V}_\alpha$ of the \'etale cohomology, with respect to the ordinary filtration, is an unramified $\Gamma_{\mathbb{Q}_p}$-representation, therefore \[ \mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\alpha\big):=\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes\widehat{\mathbb{Z}}_p^\mathrm{ur}\big)^{\Gamma_{\mathbb{Q}_p}} \] is a lattice in the de-Rham cohomology $\mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes_OE_\wp\big)$. The following conjectures compares the integral structures for the de Rham cohomology of Hilbert modular surfaces coming from integral \'etale cohomology, and ordinary Hilbert modular forms of parallel weight two.
\begin{conjecture}\label{wishingOhta}
For every large enough prime $p$ and every $\alpha\ge1$ the image of the natural map
\[
S^\mathrm{ord}_{2t_L,t_L}\big(K_{\diamond,t}(p^\alpha);O\big)
\longrightarrow
\mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes_OE_\wp\big)
\]
is contained in the lattice $\mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\alpha\big)$. \end{conjecture}
\noindent We consider the projective limits
\begin{equation}
\mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\infty\big):=
\underset{\leftarrow, \varpi_{2}}{\lim}\ \mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\alpha\big),\qquad \mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)
:=
\underset{\leftarrow, \varpi_{2}}{\lim}\ \mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes_OE_\wp\big).
\end{equation}
\begin{lemma}\label{BigPeriodMap}
There is a Hecke-equivariant morphism
\[
\raisebox{\depth}{\scalebox{1}[-1]{$ \Omega $}}_\infty:\overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\mathbf{I}_\mathscr{G})
\longrightarrow
\mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}.
\]
Further, assuming Conjecture \ref{wishingOhta}, the homomorphism $\raisebox{\depth}{\scalebox{1}[-1]{$ \Omega $}}_\infty$ takes values in $\mathbb{D}\big(\mathrm{Gr}^0\cal{V}_\infty\big)\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}$.
\end{lemma} \begin{proof}
Pushing forward equation ($\ref{tracecompatible}$) along $\pi_2$ gives $(\varpi_{2})_*\mathscr{G}_{\alpha+1} = U_p\mathscr{G}_{\alpha}$. Hence, the collection $(U_p^{-\alpha}\mathscr{G}_{\alpha})_\alpha$ is compatible under projection along $\varpi_{2}$. The homomorphism
\[
\overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\boldsymbol{\Lambda}) \longrightarrow \underset{\leftarrow,\varpi_2}{\lim}\ S^\mathrm{ord}_{2t_L,t_L}(K_{\diamond,t}(p^\alpha);O),
\qquad
\mathscr{G}\mapsto\big(U_p^{-\alpha}\mathscr{G}_{\alpha}\big)_\alpha
\] combined with the natural map $S^\mathrm{ord}_{2t_L,t_L}(K_{\diamond,t}(p^\alpha);E_\wp)
\longrightarrow \mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\cal{V}_\alpha\otimes_OE_\wp\big)$
gives
\begin{equation}\label{periodlambda}
\raisebox{\depth}{\scalebox{1}[-1]{$ \Omega $}}_\infty:\overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\boldsymbol{\Lambda})
\longrightarrow
\mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big).
\end{equation} Since $\boldsymbol{\Lambda}$ is Noetherian and $\mathbf{h}^\mathrm{n.o.}_L(K;O)\otimes_{\phi_{\boldsymbol{\chi}}}\boldsymbol{\Lambda}$ is finite over $\boldsymbol{\Lambda}$, the Hecke algebra is also finitely presented as a $\boldsymbol{\Lambda}$-module. As $\mathbf{I}_\mathscr{G}$ is flat over $\boldsymbol{\Lambda}$ it follows that \[ \overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\mathbf{I}_\mathscr{G}) \simeq \overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\boldsymbol{\Lambda}) \otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}. \] The claimed Hecke equivariant morphism is obtained from ($\ref{periodlambda}$) by extension of scalars. \end{proof}
\begin{definition}\label{BigGPeriodMap}
For $M$ a $\widetilde{\mathbf{h}}^\mathrm{n.o.}_L(K;O)_{\boldsymbol{\chi}}$-module we denote by
\[
M[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]:=\big\{m\in M\lvert\ Tm=\mathscr{G}_{\mbox{\tiny $\heartsuit$}}(T)m\ \ \forall T\in \widetilde{\mathbf{h}}^\mathrm{n.o.}_L(K;O)\big\}
\]
its $\mathscr{G}_{\mbox{\tiny $\heartsuit$}}$-isotypic submodule, where $\mathscr{G}_{\mbox{\tiny $\heartsuit$}}$ was defined in \eqref{heartform}. With a small abuse of notation, we write $\mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]$ for $\big(\mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}\big)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]$ and consider the induced morphism
\[
\raisebox{\depth}{\scalebox{1}[-1]{$ \Omega $}}_\mathscr{G}:
\overline{\mathbf{S}}_L^{\text{ord}}(K;\boldsymbol{\chi};\mathbf{I}_\mathscr{G})[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]
\longrightarrow
\mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}].
\]
\end{definition}
\subsubsection{Big pairing.} Let $m_\alpha\in\mathbb{A}_{L,f}^\times$ be the image of the integer $Mp^\alpha$ and consider the matrix \begin{equation} \tau_\alpha = \begin{pmatrix} 0& -1\\ m_\alpha& 0 \end{pmatrix}\in G_L(\mathbb{A}_{f}). \end{equation} If we denote by $(-)^*:G_L(\mathbb{A}_{f})\to G_L(\mathbb{A}_{f})$ the involution $g^*=\det(g)^{-1}g$, then for every $\alpha\ge1$ there is a morphism $\lambda_\alpha:S(K_{\diamond,t}(p^\alpha))\to S(K_{\diamond,t}(p^\alpha))$ defined as the composition \begin{equation} \xymatrix{ S(K_{\diamond,t}(p^\alpha))\ar@{.>}[rr]^{\lambda_\alpha}\ar[rd]^{\mathfrak{T}_{\tau_\alpha}}& & S(K_{\diamond,t}(p^\alpha))\\ & S(^{\tau_\alpha} K_{\diamond,t}(p^\alpha))\ar[ru]^{(-)^*}& &. }\end{equation}
\begin{lemma}\label{AtkinLehnerGalois}
If $\sigma\in \Gamma_{\mathbb{Q}}$ corresponds to $a\in\mathbb{A}_{f}^\times$ under the global Artin map, then
\[ \langle \Delta(a^{-1}),1\rangle\circ\lambda_\alpha \circ \sigma = \sigma \circ \lambda_\alpha \] where $\Delta:\mathbb{A}_{f}\hookrightarrow \mathbb{A}_{L,f}$ is the natural inclusion. In particular, $\lambda_\alpha$ is defined over $\mathbb{Q}(\zeta_{Mp^\alpha})$. \end{lemma} \begin{proof}
This is a standard computation using the reciprocity laws of Shimura varieties at CM points. We will sketch a proof below following the notations of (\cite{Milne-SVI}, Chapters 12 and 13).
\noindent Let $x\in \frak{H}^2$ be a CM point defined over $E$. Suppose $\sigma \in \mathrm{Aut}(\mathbb{C}/E)$ and choose $s\in\mathbb{A}_E^\times$ so that $s$ corresponds to $\sigma_{\lvert\bar{\mathbb{Q}}}$ under the reciprocal of the global Artin map. Then we have the following commutative diagram
\[\xymatrix{
[x,h]\in S(^{\tau_\alpha} K_{\diamond,t}(p^\alpha))
\ar[d]^{\sigma}\ar[rrr]^{(-)^*} &&&
[x,\det(h)^{-1}h]\in S(K_{\diamond,t}(p^\alpha))\ar[d]^{\sigma}\\
[x,r_x(s)h]\in S(^{\tau_\alpha} K_{\diamond,t}(p^\alpha))
\ar[rrr]^{(-)^*\circ\langle \det(r_x(s)),1\rangle} &&&
[x,\det(h)^{-1}r_x(s)h]\in S(K_{\diamond,t}(p^\alpha)) }\] where $r_x(s) = N_{E/\mathbb{Q}}(\mu_x(s_f))$ as defined as in (\cite{Milne-SVI}, Chapter 12, equation (52)), and $\mu_x:\mathbb{G}_m\rightarrow G$ is the $E$-rational cocharacter of $G$ characterized by $\mu_x(z)=h_{x,\mathbb{C}}(z,1)$ for $z\in\mathbb{C}$. In the case of Hilbert modular varieties, $\det(\mu_x)$ is the natural embedding $\mathbb{G}_m\hookrightarrow \mathrm{Res}_{L/\mathbb{Q}}\mathbb{G}_m$ and thus $\det(r_x(s))=\mathrm{N}_{E/\mathbb{Q}}(s_f)\in \mathbb{A}_{f}^\times$.
By functoriality of class field theory, $N_{E/\mathbb{Q}}(s_f)\in \mathbb{A}_{f}^\times$ corresponds to $\sigma_{\lvert\bar{\mathbb{Q}}}$, seen inside $\Gamma_{\mathbb{Q}}$. Therefore for any $\sigma\in \Gamma_\mathbb{Q}$, corresponding to $a\in\mathbb{A}_{f}^\times$ under the global Artin map and fixing the reflex field of a CM point, we have \[ \langle \Delta(a^{-1}),1\rangle\circ\lambda_\alpha \circ \sigma = \sigma \circ \lambda_\alpha \] because points of the form $[x,h]$ for a fixed $x$ are Zariski dense (\cite{Milne-SVI}, Lemma 13.5), and the map $\mathfrak{T}_{\tau_\alpha}$ is defined over $\mathbb{Q}$. Clearly such $\sigma$'s generate $\mathrm{Aut}(\mathbb{C}/\mathbb{Q})$, and thus the above relation holds for all $\sigma \in \mathrm{Aut}(\mathbb{C}/\mathbb{Q})$. \end{proof}
\noindent A direct calculation shows that \[ \pi_{1,p}\circ\lambda_{\alpha+1}=\lambda_\alpha\circ\pi_{2,p},\qquad \pi_{2,p}\circ\lambda_{\alpha+1}=\lambda_\alpha\circ\pi_{1,p}, \] so that \begin{equation}\label{UandU} U_p=(\lambda_{\alpha})_*\circ U^*_p\circ(\lambda_\alpha)^*. \end{equation} Consider the group \[ \mathbb{G}^{\alpha}_{\diamond,t}(K) := K_0(p^\alpha)\cal{O}_L^\times/K_{\diamond,t}(p^\alpha)\cal{O}_L^\times \] of diamond operators acting on $S(K_{\diamond,t}(p^\alpha))$. There is an inclusion \[ \Gamma_\alpha \hookrightarrow \mathbb{G}^{\alpha}_{\diamond,t}(K), \qquad z \mapsto \begin{pmatrix} \Delta(z)&0\\0& \Delta(z)\\ \end{pmatrix}, \] where $\Delta:1+p\mathbb{Z}_p\hookrightarrow 1+p\cal{O}_p$ denotes the diagonal embedding. Under this inclusion, an element $z\in \Gamma_\alpha$ acts on cohomology as the diamond operator $\langle \Delta(z),1 \rangle$. We define a twisted group-ring-valued pairing at finite level \[ \{\ ,\}_\alpha: \mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^\alpha);O(2)\big) \times \mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^\alpha); O\big) \longrightarrow \Lambda_\alpha \] by \begin{equation}\label{FiniteLevelPairing} \{x_\alpha,y_\alpha\}_\alpha = \sum_{z\in\Gamma_\alpha}\Big\langle\langle \Delta(z),1\rangle^* x_\alpha, (\lambda_\alpha)^*\circ U_p^\alpha y_\alpha\Big\rangle_\alpha [z^{-1}], \end{equation} where the Poincar\'e pairing $\langle\ , \rangle_\alpha$ is defined modulo torsion.
\begin{proposition}\label{basicsPairing} The pairing $\{\ , \}_\alpha$ is $\Lambda_\alpha$-bilinear and all the Hecke operators are self-adjoint with respect to it. In particular, $\{\ , \}_\alpha$ induces a pairing on nearly ordinary parts.
\end{proposition} \begin{proof} The Hecke operator $U_p$ is self-adjoint with respect to $\{\ , \}_\alpha$ because $U^*(\varpi_p)$ and $U_p$ are adjoint with respect the Poincar\'e pairing and equation ($\ref{UandU}$). A similar argument works for all the other Hecke operators $T(\varpi_\mathfrak{q})$ (or $U(\varpi_\mathfrak{q})$ if $\mathfrak{q}\mid Mp$). To check $\Lambda_\alpha$-bilinearity, let $b$ be any element in $\Gamma_\alpha$, then we have $\langle \Delta(b),1\rangle\circ\lambda_\alpha=\lambda_\alpha\circ\langle \Delta(b)^{-1},1\rangle$, which implies \[ \lambda_\alpha^*\circ\langle \Delta(b),1\rangle^*=\langle \Delta(b),1\rangle_*\circ\lambda_\alpha^*. \] Therefore \[\begin{split} \{x_\alpha, \langle \Delta(b),1\rangle^*y_\alpha\}_\alpha &= \sum_{z\in \Gamma_\alpha}\Big\langle\langle \Delta(z),1\rangle^* x_\alpha, (\lambda_\alpha)^*\circ U_p^\alpha\circ\langle \Delta(b),1\rangle^* y_\alpha\Big\rangle_\alpha [z^{-1}]\\ &= \sum_{z\in \Gamma_{\alpha}}\Big\langle\langle \Delta(zb),1\rangle^* x_\alpha, (\lambda_\alpha)^*\circ U_p^\alpha y_\alpha\Big\rangle_\alpha [z^{-1}]\\ &=[b]\{x_\alpha,y_\alpha\}_\alpha. \end{split} \] and similarly \[ \{\langle \Delta(b),1\rangle^* x_\alpha, y_\alpha\}_\alpha = [b]\{x_\alpha,y_\alpha\}_\alpha. \]
\end{proof}
\begin{lemma}\label{lemma: invariant} The finite Galois covering $\mu:S(K_{\diamond,t}(p^{\alpha+1})) \rightarrow S(K_{\diamond,t}(p^\alpha)\cap K_0(p^{\alpha+1}))$ induces an isomorphism \[ \mu^*:\mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^\alpha)\cap K_0(p^{\alpha+1});E_\wp(2)\big)\longrightarrow \mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^{\alpha+1});E_\wp(2)\big)^{\mathfrak{I}_{\diamond,t}^{\alpha+1}(K)}, \] where $\mathfrak{I}_{\diamond,t}^{\alpha+1}(K) = \ker (\mathbb{G}^{\alpha+1}_{\diamond,t}(K) \to \mathbb{G}^{\alpha}_{\diamond,t}(K))$ is the Galois group. \end{lemma} \begin{proof} The claim follows by analyzing the Hochschild-Serre spectral sequence \[ \mathrm{E}_2^{p,q} = \mathrm{H}^p\big(\mathfrak{I}_{\diamond,t}^{\alpha+1}(K),\mathrm{H}_{\acute{\mathrm{e}}\mathrm{t}}^q(K_{\diamond,t}(p^{\alpha+1});E_\wp(2))\big)\implies \mathrm{H}_{\acute{\mathrm{e}}\mathrm{t}}^{p+q}\big(K_{\diamond,t}(p^\alpha)\cap K_0(p^{\alpha+1});E_\wp(2)\big). \] It degenerates at the second page because $\mathrm{E}_2^{p,q}=0$ for all $p>0$ as $\mathfrak{I}_{\diamond,t}^{\alpha+1}(K)$ is a finite group and $\mathrm{H}_{\acute{\mathrm{e}}\mathrm{t}}^q(K_{\diamond,t}(p^{\alpha+1});E_\wp(2))$ is an $E_\wp$-vector space. \end{proof}
\begin{proposition}\label{compatibility} Let $p_{\alpha+1}: \Lambda_{\alpha+1}\to \Lambda_\alpha$ be the homomorphism induced by the natural projection $\Gamma_{\alpha+1}\to \Gamma_{\alpha}.$ Then the diagram \[ \xymatrix{ \mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^{\alpha+1});O(2)\big) \times \mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^{\alpha+1});O\big) \ar[r]^-{\{ , \}_{\alpha+1}} \ar[d]^{(\varpi_{2})_*\times(\varpi_{2})_*} & \Lambda_{\alpha+1} \ar[d]^{p_{\alpha+1}}\\
\mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^\alpha);O(2)\big)
\times
\mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^\alpha);O\big) \ar[r]^-{\{ ,\}_\alpha} & \Lambda_{\alpha}
} \] commutes. \end{proposition} \begin{proof} We prove the proposition through a direct computation. Since the pairing is defined modulo torsion, it suffices to prove the lemma after inverting $p$. We have \begin{equation*} \begin{split} p_{\alpha+1}\big(\{x_{\alpha+1},y_{\alpha+1}\}_{\alpha+1}\big) &= p_{\alpha+1}\Big(\sum_{z\in\Gamma_{\alpha+1}}\Big\langle\langle \Delta(z),1\rangle^* x_{\alpha+1}, (\lambda_{\alpha+1})^*\circ U_p^{\alpha+1} y_{\alpha+1}\rangle\Big\rangle_{\alpha+1} [z^{-1}]\Big) \\ & =\sum_{b\in \Gamma_{\alpha}}\Big\langle \sum_{z\in\Gamma_{\alpha+1},\ z\mapsto b}\langle \Delta(z),1\rangle^*x_{\alpha+1},\ (\lambda_{\alpha+1})^*\circ U_p^{\alpha+1} y_{\alpha+1}\Big\rangle_{\alpha+1 } [b^{-1}]. \end{split} \end{equation*} Note that \[ \sum_{z\in\Gamma_{\alpha+1},\ z\mapsto b} \langle \Delta(z),1\rangle^*x_{\alpha+1} =\sum_{z\in\Gamma_{\alpha+1},\ z\mapsto b} \langle \Delta(z)^{-1},1\rangle_*x_{\alpha+1} \] is invariant under the action of $\mathfrak{I}_{\diamond,t}^{\alpha+1}(K)$, and thus equals to $(\mu)^*\eta_b$ for some $\eta_b \in \mathrm{H}^2_{\acute{\mathrm{e}}\mathrm{t}}\big(K_{\diamond,t}(p^\alpha)\cap K_0(p^{\alpha+1});E_\wp(2)\big)$ by Lemma $\ref{lemma: invariant}$. We compute that \begin{equation}\label{eq: number 1}
\begin{split} p_{\alpha+1}\circ\{x_{\alpha+1},y_{\alpha+1}\}_{\alpha+1} &= \sum_{b\in \Gamma_{\alpha}}\Big\langle (\mu)^*\eta_b,\ (\lambda_{\alpha+1})^*\circ U_p^{\alpha+1} y_{\alpha+1}\Big\rangle_{\alpha+1} [b^{-1}]\\ &= \sum_{b\in \Gamma_{\alpha}}\Big\langle \eta_b,\ (\mu)_*\circ(\lambda_{\alpha+1})^*\circ U_p^{\alpha+1} y_{\alpha+1}\Big\rangle_{\alpha} [b^{-1}]\\ &= \sum_{b\in \Gamma_{\alpha}}\Big\langle \eta_b,\ (\lambda_{\alpha+1})^*\circ U_p^{\alpha}\circ(\varpi_{2})_*\circ(\pi_1)^* y_{\alpha+1}\Big\rangle_{\alpha} [b^{-1}]\\ &= \sum_{b\in \Gamma_{\alpha}}\Big\langle (\pi_{2})_*\eta_b,\ (\lambda_{\alpha+1})^*\circ U_p^{\alpha}\circ(\varpi_{2})_* y_{\alpha+1}\Big\rangle_{\alpha} [b^{-1}]. \end{split}\end{equation} Observing that \[\begin{split}
(\pi_{2})_*\eta_b
&= \frac{1}{\deg(\mu)}\cdot(\pi_{2})_*\circ(\mu)_*\Big(\sum_{z\in\Gamma_{\alpha+1},\ z\mapsto b} \langle \Delta(z)^{-1},1\rangle_*x_{\alpha+1}\Big)\\
&=
\langle\Delta(b)^{-1},1\rangle_*\circ(\varpi_{2})_*x_{\alpha+1}\\
&=
\langle \Delta(b),1\rangle^*\circ(\varpi_{2})_*x_{\alpha+1}, \end{split} \] we obtain the claim \[ p_{\alpha+1}\circ\{x_{\alpha+1},y_{\alpha+1}\}_{\alpha+1} = \{(\varpi_{2})_*x_{\alpha+1},(\varpi_{2})_*y_{\alpha+1}\}_{\alpha}. \] \end{proof}
Consider $\mathrm{V}_\infty^\mathrm{dR}(-2)=\underset{\leftarrow, \varpi_{2}}{\lim}\ \cal{V}_\alpha(-2)\otimes_OB_\mathrm{dR}$, then extending scalars in the pairing $\{,\}_\alpha$ in (\ref{FiniteLevelPairing}), restricting to the $\mathscr{G}_{\mbox{\tiny $\heartsuit$}}$-isotypic subspace in the second argument and taking projective limits gives \[ \{\ , \}_\mathscr{G}\colon \Big(\boldsymbol{\cal{V}}_\mathscr{G}(M)(\theta_\mathbb{Q}^{-1}\cdot\boldsymbol{\eta}_\mathbb{Q})\Big)\widehat{\otimes}_{\mathbb{Z}_p} \widehat{\mathbb{Z}}_p^\mathrm{ur}\ \times\ \mathrm{V}_\infty^\mathrm{dR}(-2)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}] \longrightarrow
B_\mathrm{dR}\llbracket\Gamma\rrbracket\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}. \]
\begin{proposition}\label{prop: Big pairing} Let $\boldsymbol{\Theta}=(\boldsymbol{\eta}_\mathbb{Q}\cdot\eta_\mathbb{Q})_{\lvert\Gamma_{\mathbb{Q}_p}}$, then the pairing \[
\{\ , \}_\mathscr{G}\colon \boldsymbol{\cal{V}}_\mathscr{G}(M)(-1)(\boldsymbol{\Theta}^{-1})\widehat{\otimes}_{\mathbb{Z}_p} \widehat{\mathbb{Z}}_p^\mathrm{ur}\
\times \
\mathrm{V}_\infty^\mathrm{dR}[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]
\longrightarrow
B_\mathrm{dR}\llbracket\Gamma\rrbracket\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}(-\psi_\circ) \] is $\Gamma_{\mathbb{Q}_p}$-equivariant. \end{proposition} \begin{proof}
Let $\sigma\in \Gamma_{\mathbb{Q}_p}$ such that $a\in\mathbb{Q}_p^\times$ corresponds to $\sigma$ via the local Artin map. Lemma \ref{AtkinLehnerGalois} implies
\[
(\lambda_\alpha)^*\circ\sigma^*
=
\langle \Delta(a^{-1}),1\rangle^*\circ\sigma^*\circ (\lambda_\alpha)^*,
\]
which, combined with (\ref{FiniteLevelPairing}) yields
\[\begin{split}
\big\{\sigma^*(x_\alpha), \sigma^*(y_\alpha)\big\}_\alpha
&=
\sum_{z\in \Gamma_\alpha}\Big\langle\sigma^*\circ\langle \Delta(z),1\rangle^* x_\alpha,
\sigma^*\circ\langle \Delta(a^{-1}),1\rangle^*\circ(\lambda_\alpha)^*\circ U_p^\alpha y_\alpha\Big\rangle_\alpha [z^{-1}]\\
&=
\sum_{z\in \Gamma_\alpha}\Big\langle\langle \Delta(za),1\rangle^* x_\alpha,
(\lambda_\alpha)^*\circ U_p^\alpha y_\alpha\Big\rangle_\alpha [z^{-1}]\\
&=
[a]\big\{x_\alpha, y_\alpha\big\}_\alpha
\end{split}\]
where in the second equality we used the Galois equivariance of the Poincar\'e pairing
\[
\langle\ ,\rangle_\alpha\colon
\mathrm{H^2_{\acute{\mathrm{e}}\mathrm{t}}}(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O(2))
\times
\mathrm{H^2_{\acute{\mathrm{e}}\mathrm{t}}}(S(K_{\diamond,t}(p^\alpha))_{\bar{\mathbb{Q}}},O)
\rightarrow
O.
\]
As
\[ \mathscr{G}_{\mbox{\tiny $\heartsuit$}}([\Delta(a),1])=\boldsymbol{\chi}(\Delta(a))\big[\xi_{\Delta(a)}^{-t_L}\big]
=
\psi^{-1}_\circ(a)\cdot\theta_\mathbb{Q}^{-2}(a)\cdot\boldsymbol{\eta}^{2}_\mathbb{Q}(a),
\]
we see that
\[
\big\{\sigma^*(x_\alpha), \sigma^*(y_\alpha)\big\}_\alpha
= \psi^{-1}_\circ(a)\cdot\theta_\mathbb{Q}^{-2}(a)\cdot\boldsymbol{\eta}^{2}_\mathbb{Q}(a)\cdot\{x_\alpha, y_\alpha\}_\alpha.
\]
Therefore, the pairing $\{\ , \}_\mathscr{G}$
\[ \boldsymbol{\cal{V}}_\mathscr{G}(M)(\theta_\mathbb{Q}^{-1}\cdot\boldsymbol{\eta}_\mathbb{Q})\widehat{\otimes}_{\mathbb{Z}_p} \widehat{\mathbb{Z}}_p^\mathrm{ur}
\times
\mathrm{V}_\infty^\mathrm{dR}(-2)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]
\longrightarrow
B_\mathrm{dR}\llbracket\Gamma\rrbracket\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}(\psi^{-1}_\circ\cdot\theta_\mathbb{Q}^{-2}\cdot\boldsymbol{\eta}^{2}_\mathbb{Q}).
\]
is $\Gamma_{\mathbb{Q}_p}$-equivariant and the claim follows by twisting. \end{proof}
\subsection{On Dieudonn\'e modules} Given $\mathsf{f}^*_\circ\in S^\mathrm{ord}_{2,1}(N;\psi_\circ^{-1};\overline{\mathbb{Q}})$ an ordinary elliptic cuspform, one can define a linear map \[ \varphi\colon S^\mathrm{ord}_{2,1}(V_{1,0}(N,p);\psi_\circ^{-1};\overline{\mathbb{Q}})\longrightarrow\overline{\mathbb{Q}},\qquad \mathsf{h}\mapsto\frac{\big\langle \mathsf{h},\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\big\rangle}{\big\langle \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}, \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\big\rangle} \] which satisfies $\varphi(T^*(\ell)\mathsf{h})=\mathsf{a}_p(\ell,\mathsf{f}_\circ)\cdot\varphi(\mathsf{h})$. As in (\cite{DR2}, Section 2.3 $\&$ Equation (118)) we can give the following definition.
\begin{definition}\label{def eta}
Let $\eta_\circ\in \mathrm{H}^1_\mathrm{dR}(X_0(p))^{\mathrm{ord},\mathrm{ur}}[\mathsf{f}^*_\circ]$ denote the unique class that satisfies
\[
\Phi(\eta_\circ)=\alpha_{\mathsf{f}^*_\circ}\cdot\eta_\circ
\]
and for any $\mathsf{h}\in S^\mathrm{ord}_{2,1}(V_{1,0}(N,p);\psi_\circ^{-1};\overline{\mathbb{Q}})$
\[
\big\langle\omega_\mathsf{h},(\lambda_1)^*\eta_\circ\big\rangle_\mathrm{dR}=\varphi(\mathsf{h}).
\] \end{definition}
\begin{remark}
For any $\phi\in S^\mathrm{ord}_{2,1}(V_{1,0}(N,p);\psi_\circ;\overline{\mathbb{Q}})$ we have
\[
\big\langle\omega_\phi,\eta_\circ\big\rangle_\mathrm{dR}=\varphi\big((\lambda_1)_*\phi\big)=\frac{\big\langle (\lambda_1)_*\phi,\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\big\rangle}{\big\langle \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}, \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\big\rangle}.
\] \end{remark}
\noindent The Hecke equivariant twist of the Poincar\'e pairing \begin{equation}\label{heckeequivariantpoincare} \{\ ,\}_\mathbb{Q}\colon \mathrm{H}^1_{\acute{\mathrm{e}}\mathrm{t}}(X_0(p)_{\bar{\mathbb{Q}}},O(1)) \times \mathrm{H}^1_{\acute{\mathrm{e}}\mathrm{t}}(X_0(p)_{\bar{\mathbb{Q}}},O) \longrightarrow O, \qquad \big\{x,y\big\}_\mathbb{Q} = \big\langle x,(\lambda_1)^* y\big\rangle_\mathrm{dR} \end{equation}
induces a $\Gamma_{\mathbb{Q}_p}$-equivariant perfect pairing on $\mathsf{f}_\circ$-isotypic components \[
\{\ ,\}_{\mathsf{f}_\circ}\colon \mathrm{V}_{\mathsf{f}_\circ}(p) \times \mathrm{V}_{\mathsf{f}_\circ}(p)(-1) \longrightarrow O(\psi_\circ). \] Furthermore, by looking at the Galois action, one sees that $\mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)$ and $\mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1)$ are orthogonal with respect to $ \{\ ,\}_{\mathsf{f}_\circ}$. Therefore there is an induced perfect pairing \begin{equation}\label{QPairing}
\{,\}_{\mathsf{f}_\circ}\colon \mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}(p) \times \mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1) \longrightarrow O(\psi_\circ). \end{equation} which we can use to make the identification \[ \mathrm{D}_\mathrm{dR}\big(\mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1)\big)\overset{\sim}{\longrightarrow}\mathrm{Hom}_{E_\wp}\Big(\mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}(p)\big),E_\wp\Big). \]
\begin{definition}\label{etaprime} We denote by \[ \eta_\circ'\in \mathrm{D}_\mathrm{dR}\big(\mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1)\big) \] the element corresponding to the homomorphism \[ \mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0(\mathrm{V}_{\mathsf{f}_\circ}(p))\big)\longrightarrow E_\wp,\qquad \omega_\phi\mapsto\big\langle\omega_\phi,\eta_\circ\big\rangle_\mathrm{dR}. \] It satisfies \[ \big\{\omega,\eta_\circ'\big\}_{\mathsf{f}_\circ}= \big\langle\omega,\eta_\circ\big\rangle_\mathrm{dR}\qquad \forall\ \omega\in \mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}(p)\big). \] \end{definition}
\begin{proposition}\label{prop: huge period map} Let $\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)
=
\mathrm{Fil}^2 \boldsymbol{\cal{V}}_\mathscr{G}(M)(-1) (\boldsymbol{\Theta}^{-1}) \otimes
\mathrm{Gr}^0\mathrm{V}_{\mathrm{f}_\circ}(p)$, then
there exists a homomorphism of $\mathbf{I}_\mathscr{G}$-modules
\[
\Big\langle\ ,\omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\Big\rangle\colon \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\longrightarrow \boldsymbol{\Pi} \otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}
\]
whose specialization at any arithmetic point $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight 2 is
\[
\mathrm{P}\circ\Big\langle\ ,\omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\Big\rangle
=
\Big\langle\ ,(\lambda_\alpha)^*\omega_{\breve{\mathscr{G}}_{\mathsf{P}}}\otimes \eta_\circ\Big\rangle_\mathrm{dR}\colon \mathrm{D}_\mathrm{dR}\big(\cal{U}_{\mathscr{G}_\mathrm{P}}^{\mathsf{f}_\circ}(M)\big) \longrightarrow \mathbb{C}_p.
\]
where $\cal{U}_{\mathscr{G}_\mathrm{P}}^{\mathsf{f}_\circ}(M)=\boldsymbol{\cal{U}}_{\mathscr{G}}^{\mathsf{f}_\circ}(M)\otimes_{\mathbf{I}_\mathscr{G},\mathrm{P}}E_\wp$. \end{proposition}
\begin{proof} By tensoring the $\Gamma_{\mathbb{Q}_p}$-equivariant pairing of Proposition \ref{prop: Big pairing}
with the $\Gamma_{\mathbb{Q}_p}$ equivariant pairing in (\ref{QPairing}), we obtain \[ \boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\widehat{\otimes}_{\mathbb{Z}_p} \widehat{\mathbb{Z}}_p^\mathrm{ur}\ \times\ \mathrm{V}_\infty^\mathrm{dR}[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}] \otimes_O \mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1) \longrightarrow B_\mathrm{dR}\llbracket\Gamma\rrbracket \otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}. \]
Restricting the pairing to $\Gamma_{\mathbb{Q}_p}$-invariants we obtain \begin{equation}\label{hereisthepairing} \big\langle\ , \big\rangle\colon\mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\ \times\ \mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}] \otimes_{\mathbb{Q}_p} \mathrm{D}_\mathrm{dR}\big(\mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1)\big) \longrightarrow \boldsymbol{\Pi}\otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G} \end{equation} because
\[ \Big(B_\mathrm{dR}\llbracket\Gamma\rrbracket\Big)^{\Gamma_{\mathbb{Q}_p}} \otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G}\cong\Big(\varprojlim_{\alpha} \mathbb{Q}_p[\Gamma_\alpha]\Big)\otimes_{\boldsymbol{\Lambda}}\mathbf{I}_\mathscr{G} = \boldsymbol{\Pi}\otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}. \] Let $\omega_{\breve{\mathscr{G}}} := \raisebox{\depth}{\scalebox{1}[-1]{$ \Omega $}}_\mathscr{G}(\breve{\mathscr{G}})\in \mathbf{D}_\mathrm{dR}\big(\mathrm{Gr}^0\mathrm{V}_\infty\big)[\mathscr{G}_{\mbox{\tiny $\heartsuit$}}]$ be the class represented by the compatible collection $(U_p^{-\alpha}\breve{\mathscr{G}}_\alpha)_\alpha$ of cuspforms, and let $\eta_\circ'\in \mathrm{D}_\mathrm{dR}\big(\mathrm{Fil}^1\mathrm{V}_{\mathsf{f}_\circ}(p)(-1)\big)$ be the class of Definition $\ref{etaprime}$. Then evaluating the pairing ($\ref{hereisthepairing}$) at $\omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'$ gives the homomorphism \begin{equation}\label{dR pairing} \big\langle\ ,\omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\big\rangle\colon \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big) \longrightarrow \boldsymbol{\Pi}\otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}. \end{equation} Now we study the specialization of the pairing at arithmetic points $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight $(2t_L,t_L)$ and character $(\chi_\circ\theta_L^{-1}\chi^{-1})$ of level $p^\alpha$. Let \[ \boldsymbol{z}=\sum_i x_i\otimes y_i \in \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big) \] be any element, then by construction \[ \Big\langle\boldsymbol{z},\ \omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\Big\rangle = \sum_i\big\{x_i,\omega_{\breve{\mathscr{G}}}\big\}_\mathscr{G}\cdot\big\{y_i,\eta_\circ'\big\}_{\mathsf{f}_\circ}. \] Firstly we note that \[ \big\{y_i,\eta_\circ'\big\}_{\mathsf{f}_\circ}=\big\langle y_i, \eta_\circ\big\rangle_\mathrm{dR}, \] then we observe that the projection of $\big\{x_i,\omega_{\breve{\mathscr{G}}}\big\}_\mathscr{G}$ to level $\alpha$ is \[\begin{split} \big\{x_{i,\alpha},U_p^{-\alpha}\breve{\mathscr{G}}_\alpha\big\}_\alpha &= \sum_{z\in \Gamma_\alpha} \Big\langle \langle \Delta(z),1\rangle^*x_{i,\alpha}, (\lambda_\alpha)^*\circ U_{p}^\alpha U_p^{-\alpha}\breve{\mathscr{G}}_\alpha\Big\rangle[z^{-1}]\\ &= \sum_{z\in \Gamma_\alpha} \Big\langle x_{i,\alpha}, (\lambda_\alpha)^*\circ\langle \Delta(z),1\rangle^*\breve{\mathscr{G}}_\alpha\Big\rangle[z^{-1}]. \end{split}\] Therefore \[\begin{split} \mathrm{P}\circ\big\{x_i,\omega_{\breve{\mathscr{G}}}\big\}_\mathscr{G} &= \Big\langle x_{i,\alpha}, (\lambda_\alpha)^*\sum_{z}\chi_{\mbox{\tiny $\spadesuit$}}(z^{-1})\langle \Delta(z)^{-1},1\rangle_* \breve{\mathscr{G}}_\alpha\Big\rangle\\ &= \Big\langle x_{i,\alpha},(\lambda_\alpha)^*\breve{\mathscr{G}}_\mathrm{P}\Big\rangle = \Big\langle x_{i,\mathrm{P}},(\lambda_\alpha)^*\breve{\mathscr{G}}_\mathrm{P}\Big\rangle \end{split}\] where the last equality results from the $\Lambda_\alpha$-equivariance of the twisted Poincar\'e pairing. It follows that \[ \mathrm{P}\circ \Big\langle\boldsymbol{z},\ \omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\Big\rangle = \Big\langle\boldsymbol{z}_\mathrm{P},\ (\lambda_\alpha)^*\omega_{\breve{\mathscr{G}}_{\mathsf{P}}}\otimes \eta_\circ \Big\rangle_\mathrm{dR}. \] \end{proof}
\begin{remark}\label{remintegral} Under Conjecture \ref{wishingOhta}, the homomorphism $\big\langle\ ,\omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\big\rangle\colon \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\rightarrow \boldsymbol{\Pi} \otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}$ of Proposition \ref{prop: huge period map} is actually $\mathbf{I}_\mathscr{G}$-valued. Moreover, given any $\boldsymbol{z}\in\mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)$ we have \[
\mathrm{P}_\circ\circ\Big\langle\boldsymbol{z} ,\ \omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\Big\rangle
=\Big\langle\boldsymbol{z}_{\mathrm{P}_\circ} ,\ \omega_{\breve{\mathscr{G}}}\otimes\eta_\circ'\Big\rangle \] for the arithmetic point $\mathrm{P}_\circ\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of parallel weight one. \end{remark}
\section{Motivic $p$-adic $L$-functions}
\subsection{Perrin-Riou's regulator}
Let $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ be an arithmetic point of weight $(\ell t_L,t_L)$ and character $(\chi_\circ\theta_L^{1-\ell}\chi^{-1},\mathbbm{1})$, then \[\boldsymbol{\Theta}(\mathrm{P})=\chi_{\mbox{\tiny $\spadesuit$}}^{-1}\cdot\big(\eta_\mathbb{Q}^{\ell-1}\big)_{\lvert D_p}\]
has negative Hodge--Tate weight if $\ell\ge 2$. In the case of the arithmetic point $\mathrm{P}_\circ$ of weight $(t_L,t_L)$ and character $(\chi_\circ,\mathbbm{1})$, the specialization has Hodge-Tare weight equal to zero:
\[
\boldsymbol{\Theta}(\mathrm{P}_\circ)\equiv 1.
\]
The Galois modules $\boldsymbol{\cal{U}}^{\mathsf{f}_\circ}_\mathscr{G}(M)$ is unramified and recalling Definition \ref{TwistedGradedPiece} we see that \[ \boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)=\boldsymbol{\cal{U}}^{\mathsf{f}_\circ}_\mathscr{G}(M)(\boldsymbol{\Theta}). \] Therefore, if we let $\cal{V}^{\mathsf{f}_\circ}_{\mathsf{g}_\mathrm{P}}(M)=\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\otimes_{\mathbf{I}_\mathscr{G},\mathrm{P}}E_\wp$, there are isomorphisms \begin{equation}\label{step four} \begin{split} &\log_\mathrm{BK}:\mathrm{H}^1\big(\mathbb{Q}_p, \cal{V}^{\mathsf{f}_\circ}_{\mathsf{g}_\mathrm{P}}(M)\big)\overset{\sim}{\longrightarrow}\mathrm{D}_\mathrm{dR}\big(\cal{V}^{\mathsf{f}_\circ}_{\mathsf{g}_\mathrm{P}}(M)\big),\qquad\quad \text{if}\ \mathrm{P}\ \text{has weight}\ \ell\ge 2,\\ &\exp^*_\mathrm{BK}:\mathrm{H}^1\big(\mathbb{Q}_p, \cal{V}^{\mathsf{f}_\circ}_{\mathsf{g}_{\mathrm{P}_\circ}}(M)\big)\overset{\sim}{\longrightarrow}\mathrm{D}_\mathrm{dR}\big(\cal{V}^{\mathsf{f}_\circ}_{\mathsf{g}_{\mathrm{P}_\circ}}(M)\big), \qquad \text{if}\ \mathrm{P}=\mathrm{P}_\circ, \end{split} \end{equation} since $\cal{V}^{\mathsf{f}_\circ}_{\mathsf{g}_\mathrm{P}}(M)$ never contains $\mathbb{Q}_p(1)$ nor the trivial $1$-dimensional representation.
\begin{lemma} Let $\beta:\mathbb{Z}_p^\times\to E_\beta^\times$ be a finite order character of conductor $p^\alpha$, corresponding to a Galois character $\beta:\Gamma_{\mathbb{Q}_p}\to E_\beta^\times$ factoring through $\text{Gal}(\mathbb{Q}_p(\zeta_{p^\alpha})/\mathbb{Q}_p)$. Consider the $\Gamma_{\mathbb{Q}_p}$-representation $E_\beta\big(\beta+j\big)$, then the $E_\beta$-vector space
\[
\mathrm{D}_\mathrm{dR}\big(E_\beta(\beta+j)\big)= E_\beta\cdot b_{\beta,j}.
\] has a canonical basis $b_{\beta,j}$. \end{lemma} \begin{proof}
For any $j\in\mathbb{Z}$, the choice of a compatible sequence of $p$-power roots of unity $\boldsymbol{\zeta}:=\{\zeta_{p^\alpha}\}_{\alpha\ge0}$ determines a basis $\boldsymbol{\zeta}^j$ of the $\Gamma_{\mathbb{Q}_p}$-representation $\mathbb{Q}_p(j)$ and an element $t^{-j}\in\mathrm{B}_\mathrm{dR}$ such that the element $\boldsymbol{\zeta}^j\otimes t^{-j}$ gives a canonical basis of $\mathrm{D}_\mathrm{dR}\big(\mathbb{Q}_p(j))$ independent of $\boldsymbol{\zeta}$. We consider models of the $\Gamma_{\mathbb{Q}_p}$-representations $E_\beta(\beta)$, $E_\beta(-\beta)$ appearing in the Galois modules $E_\beta\otimes_{\mathbb{Q}_p}\mathbb{Q}_p(\zeta_{p^\alpha})$ where $\Gamma_{\mathbb{Q}_p}$ acts only on the second factor by Galois automorphisms. For a character $\alpha:\text{Gal}(\mathbb{Q}_p(\zeta_{p^\alpha})/\mathbb{Q}_p)\to E_\beta^\times$ the element
\[
\theta_{\alpha}=\sum_{\tau\in \text{Gal}(\mathbb{Q}_p(\zeta_{p^\alpha})/\mathbb{Q}_p)}\alpha^{-1}(\tau)\otimes\zeta_{p^\alpha}^\tau\in E_\beta\otimes_{\mathbb{Q}_p}\mathbb{Q}_p(\zeta_{p^\alpha}) \] satisfies $(\theta_{\alpha})^\sigma=\alpha(\sigma)\theta_{\varepsilon}$ for all $\sigma\in \Gamma_{\mathbb{Q}_p}$. Then $E_\beta(\beta)\cong E_\beta\cdot\theta_{\beta}$ and $E_\beta(-\beta)\cong E_\beta\cdot\theta_{\beta^{-1}}$. We choose the model $E_\beta\cdot\theta_\beta\otimes\boldsymbol{\zeta}^j$ of the $\Gamma_{\mathbb{Q}_p}$-representation $E_\beta(\beta+j)$ and we note that \[ b_{\beta,j}:=(\theta_\beta\otimes\boldsymbol{\zeta}^j)\otimes_{E_\beta}(\theta_{\beta^{-1}} \otimes t^{-j})\in E_\beta(\beta+j)\otimes_{\mathbb{Q}_p}\mathrm{B}_\mathrm{dR} \] is invariant under the $\Gamma_{\mathbb{Q}_p}$-action and it is independent of the choice of $\boldsymbol{\zeta}$. Therefore, we deduce that $\mathrm{D}_\mathrm{dR}\big(E_\beta(\beta+j)\big)$ has $b_{\beta,j}$ as canonical $E_\beta$-basis. \end{proof}
\noindent Write $\boldsymbol{\Lambda}_\Gamma$ for $O\llbracket\mathbb{Z}_p^\times\rrbracket$, then by (\cite{KLZ}, Theorem 8.2.3) and (\cite{LZ14}, Theorems 4.15, B.5) there is a $\big(\mathbf{I}_\mathscr{G}\widehat{\otimes} \boldsymbol{\Lambda}_\Gamma\big)$-linear map
\[
\boldsymbol{\cal{L}}: \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\longrightarrow\mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma
\]
such that for all points $\mathrm{P}\in\mathrm{Hom}(\mathbf{I}_\mathscr{G},\overline{\mathbb{Q}}_p)$ and all characters of $\mathbb{Z}^\times_p$ of the form $\eta\cdot\varepsilon_\mathbb{Q}^j$ where $j\in\mathbb{Z}$ and $\eta$ has finite order, we have a commutative diagram \[\xymatrix{ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\ar[d]\ar[r]^{\boldsymbol{\cal{L}}} & \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\ar[d]\\ \mathrm{H}^1\big(\mathbb{Q}_p,\cal{U}_{\mathscr{G}_\mathrm{P}}^{\mathsf{f}_\circ}(M)(-j-\eta)\big)\ar[r] & \mathrm{D}_\mathrm{dR}\big(\cal{U}_{\mathscr{G}_\mathrm{P}}^{\mathsf{f}_\circ}(M)(-j-\eta)\big) }\] where the rightmost vertical map is \[\mathbb{D}(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M))\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\longrightarrow\mathrm{D}_\mathrm{dR}\big(\cal{U}_{\mathscr{G}_\mathrm{P}}^{\mathsf{f}_\circ}(M)(-j-\eta)\big), \qquad \boldsymbol{x}\otimes[u]\mapsto\eta\varepsilon_\mathbb{Q}^j(u)\cdot\boldsymbol{x}_\mathrm{P}\otimes b_{\eta^{-1},-j}, \] and the bottom horizontal map is given by \begin{equation}\label{specialization biglog} \begin{cases} \left(1-\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}\alpha_{\mathsf{f}^*_\circ}^{-1}\cdot \eta(\mathrm{Fr}_p)p^j\right)\left(1-\alpha^{-1}_{1,\mathsf{g}_\mathrm{P}}\alpha^{-1}_{2,\mathsf{g}_\mathrm{P}}\alpha_{\mathsf{f}^*_\circ}\cdot \eta^{-1}(\mathrm{Fr}_p)p^{-j-1}\right)^{-1} & \text{cond}(\eta)=0\\ \\ \left(\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}\alpha_{\mathsf{f}^*_\circ}^{-1}\cdot\eta(p)p^{1+j}\right)^{\text{cond}(\eta)}G(\eta)^{-1} & \text{cond}(\eta)>0 \end{cases} \end{equation} \[ \times\qquad\begin{cases} \frac{(-1)^{-j-1}}{(-j-1)!}\cdot\log_\mathrm{BK} & j<0\\ \\ j!\cdot\exp_\mathrm{BK}^* & j\ge 0. \end{cases} \] As in the proof of (\cite{KLZ}, Theorem 8.2.8), we pull back the map $\boldsymbol{\cal{L}}$ by the automorphism \[ 1\otimes[z]\mapsto\boldsymbol{\Theta}(z)^{-1}\cdot 1\otimes[z] \] of $\mathbf{I}_\mathscr{G}\widehat{\otimes} \boldsymbol{\Lambda}_\Gamma$. By functoriality of the construction of $\boldsymbol{\cal{L}}$ we obtain
\begin{equation}\label{eq: biglog diagram}
\xymatrix{ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\ar[d]\ar@{.>}[r]^{\boldsymbol{\cal{L}}'}&\Big(\mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\Big)\widehat{\otimes}_{\boldsymbol{\Theta}}\boldsymbol{\Lambda}_\Gamma\ar[d]^{\mathrm{id}\otimes\boldsymbol{\Theta}^{-1}}\\ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\ar[r]^{\boldsymbol{\cal{L}}}&\mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma.}\end{equation}
\begin{proposition}\label{prop: big log}
There is an homomorphism
\[
\boldsymbol{\cal{L}}_\mathscr{G}^{\mathsf{f}_\circ}: \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\big)\longrightarrow \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)
\]
satisfying the following properties:
\begin{itemize}
\item[(i)] For all arithmetic points $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight $(\ell t_L, t_L)$, $\ell\ge2$, and character $(\chi_\circ\theta_L^{1-\ell}\chi^{-1},\mathbbm{1})$,
\[
\nu_{\mathrm{P}}\circ\boldsymbol{\cal{L}}^{\mathsf{f}_\circ}_\mathscr{G}=\frac{(-1)^{\ell-2}}{(\ell-2)!}\cdot\Upsilon(\mathrm{P})\cdot\big(\log_\mathrm{BK}\circ\ \mathrm{P}\big)
\]
where $\Upsilon(\mathrm{P})=\left(\alpha_{1,\mathsf{g}_{\mathrm{P}}}\alpha_{2,\mathsf{g}_{\mathrm{P}}}\alpha_{\mathsf{f}^*_\circ}^{-1}p^{2-\ell}\right)^\alpha\cdot G\big(\chi_{\mbox{\tiny $\spadesuit$}}\cdot\theta^{\ell-1}_{\mathbb{Q}\lvert D_p}\big)^{-1}$.
\item[(ii)] For the arithmetic point $\mathrm{P}_\circ\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight one,
\[
\nu_{\mathrm{P}_\circ}\circ\boldsymbol{\cal{L}}^{\mathsf{f}_\circ}_\mathscr{G}= \Upsilon(\mathrm{P}_\circ)\cdot\big(\exp^*_\mathrm{BK}\circ\ \mathrm{P}_\circ\big)
\]
where $\Upsilon(\mathrm{P}_\circ)=\big(1-\alpha_{1,\mathsf{g}_{\mathrm{P}_\circ}}\alpha_{2,\mathsf{g}_{\mathrm{P}_\circ}}\alpha_{\mathsf{f}^*_\circ}^{-1}\big)\big(1-\alpha^{-1}_{1,\mathsf{g}_{\mathrm{P}_\circ}}\alpha^{-1}_{2,\mathsf{g}_{\mathrm{P}_\circ}}\alpha_{\mathsf{f}^*_\circ}\cdot p^{-1}\big)^{-1}$.
\end{itemize}
\end{proposition}
\begin{proof}
First, we note that the map $\boldsymbol{\Theta}\otimes\mathrm{id}:\boldsymbol{\Lambda}_\Gamma\widehat{\otimes}_{\boldsymbol{\Theta}}\boldsymbol{\Lambda}_\Gamma\overset{\sim}{\to} \boldsymbol{\Lambda}_\Gamma$ is an isomorphism, hence $\boldsymbol{\cal{L}}'$ can be seen as a homomorphism
$\boldsymbol{\cal{L}}': \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\longrightarrow \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma$. For any arithmetic point $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$, there is a commutative diagram \[\xymatrix{ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\ar[d]\ar[rr]^{\boldsymbol{\cal{L}}'} && \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\ar[d]\\ \mathrm{H}^1\big(\mathbb{Q}_p,\cal{V}^{\mathsf{f}_{\circ}}_{\mathsf{g}_{\mathrm{P}}}(M)\big)\ar[rr] && \mathrm{D}_\mathrm{dR}\big(\cal{V}^{\mathsf{f}_{\circ}}_{\mathsf{g}_{\mathrm{P}}}(M)\big) }\] obtained by composing ($\ref{eq: biglog diagram}$) with specialization at point $\mathrm{P}$ and character \[\boldsymbol{\Theta}(\mathrm{P})^{-1}=\chi_{\mbox{\tiny $\spadesuit$}}\cdot\big(\varepsilon_\mathbb{Q}^{1-\ell}\cdot\theta_{\mathbb{Q}}^{\ell-1}\big)_{\lvert D_p}. \] Then, using ($\ref{specialization biglog}$), the bottom horizontal map can be computed to be \[ \frac{(-1)^{\ell-2}}{(\ell-2)!}\cdot\left(\alpha_{1,\mathsf{g}_{\mathrm{P}}}\alpha_{2,\mathsf{g}_{\mathrm{P}}}\alpha_{\mathsf{f}^*_\circ}^{-1}p^{2-\ell}\right)^\alpha G\big(\chi_{\mbox{\tiny $\spadesuit$}}\cdot\theta^{\ell-1}_{\mathbb{Q}\lvert D_p}\big)^{-1}\cdot\log_\mathrm{BK}. \] Similarly, when considering the arithmetic point $\mathrm{P}_\circ$, the relevant character is the trivial character $\boldsymbol{\Theta}(\mathrm{P}_\circ)^{-1}\equiv 1$, and the bottom horizontal map can be seen to be \[ \big(1-\alpha_{1,\mathsf{g}_{\mathrm{P}_\circ}}\alpha_{2,\mathsf{g}_{\mathrm{P}_\circ}}\alpha_{\mathsf{f}^*_\circ}^{-1}\big)\big(1-\alpha^{-1}_{1,\mathsf{g}_{\mathrm{P}_\circ}}\alpha^{-1}_{2,\mathsf{g}_{\mathrm{P}_\circ}}\alpha_{\mathsf{f}^*_\circ}\cdot p^{-1}\big)^{-1}\cdot\exp^*_\mathrm{BK}. \] In order to define the claimed homomorphism $\boldsymbol{\cal{L}}_\mathscr{G}^{\mathsf{f}_\circ}$, we note that if we consider \[ \beta:\mathbf{I}_\mathscr{G}\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\longrightarrow \mathbf{I}_\mathscr{G},\qquad 1\otimes[u]\mapsto \langle u\rangle[u], \] then for all $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ the following diagram commutes \[\xymatrix{ \mathbf{I}_\mathscr{G}\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\ar[drr]_{\mathrm{P}\otimes\boldsymbol{\Theta}(\mathrm{P})^{-1}}\ar[rr]^\beta && \mathbf{I}_\mathscr{G}\ar[d]^{\mathrm{P}} \\ && O. } \] Therefore, the composition \[\xymatrix{ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\big)\ar[d]^{\mathrm{H}^1\big(\mathbb{Q}_p,\mathrm{id}\otimes 1\big)}\ar@{.>}[rr]^{\boldsymbol{\cal{L}}_\mathscr{G}^{\mathsf{f}_\circ}}&& \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\\ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_\mathscr{G}(M)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma(-\mathbf{j})\big)\ar[rr]^{\boldsymbol{\cal{L}}'}&& \mathbb{D}\big(\boldsymbol{\cal{U}}_\mathscr{G}^{\mathsf{f}_\circ}(M)\big)\widehat{\otimes}\boldsymbol{\Lambda}_\Gamma\ar[u]_\beta }\] satisfies the claimed properties. \end{proof}
\subsubsection{The motivic $p$-adic $L$-function.}\label{motivic p-adic L-function} The class $\boldsymbol{\kappa}_p^{\mathsf{f}_\circ}(\mathscr{G}) \in \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}}(M)\big)$, presented in Definition \ref{TwistedGradedPiece} and arising from cycles on Shimura threefolds, is the key input to define the motivic $p$-adic $L$-function. \begin{definition}\label{motpadicLfun} The motivic $p$-adic $L$-function is given by \[ \mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ) := \Big\langle \boldsymbol{\cal{L}}_\mathscr{G}^{\mathsf{f}_\circ}\big(\boldsymbol{\kappa}_p^{\mathsf{f}_\circ}(\mathscr{G}) \big),\ \omega_{{\breve{\mathscr{G}}}}\otimes\eta_\circ'\Big\rangle \in \boldsymbol{\Pi}\otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}. \] \end{definition}
\begin{lemma}\label{cruximplicat} Assume Conjecture \ref{wishingOhta}, then the motivic $p$-adic $L$-function belongs to $\mathbf{I}_\mathscr{G}$, and \[ \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}_\circ)\not=0\quad\implies\quad\mathrm{exp}^*_\mathrm{BK}\big(\boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G})(\mathrm{P}_\circ)\big)\not=0. \] \end{lemma} \begin{proof}
This is a direct consequence of Remark \ref{remintegral} and Proposition \ref{prop: big log}. \end{proof}
\section{$p$-adic Gross-Zagier formulas} The main goal of this section is to give a formula for certain values of the syntomic Abel--Jacobi map in terms of $p$-adic modular forms. Given the definition of the motivic $p$-adic $L$-functiom in terms of the pairing introducted in Section $\ref{generalizingOhta}$, we will be interested in the values \[ \mathrm{AJ}_{\mathrm{syn}}\big(\Delta_\alpha^\circ\big)\big((\lambda_\alpha)^*\omega_{\breve{\mathscr{G}}_\mathrm{P}}\otimes\eta_\circ\big)\in\mathbb{C}_p \]
\subsection{$P$-syntomic cohomology} Let $K/\mathbb{Q}_p$ be finite extension, we denote by $K_0$ the maximal unramified subfield of $K$ and by $q$ be the cardinality of the residue field. \begin{definition}
A filtered $(\varphi,N,\Gamma_K)$-module over $K$ is a finite dimensional $\mathbb{Q}_p^\mathrm{ur}$-vector space $D$ endowed with a $\mathbb{Q}_p^\mathrm{ur}$-semilinear bijective Frobenius endomorphism $\varphi$ and a $\mathbb{Q}_p^\mathrm{ur}$-linear monodromy operator $N$ satisfying $N\varphi=p\varphi N$. The absolute Galois group $\Gamma_K$ acts $\mathbb{Q}_p^\mathrm{ur}$-semilinearly on $D$ and there is a decreasing, separated, exhaustive filtration of the $K$-vector space \[D_K:=\big(D\otimes_{\mathbb{Q}_p^\mathrm{ur}}\bar{\mathbb{Q}}_p\big)^{\Gamma_K}\] by $K$-vector subspaces $\mathrm{Fil}^iD_K$. \end{definition}
\noindent One writes $D_\mathrm{st}$ for the $K_0$-vector space $D^{\Gamma_K}$ of $\Gamma_K$-invariant elements. A filtered $(\varphi,N,\Gamma_K)$-module $D$ such that the $\Gamma_K$-action is unramified and $N=0$ is said to be \emph{crystalline}. In this case one can show that \[ D=D_\mathrm{st}\otimes_{K_0}\mathbb{Q}_p^\mathrm{ur}\qquad\text{and}\qquad D_K=D_\mathrm{st}\otimes_{K_0}K. \] \begin{definition}
A crystalline filtered $(\varphi,N,\Gamma_K)$-module over $K$ is said to be \emph{convenient} for a choice of polynomial $P(T)\in 1+TK[T]$ if $P(\Phi)$ and $P(q\Phi)$ are bijective endomorphisms of $D_K$, where $\Phi$ denotes the extension of scalars of the $K_0$-linear operator $\varphi^{[K_0:\mathbb{Q}_p]}$ on $D_\mathrm{st}$. \end{definition}
\noindent For a variety $X_{/K}$, we let $\mathrm{H}^{\bfcdot}_\mathrm{HK}(X_h)$ and $\mathrm{H}^{\bfcdot}_\mathrm{dR}(X_h)$ be the extensions of Hyodo--Kato and de-Rham cohomologies defined by Beilinson \cite{Beilinson} and by \[ \iota_\mathrm{dR}^\mathrm{B}:\mathrm{H}^{\bfcdot}_\mathrm{HK}(X_h)\otimes_{K_0}K\longrightarrow\mathrm{H}^{\bfcdot}_\mathrm{dR}(X_h) \] the comparison morphism relating them (which is an isomorphism if $X$ has a semistable model over $\cal{O}_K$). For the filtered $(\varphi,N,\Gamma_K)$-modules $D^{\bfcdot}(X_h)=\mathbf{D}_\mathrm{pst}\big(\mathrm{H}^{\bfcdot}_\mathrm{et}(X_{\overline{K}},\mathbb{Q}_p)\big)$ it was shown by Beilinson \cite{Beilinson} that \[ D^{\bfcdot}(X_h)_\mathrm{st}=\mathrm{H}^{\bfcdot}_\mathrm{HK}(X_h)\qquad\text{and}\qquad D^{\bfcdot}(X_h)_K=\mathrm{H}^{\bfcdot}_\mathrm{dR}(X_h). \] \subsubsection{Cohomology of filtered $(\varphi, N, \Gamma_K)$-modules.} For a polynomial $P(T)\in 1+TK[T]$ one can define the complex \[ C^{\bfcdot}_{\mathrm{st},P}(D):\qquad D_{\mathrm{st},K}\oplus\mathrm{Fil}^0D_K\longrightarrow D_{\mathrm{st},K}\oplus D_{\mathrm{st},K}\oplus D_K\longrightarrow D_{\mathrm{st},K} \] where the first map is $(u,v)\mapsto(P(\Phi)u, Nu, u-v)$, and the second is $(w,x,y)\mapsto Nw- P(q\Phi)x$. The cohomology of this complex is denoted by \[ \mathrm{H}^{\bfcdot}_{\mathrm{st},P}(D):=\mathrm{H}^{\bfcdot}\big(C^{\bfcdot}_{\mathrm{st},P}(D)\big). \]
\begin{theorem}(\cite{BLZ} Theorem 2.1.2)
There is a $P$-syntomic descent spectral sequence
\[
E_2^{i,j}=\mathrm{H}^i_{\mathrm{st},P}\big(D^j(X_h)(r)\big)\implies \mathrm{H}^{i+j}_{\mathrm{syn},P}(X_h,r)
\]
compatible with cup products. \end{theorem} \subsection{Syntomic Abel--Jacobi map} Let $X_{/K}$ be a smooth $d$-dimensional variety. The commutativity of the following diagram \begin{equation}\label{syn to et}\xymatrix{ & \mathrm{CH}^i(X)\ar[ld]_{\mathrm{cl}_\mathrm{syn}}\ar[rd]^{\mathrm{cl}_\mathrm{et}} &\\ \mathrm{H}^{2i}_\mathrm{syn}(X_{K,h},i)\ar@{.>}[rr]^{\rho_\mathrm{syn}}\ar@{->>}[d] & & \mathrm{H}^{2i}_\mathrm{et}(X_{K},\mathbb{Q}_p(i))\ar@{->>}[d]\\ \mathrm{Gr}^0_{\mathrm{syn}}\ar@{^{(}->}[d]\ar@{.>}[rr]& & \mathrm{Gr}^0_{\mathrm{et}}\ar@{=}[d]\\ \mathrm{H}_{\mathrm{st},1-T}^0(D^{2i}(X_h)(i))\ar@{.>}[rr]^\sim & & \mathrm{H}^{2i}_\mathrm{et}(X_{\overline{K}},\mathbb{Q}_p(i))^{\Gamma_K} }\end{equation} where $\rho_\mathrm{syn}$ is Nekov\'a$\check{r}$--Niziol period morphism, follows from the compatibility of the syntomic descent spectral sequence for syntomic cohomology and the Hochschild--Serre spectral sequence for \'etale cohomology (\cite{BLZ} Theorem 2.1.2). \begin{remark}
The bottom horizontal map of diagram ($\ref{syn to et}$) is an isomorphism by (\cite{BLZ} Theorem 1.1.4), hence the middle horizontal map is injective. \end{remark}
\noindent Therefore, if we let \[ \mathrm{CH}^i(X)_0:=\ker\left(\mathrm{cl}_\mathrm{et}:\mathrm{CH}^i(X)\longrightarrow \mathrm{H}^{2i}_\mathrm{et}(X_{\overline{K}},\mathbb{Q}_p(i))^{\Gamma_K}\right) \] denote the subgroup of null-homologous cycles, then the syntomic and the $p$-adic \'etale Abel--Jacobi maps can be compared \[\xymatrix{ & \mathrm{CH}^i(X)_0\ar[ld]_{\mathrm{AJ}_\mathrm{syn}}\ar[dr]^{\mathrm{AJ}^\mathrm{et}_p} &\\ \mathrm{H}_{\mathrm{st},1-T}^1(D^{2i-1}(X_h)(i))\ar[rr]^{\exp_\mathrm{st}} & & \mathrm{H}^1(K, \mathrm{H}^{2i-1}_\mathrm{et}(X_{\overline{K}},\mathbb{Q}_p(i))) }\] through the generalized Bloch--Kato exponential map. If $V$ is a quotient of $\mathrm{H}^{2i-1}_\mathrm{et}(X_{\overline{K}},\mathbb{Q}_p(i))$ such that $D=\mathbf{D}_\mathrm{pst}(V)$ is a convenient quotient of $D^{2i-1}(X_h)(i)$ with respect to the polynomial $1-T$, then the natural inclusion $D_K\hookrightarrow C^1_{\mathrm{st},1-T}(D)$ induces an isomorphism \[ \frac{D_K}{\mathrm{Fil}^0D_K}\cong\mathrm{H}^1_{\mathrm{st},1-T}(D), \] and one can refine the comparison to \[\xymatrix{ & \mathrm{CH}^i(X)_0\ar[ld]_{\mathrm{AJ}_{\mathrm{syn},D}}\ar[dr]^{\mathrm{AJ}^\mathrm{et}_{p,V}} &\\ D_K/\mathrm{Fil}^0\ar[rr]^{\exp_\mathrm{BK}} & & \mathrm{H}_e^1(K, V). }\] \begin{remark} Recall (\cite{Bloch-Kato} Definition 3.10) that the Bloch--Kato exponential surjects onto $\mathrm{H}_e^1(K, V)$ and its kernel is given by $\mathrm{D}_\mathrm{cris}(V)^{\varphi=1}/\mathrm{H}^0(K,V)$. In particular, when $\mathrm{D}_\mathrm{cris}(V)^{\varphi=1}=0$ we can write \[ \mathrm{AJ}_{\mathrm{syn},D}=\log_\mathrm{BK}\circ\mathrm{AJ}_{p,V}^\mathrm{et}\qquad\text{where}\qquad \log_\mathrm{BK}=\exp_\mathrm{BK}^{-1}. \] \end{remark} \noindent If we let $D^*(1)$ denote the Tate dual of $D$, which is a submodule of $D^{2(d-i)+1}(X_h)(d+1-i)$, then $D_K/\mathrm{Fil}^0=\big(\mathrm{Fil}^0D^*(1)_K\big)^\vee$ and we can write \begin{equation}
\mathrm{AJ}_{\mathrm{syn},D}: \mathrm{CH}^i(X)_0\longrightarrow \big(\mathrm{Fil}^0D^*(1)_K\big)^\vee. \end{equation}
\subsubsection{Evaluation using $P$-syntomic cohomology.} Let $\Delta\in\mathrm{CH}^i(X)_0$ be a null-homologous cycle. For any class \[ \eta\in\mathrm{Fil}^0D^*(1)_K\subset\mathrm{Fil}^{d+1-i}\mathrm{H}^{2(d-i)+1}_\mathrm{dR}(X_h), \] choose a polynomial $P(T)\in 1+TK[T]$ such that $P(1)\not=0$, $P(q^{-1})\not=0$ and $\eta\in\mathrm{H}^0_{\mathrm{st},P}(D^*(1))$ (\cite{BLZ} Proposition 1.4.3). Suppose that $\eta$ is in the kernel of the "knight's move" map \[\mathrm{H}^0_{\mathrm{st},P}\big(D^{2(d-i)+1}(X_h)(d+1-i)\big)\longrightarrow \mathrm{H}^2_{\mathrm{st},P}\big(D^{2(d-i)}(X_h)(d+1-i)\big),\] so that $\eta$ can be lifted to syntomic cohomology. Then, for any lift $\tilde{\eta}\in \mathrm{H}^{2(d-i)+1}_{\mathrm{syn},P}\big(X_h,d+1-i\big)$ we can write \begin{equation}\label{evaluation}
\mathrm{AJ}_{\mathrm{syn},D}(\Delta)(\eta)=\mathrm{tr}_{X,\mathrm{syn},P}\big(\mathrm{cl}_\mathrm{syn}(\Delta)\cup\tilde{\eta}\big) \end{equation}
thanks to the compatibility of the $P$-syntomic descent spectral sequence with cup products \[\resizebox{\displaywidth}{!}{\xymatrix{ \mathrm{Fil}^1\mathrm{H}^{2i}_{\mathrm{syn},1-T}\big(X_h,i\big)\ar[d]& \times & \mathrm{H}^{2(d-i)+1}_{\mathrm{syn},P}\big(X_h,d+1-i\big)\ar[d]\ar[r]& \mathrm{H}^{2d+1}_{\mathrm{syn},P}\big(X_h,d+1\big)/\mathrm{Fil}^2\cong K\ar[d]^\sim \\ \mathrm{H}^1_{\mathrm{st},1-T}\big(D^{2i-1}(X_h)(i)\big)\ar@{->>}[d]& \times & \mathrm{H}^0_{\mathrm{st},P}\big(D^{2(d-i)+1}(X_h)(d+1-i)\big)\ar[r]& \mathrm{H}^1_{\mathrm{st},P}\big(\mathbf{D}_\mathrm{pst}(\mathbb{Q}(1))\big)\cong K\ar@{=}[d]\\ \mathrm{H}^1_{\mathrm{st},1-T}(D) &\times& \mathrm{H}^0_{\mathrm{st},P}(D^*(1))\ar@{^{(}->}[u]\ar[r]& \mathrm{H}^1_{\mathrm{st},P}\big(\mathbf{D}_\mathrm{pst}(\mathbb{Q}(1))\big)\cong K. }}\]
\subsection{Abel--Jacobi map of Hirzebruch--Zagier cycles} For every arithmetic point $\mathrm{P}\in \cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of weight 2 and level $p^\alpha$, the Galois representation $\mathrm{V}_{\mathsf{g}_\mathrm{P}}$ is crystalline as a $\Gamma_{\mathbb{Q}_p(\zeta_{p^\alpha})}$-representation. Throughout this subsection, we will consider all our geometric structures, including the moduli schemes and the cycles, to be defined over $F_\alpha:=\mathbb{Q}_p(\zeta_{p^\alpha})$. Similarly, we will regard all Galois representations, as well as Deudonne\'e functors, to be defined with respect to the absolute Galois group $\Gamma_{F_\alpha}$.
\noindent The specialization at $\mathrm{P}$ of
$\boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)$ is a quotient of $\mathrm{H}_{\acute{\mathrm{e}}\mathrm{t}}^3(Z_\alpha(K)_{\overline{F}},\mathbb{Q}_p(2))$ such that \begin{equation}
D_{\mathsf{g}_\mathrm{P},\mathsf{f}_\circ} := \mathbf{D}_\mathrm{pst}\big(\cal{V}_{\mathscr{G}_\mathrm{P},\mathrm{f}_\circ}(M)\big) \end{equation} is a convenient quotient of $D^3(Z_\alpha(K))(2)$. Furthermore, $D_{\mathsf{g}_\mathrm{P},\mathsf{f}_\circ}\cong (D_{\mathsf{g}_\mathrm{P},\mathsf{f}_\circ})^*(1)$ since the Galois representation $\cal{V}_{\mathscr{G}_\mathrm{P},\mathrm{f}_\circ}(M)$ is Kummer self-dual.
\begin{definition} Let $\omega_\mathrm{P} \in e_\mathrm{n.o.} \mathrm{Fil}^2\mathrm{H}^2_\mathrm{dR}(S(K_{\diamond,t}(p^\alpha))/F_\alpha)$ be the de Rham cohomology class associated with the specialization \[ \breve{\mathsf{g}}_\mathrm{P}\in S^\mathrm{n.o.}_{2t_L,t_L}\big(K_{\diamond,t}(p^\alpha);\chi_\circ\theta_L^{-1}\chi^{-1},\mathbbm{1};O\big) \] of the $\mathbf{K}_\mathscr{G}$-adic cuspform $\breve{\mathscr{G}}$. \end{definition}
\noindent Recall the class $\eta_\circ$ of Definition $\ref{def eta}$, then the tensor product $(\lambda_\alpha)^*\omega_\mathrm{P}\otimes\eta_\circ$ belongs to the convenient $(\varphi, N,\Gamma_{F_\alpha})$-module $(D_{\mathsf{g}_\mathrm{P},\mathsf{f}_\circ})^*(1)_{F_\alpha}\cong (D_{\mathsf{g}_\mathrm{P},\mathsf{f}_\circ})_{F_\alpha}$ and it makes sense to try to evaluate \[ \mathrm{AJ}_{\mathrm{syn}}\big(\Delta_\alpha^\circ\big)\big((\lambda_\alpha)^*\omega_\mathrm{P}\otimes\eta_\circ\big) =\mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big). \] We use the functoriality of the formation of the syntomic Abel--Jacobi map to move the computation on a Hilbert--Blumenthal variety where the theory of overconvergent $p$-adic Hilbert cuspforms with level at $p$ was developed by Kisin and Lai in (\cite{Kisin-Lai}). \begin{lemma}\label{ablemma}
The following equality holds
\[
\lambda_\alpha \circ w_{\mathfrak{p}_2^\alpha}
=
\langle\varpi_{\mathfrak{p}_2}^\alpha,1\rangle \circ w_{\mathfrak{p}_2^\alpha} \circ \lambda_\alpha.
\] \end{lemma} \begin{proof}
By definition
$w_{\mathfrak{p}_2^\alpha}
=
\mathfrak{T}_{\tau_{\mathfrak{p}_2}} \circ \nu_\alpha$
and
$
\lambda_\alpha = (-)^* \circ \mathfrak{T}_{\tau_\alpha}
$, thus by a direct calculation using complex uniformizations one sees that
\[
\lambda_\alpha\circ \mathfrak{T}_{\tau_{\mathfrak{p}_2}}
=
\langle\varpi_{\mathfrak{p}_2}^\alpha,1\rangle\circ \mathfrak{T}_{\tau_{\mathfrak{p}_2}}\circ\lambda_\alpha.
\]
The claim follows because $\nu_\alpha\circ\lambda_\alpha=\lambda_\alpha\circ\nu_\alpha$ as the determinant of $\tau_\alpha$ defining $\lambda_\alpha$ is $Mp^\alpha\in\mathbb{Q}^\times_+$. \end{proof} \noindent The diagonal embedding $\zeta:Y(K'_0(p^\alpha))\to S(K_\diamond(p^\alpha))$ naturally factors \[\xymatrix{ Y(K'_0(p^\alpha))\ar[r]^\zeta\ar[dr]_\zeta& S^*(K^*_\diamond(p^\alpha))\ar@{.>}[d]^\xi\\ &S(K_\diamond(p^\alpha)) }\] through a map to a Hilbert--Blumenthal variety $\zeta:Y(K'_0(p^\alpha))\to S^*(K^*_\diamond(p^\alpha))$, denoted by the same symbol. \begin{lemma}\label{TwistedCycle2} Let $Z^*_\diamond(p^\alpha):=S^*(K^*_\diamond(p^\alpha))\times X_0(p)$ and consider the null-homologous cycle \[ \Xi_\alpha^\circ:=(\mathrm{id}, \varepsilon_{\mathsf{f}^*_\circ})_*( \lambda_\alpha\circ \zeta, \pi_{1,\alpha})_* [Y(K'_0(p^\alpha))]\in\mathrm{CH}^2(Z^*_\diamond(p^\alpha))_0(F_\alpha)\otimes_\mathbb{Z}\mathbb{Z}_p \] then
\[
(\lambda_\alpha, \mathrm{id})_*\Delta^\circ_\alpha= ( w_{\mathfrak{p}_2^\alpha}\circ\xi, \mathrm{id})_* \Xi_\alpha^\circ.
\] \end{lemma} \begin{proof} Using Lemma $\ref{ablemma}$ we compute \[ \begin{split} (\lambda_\alpha, \mathrm{id})_*\Delta^\circ_\alpha
&=
(\lambda_{\alpha}, \mathrm{id})_*(\mathrm{id}, \varepsilon_{\mathsf{f}_\circ})_* (\langle\varpi_{\mathfrak{p}_2}^\alpha,1\rangle \circ w_{\mathfrak{p}_2^\alpha} \circ \zeta,\ \pi_{1,\alpha})_*[Y(K'_0(p^\alpha))] \\
&=
(\mathrm{id}, \varepsilon_{\mathsf{f}^*_\circ})_*(\langle\varpi_{\mathfrak{p}_2}^{-\alpha},1\rangle \circ \lambda_\alpha \circ w_{\mathfrak{p}_2^\alpha}\circ \zeta,\ \pi_{1,\alpha})_*[Y(K'_0(p^\alpha))] \\
&=
(\mathrm{id}, \varepsilon_{\mathsf{f}^*_\circ})_*(\langle\varpi_{\mathfrak{p}_2}^{-\alpha},1\rangle \circ \langle\varpi_{\mathfrak{p}_2}^{\alpha},1\rangle \circ w_{\mathfrak{p}_2^\alpha} \circ \lambda_\alpha \circ \zeta,\ \pi_{1,\alpha})_*[Y(K'_0(p^\alpha))] \\
&=
(\mathrm{id}, \varepsilon_{\mathsf{f}^*_\circ})_*( w_{\mathfrak{p}_2^\alpha} \circ\lambda_\alpha\circ \zeta,\ \pi_{1,\alpha})_* [Y(K'_0(p^\alpha))] \\
&=
( w_{\mathfrak{p}_2^\alpha}\circ\xi,\ \mathrm{id})_* \Xi_\alpha^\circ. \end{split} \] \end{proof}
\noindent Therefore we are left to compute the right-hand side of the following equation \begin{equation}\label{firstreduction} \mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big)= \mathrm{AJ}_{\mathrm{syn}}\big(\Xi^\circ_\alpha\big)\big(\omega^\diamond_\mathrm{P}\otimes\eta_\circ\big) \end{equation} where the de Rham cohomology class \[ \omega^\diamond_\mathrm{P}:=(w_{\frak{p}_2^\alpha}\circ\xi)^*\omega_\mathrm{P}\in\mathrm{Fil}^2\mathrm{H}^2_\mathrm{dR}(S^*(K^*_\diamond(p^\alpha))/F_\alpha) \] is associated with the cuspform \begin{equation}\label{def prop cuspform} \breve{\mathsf{g}}_\mathrm{P}^\diamond:=(\nu_\alpha\circ\xi)^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\in S_{2t_L,t_L}\big(K^*_\diamond(p^\alpha);O\big). \end{equation}
\subsubsection{Back to syntomic cohomology.}
If $R(T), Q(T)\in 1+T\cdot F_\alpha[T]$ are polynomials such that the cohomology classes
$R(p^{-2}\Phi)\omega^\diamond_\mathrm{P}$ and $Q(\Phi)\eta_\circ$ are zero, then \[ \omega^\diamond_\mathrm{P} \in \mathrm{H}^0_{\mathrm{st},R}\big(D^2(S^*(K^*_{\diamond}(p^\alpha)))(2)\big),\qquad \eta_\circ \in \mathrm{H}^0_{\mathrm{st},Q}\big(D^1(X_0(p))\big) \] and it is often possible to lift them to syntomic cohomology. \begin{lemma}
Suppose that $Q(p)\not=0$, then there exists lifts
\[
\tilde{\omega}^\diamond_\mathrm{P} \in \mathrm{H}^2_{\mathrm{syn},R}(S^*(K^*_{\diamond}(p^\alpha)),2),\qquad\tilde{\eta}_\circ\in\mathrm{H}^1_{\mathrm{syn},Q}(X_0(p),0)
\]
of $\omega^\diamond_\mathrm{P}$ and $\eta_\circ$ to syntomic cohomology. \end{lemma} \begin{proof} Let $S^*(K^*_{\diamond}(p^\alpha))^c$ denote the minimal resolution of the Baily-Borel compactification of $S^*(K^*_{\diamond}(p^\alpha))$. As the class $\omega^\diamond_\mathrm{P}$ extends to the smooth compactification, it suffices to show $\omega^\diamond_\mathrm{P}$ can be lifted to the syntomic cohomology of $S^*(K^*_{\diamond}(p^\alpha))^c$. The algebraic surface $S^*(K^*_{\diamond}(p^\alpha))^c$ is simply connected, hence $\mathrm{H}^2_{\mathrm{st},R}\big(D^1(S^*(K^*_{\diamond}(p^\alpha))^c)(2)\big)=0$ and the descent spectral sequence (\cite{BLZ} Theorem 2.1.2) produces a surjection \[ \mathrm{H}^2_{\mathrm{syn},R}(S^*(K^*_{\diamond}(p^\alpha))^c,2) \twoheadrightarrow \mathrm{H}^0_{\mathrm{st},R}\big(D^2(S^*(K^*_{\diamond}(p^\alpha))^c)(2)\big) \] which proves the first claim. In the modular curve case, we compute that \[ \mathrm{H}^2_{\mathrm{st},Q}\big(D^0(X_0(p))\big)=\mathrm{H}^2_{\mathrm{st},Q}\big(\mathbf{D}_{\mathrm{pst}}(\mathbb{Q}_p)\big)=F_\alpha/Q(p)F_\alpha \] is zero as long as $Q(p)\not=0$. Hence, there is a surjection $\mathrm{H}^1_{\mathrm{syn},Q}(X_0(p),0)\twoheadrightarrow \mathrm{H}^0_{\mathrm{st},Q}\big(D^1(X_0(p))\big)$ whenever $Q(p)\not=0$. \end{proof}
\noindent It follows from equations (\ref{evaluation}) and (\ref{firstreduction}) that if $(R\star Q)(1)\not=0$ and $(R\star Q)(p^{-1})\not=0$ we can evaluate the syntomic Abel--Jacobi map as \[ \mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big)= \mathrm{tr}_{Z^*_\diamond(p^\alpha),R\star Q}\Big(\mathrm{cl}_\mathrm{syn}\big(\Xi_\alpha^\circ\big)\cup\big(\tilde{\omega}^\diamond_\mathrm{P}\otimes\tilde{\eta}_\circ\big)\Big). \] Moreover, the projection formula (\cite{BLZ}, Theorem 2.5.3) computes the syntomic trace for $Z_\diamond(p^\alpha)$ as a syntomic trace for the curve $Y_\alpha=Y(K'_\diamond(p^\alpha))$ \begin{equation}\label{AJ101} \mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big)=\Big\langle (\zeta\circ \lambda_\alpha)^*\tilde{\omega}^\diamond_\mathrm{P},\ (\pi_{1,\alpha})^*\tilde{\eta}_\circ\Big\rangle_{Y_\alpha,R\star Q}
\end{equation} where we used the equality $(\varepsilon_{\mathsf{f}_\circ})^*\eta_\circ=\eta_\circ$.
\subsubsection{Explicit cup product formulas.}\label{cupproduct} To continue the computation of syntomic regulators it is convenient to make the choice of lifts more explicit. According to (\cite{BLZ} Section 2.4) any lift $\tilde{\omega}^\diamond_{\mathrm{P}}\in \mathrm{H}^2_{\mathrm{syn},R}(S^*(K^*_\diamond(p^\alpha)),2)$ of $\omega_\mathrm{P}^\diamond$ can be described by a tuple $[u,v;w,x,y;z]$ where \begin{center}\begin{tabular}{lllll} $u\in\mathbb{R}\Gamma^{B,2}_\mathrm{HK}(S^*(K^*_\diamond(p^\alpha)))$, &&$v\in \mathrm{Fil}^2\hspace{1mm}\mathbb{R}\Gamma^{2}_\mathrm{dR}(S^*(K^*_\diamond(p^\alpha)))$, &&$z\in\mathbb{R}\Gamma^{B,0}_\mathrm{HK}(S^*(K^*_\diamond(p^\alpha)))$,\\ \\ $w,x\in \mathbb{R}\Gamma^{B,1}_\mathrm{HK}(S^*(K^*_\diamond(p^\alpha)))$,&&$y\in \mathbb{R}\Gamma^{1}_\mathrm{dR}(S^*(K^*_\diamond(p^\alpha)))$,&&\\ \end{tabular}\end{center} satisfy the relations \begin{center}
\begin{tabular}{lllll} $du=0$,& &$dv=0$,& &$dw=R(p^{-2}\Phi)u$,\\ $dx=Nu$, &&$dy=\iota_\mathrm{dR}^B(u)-v$, &&$dz=Nw-R(p^{-1}\Phi)x$. \end{tabular}
\end{center} We choose a lift $\tilde{\omega}^\diamond_{\mathrm{P}}$ such that $\iota^B_\mathrm{dR}(u)=v=\omega^\diamond_\mathrm{P}$ so that we can assume $y=0$. Then, we obtain \[ (\zeta\circ \lambda_\alpha)^*\tilde{\omega}^\diamond_\mathrm{P}=\big[0,0;(\zeta\circ \lambda_\alpha)^*w,(\zeta\circ \lambda_\alpha)^*x,0;(\zeta\circ \lambda_\alpha)^*z\big] \] for dimension reasons. Similarly, any lift $\tilde{\eta}_\circ\in \mathrm{H}^1_{\mathrm{syn},Q}(X_0(p),0)$ of $\eta_\circ$ can be described by a tuple $[u',v';w',x',y';0]$ where \begin{center}\begin{tabular}{lll} $u'\in\mathbb{R}\Gamma^{B,1}_\mathrm{HK}(X_0(p))$, &&$v'\in \mathbb{R}\Gamma^{1}_\mathrm{dR}(X_0(p))$,\\ \\ $w',x'\in \mathbb{R}\Gamma^{B,0}_\mathrm{HK}(X_0(p))$,&&$y'\in \mathbb{R}\Gamma^{0}_\mathrm{dR}(X_0(p))$ \end{tabular}\end{center} satisfy the relations \begin{center}
\begin{tabular}{lllll} $du'=0$,& &$dv'=0$,& &$dw'=Q(\Phi)u'$,\\ $dx'=Nu'$, &&$dy'=\iota_\mathrm{dR}^B(u')-v'$. && \end{tabular}
\end{center} As one might expect, another tuple $[u'',v'';w'',x'',y'';z'']$ represents the cup product \[ (\zeta\circ \lambda_\alpha)^*\tilde{\omega}^\diamond_\mathrm{P}\cup (\pi_{1,\alpha})^*\tilde{\eta}_\circ\in \mathrm{H}^3_{\mathrm{syn},R\star Q}(Y_\alpha,2)\] which can be described explicitly (\cite{BLZ}, Proposition 2.4.1). For our computation, the relevant entries are $w''$ and $y''$ (see \cite{BLZ}, Equation (2)). Given polynomials $a(T_1,T_2), b(T_1,T_2)$ with
\[
(R\star Q)(T_1T_2)=a(T_1,T_2)R(T_1)+b(T_1,T_2)Q(T_2),
\] we then compute that
\[ w''=a(p^{-2}\Phi,\Phi)((\zeta\circ \lambda_\alpha)^*w\cup (\pi_{1,\alpha})^*u')\qquad\&\qquad y''=0. \]
Therefore, combining ($\ref{AJ101}$) with the definition of the syntomic trace map (\cite{BLZ}, Definition 3.1.2) we obtain \begin{equation}\label{AJformula1}
\begin{split}
\mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big)&= \mathrm{tr}_{Y_\alpha,R\star Q}\Big( (\zeta\circ \lambda_\alpha)^*\tilde{\omega}^\diamond_\mathrm{P}\cup (\pi_{1,\alpha})^*\tilde{\eta}_\circ\Big)\\
&=-\iota_\mathrm{dR}^B\big((R\star Q)(\Phi)^{-1}w''\big)
\\
&=-\frac{a(p^{-2}\Phi,\alpha_{\mathsf{f}_\circ^*})}{(R\star Q)(p^{-1})} \cdot \big[( \lambda_\alpha)^*\zeta^*[\iota_\mathrm{dR}^B(w)]\cup_\mathrm{dR} (\pi_{1,\alpha})^*\eta_\circ\big]
\end{split} \end{equation} where the last equality follows the facts that $\eta_\circ$ is a eigenvector for $\Phi$ of eigenvalue $\alpha_{\mathsf{f}_\circ^*}$ and the Frobenius endomorphism of $\mathbf{D}_\mathrm{pst}(\mathbb{Q}_p(1))_{\mathrm{st},F_\alpha}$ is multiplication by $p^{-1}$.
\subsection{Relation to $p$-adic modular forms}\label{section AJ p-adic}
The modular curve $X_0(p)$ admits a proper regular model $\mathscr{X}_0(p)_{/\mathbb{Z}_p}$ whose special fiber is the union of two curves, each isomorphic to the special fiber of $X(V_{1}(N))$. One writes $\{\mathscr{W}_\infty, \mathscr{W}_0\}$ for the standard admissible covering of $X_0(p)=\mathscr{X}_0(p)^\mathrm{an}$ by wide open neighborhoods obtained as the inverse image under the specialization map of the two distinguished curves in the special fiber. The two opens are interchanged by the involution $\lambda_1:X_0(p)\to X_0(p)$ defined over $\mathbb{Q}_p(\zeta_p)$. The de Rham cohomology group $\mathrm{H}^1_\mathrm{dR}(X_0(p)/\mathbb{Q}_p)$ is endowed with an action of a Frobenius map $\Phi$ commuting with the $U_p$ operator and the ordinary unit root subspace \[ \mathrm{H}^1_\mathrm{dR}(X_0(p)/\mathbb{Q}_p)^{\mathrm{ord},\mathrm{ur}}\subseteq e_\mathrm{ord}\mathrm{H}^1_\mathrm{dR}(X_0(p)/\mathbb{Q}_p) \] is spanned by the eigenvectors of $\Phi$ whose eigenvalue is a $p$-adic unit. As in (\cite{DR2}, Lemma 4.2) the natural map induced by restriction \[ \mathrm{res}_{\mathscr{W}_\infty}:\mathrm{H}^1_\mathrm{dR}(X_0(p)/\mathbb{Q}_p)^{\mathrm{ord},\mathrm{ur}}\overset{0}{\longrightarrow} \mathrm{H}^1_\mathrm{rig}(\mathscr{W}_\infty) \] is trivial, while \[ \mathrm{res}_{\mathscr{W}_0}:\mathrm{H}^1_\mathrm{dR}(X_0(p)/\mathbb{Q}_p)^{\mathrm{ord},\mathrm{ur}}[\phi] \overset{\sim}{\longrightarrow} \mathrm{H}^1_\mathrm{rig}(\mathscr{W}_0)[\phi] \] is an isomorphism for any eigenform $\phi\in S_{2,1}(V_{1,0}(N,p);\overline{\mathbb{Q}})$.
\begin{lemma} For any $\omega\in \mathrm{H}^1_\mathrm{dR}(X(K'_0(p^\alpha))/\mathbb{Q}_p)$ the de Rham pairing \[ \big\langle (\lambda_\alpha)^*\omega,(\pi_{1,\alpha})^*\eta_\circ\big\rangle_{\mathrm{dR}}=\big\langle e_\mathrm{ord}\omega,(\pi_{2,\alpha})^*(\lambda_1)^*\eta_\circ\big\rangle_\mathrm{dR} \]
depends only on the $p$-adic cuspform associated to $e_{\mathrm{ord}}\omega$. \end{lemma} \begin{proof} Since the class $\eta_\circ$ ordinary and $e_\mathrm{ord}\circ(\pi_{1,\alpha})^*=(\pi_{1,\alpha})^*\circ e_\mathrm{ord}$ we can compute
\[\begin{split}
\big\langle (\lambda_\alpha)^*\omega,(\pi_{1,\alpha})^*\eta_\circ\big\rangle_\mathrm{dR}
&=
\big\langle e^*_\mathrm{ord}(\lambda_\alpha)^*\omega,(\pi_{1,\alpha})^*\eta_\circ\big\rangle_\mathrm{dR}\\
&=
\big\langle (\lambda_\alpha)^*e_\mathrm{ord}\omega,(\pi_{1,\alpha})^*\eta_\circ\big\rangle_\mathrm{dR}\\ \text{as}\quad\lambda_\alpha^{-1}=\lambda_\alpha\circ\langle-1,1\rangle\qquad &=
\big\langle e_\mathrm{ord}\omega,(\pi_{1,\alpha}\circ\lambda_\alpha)^*\eta_\circ\big\rangle_\mathrm{dR}\\
&=
\big\langle e_\mathrm{ord}\omega,(\lambda_1\circ\pi_{2,\alpha})^*\eta_\circ\big\rangle_\mathrm{dR}.\\
\end{split}\]
The class $\eta_\circ$ is supported on the wide open $\mathscr{W}_0$. Hence, from the explicit description of the Poincar\'e pairing in (\cite{DR2}, Equation (109)) and the fact that the involution $\lambda_1:X_0(p)\to X_0(p)$ interchanges $\mathscr{W}_0$ with $\mathscr{W}_\infty$ we see that the pairing depends only on $\mathrm{res}_{\mathscr{W}_\infty(p^\alpha)}\big(e_{\mathrm{ord}}\omega\big)$. \end{proof}
\noindent Therefore, we are left to describe $e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)]$ in terms of $p$-adic cuspforms.
\begin{definition} Let $S^*_\diamond(p^\alpha)$ denote the $\mathbb{Q}_p$-scheme $S^{*}(K_\diamond^*(p^\alpha))$ with its minimal compactification $\overline{S}_{\diamond}^*(p^\alpha)^{\mbox{\tiny $\mathrm{min}$}}$. For a choice of toroidal compactification $\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}$ we write $\mbox{\small $D$}$ for the boundary divisor $\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}\setminus S^*_\diamond(p^\alpha)$ and we will use the same symbols to denote the associated rigid analytic spaces. \end{definition}
\noindent Following (\cite{Kisin-Lai}, Section 3.2.2) we let \begin{equation} j_{\mbox{\tiny $\mathrm{tor}$}}: \overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}\hookrightarrow \overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}},\qquad j_{\mbox{\tiny $\mathrm{min}$}}: \overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{min}$}}_{\mbox{\tiny $\mathrm{ord}$}}\hookrightarrow \overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{min}$}} \end{equation} denote the open immersions of the rigid analytic subspaces parametrizing ordinary abelian surfaces and the unramified cusps. By (\cite{Kisin-Lai}, Equation 3.2.4) the pullback of a fundamental system of strict neighborhoods of the ordinary loci at level $p^0$ provides a fundamental system of strict neighborhoods at level $p^\alpha$. In particular, $\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{min}$}}_{\mbox{\tiny $\mathrm{ord}$}}$ has a fundamental system of strict neighborhoods consisting of affinoid subdomains.
\noindent The class $e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)]$ depends only on the image of $\iota_\mathrm{dR}^B(w)$ in the rigid complex of $\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}$ because the image of the wide open $\mathscr{W}_\infty(p^\alpha)\subseteq Y_0(p^\alpha)$ under the morphism $\zeta: Y_0(p^\alpha)\to S_\diamond^*(p^\alpha)$ is contained in $\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}$. Furthermore, as in characteristic zero there exists a Hecke equivariant projection from the de Rham complex to the dual BGG complex which is a quasi-isomorphism of filtered complexes (\cite{BGG}, Sections 5.3-5.4), we are reduced to describing the projection of $\iota_\mathrm{dR}^B(w)$ using overconvergent cuspforms for the group $G^*$ (defined as in \cite{Hilbert}, Section 3.3). Concretely, the relevant complex is given by
\[\xymatrix{
S^\dagger_{0,0}(K^*_\diamond(p^\alpha)) \ar[rr]^-{(d_1,d_2)}&&
S^\dagger_{(2,0),(1,0)}(K^*_\diamond(p^\alpha))\oplus S^\dagger_{(0,2),(0,1)}(K^*_\diamond(p^\alpha)) \ar[rr]^-{-d_2\oplus d_1}&&
S^\dagger_{2t_L,t_L}(K^*_\diamond(p^\alpha)),
}\] it computes the rigid cohomology groups (see also \cite{Hilbert}, Theorem 3.5)
\[
\mathrm{H}_{\mathrm{rig},c}^{\bfcdot}\Big(\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}\Big)
:=\mathbb{H}^{\bfcdot}\Big(\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}, (j_\mathrm{tor})^\dagger\ \Omega^{\star}_{\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}}(-\mbox{\small $D$})\Big),
\] and the projection of $\omega_\mathrm{P}^\diamond$ in $S^\dagger_{2t_L,t_L}(K^*_\diamond(p^\alpha))$ is the cuspform $\breve{\mathsf{g}}_\mathrm{P}^\diamond$ defined in \eqref{def prop cuspform}.
\begin{lemma}\label{lemma: representative} The image of $\iota_\mathrm{dR}^B(w)$ in $S^\dagger_{(2,0),(1,0)}(K^*_\diamond(p^\alpha))\oplus S^\dagger_{(0,2),(0,1)}(K^*_\diamond(p^\alpha))$ is given by a pair $(H_1,H_2)$ of overconvergent cuspforms satisfying \begin{equation}\label{star relation} d_1H_2-d_2H_1=R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond. \end{equation} \end{lemma} \begin{proof} Recall that the operator $R(p^{-2}\Phi)$ was required to annihilate the cohomology class $\omega_\mathrm{P}^\diamond$. As the action of $p^{-2}\Phi$ on de Rham cohomology corresponds to the action of the $V(p)$--operator on overconvergent cuspforms, we deduce the triviality of the class $R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond$ in $\mathrm{H}_{\mathrm{rig},c}^{2}\big(\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}\big)$. \end{proof}
\subsubsection{Choice of the polynomial $R$.} Recall that the cuspform $\breve{\mathsf{g}}_\mathrm{P}\in S_{2t_L,t_L}(K_{\diamond,t}(p^\alpha);O)$ is an eigenform for the Hecke operators $U_{\mathfrak{p}_1},U_{\mathfrak{p}_2}^*$ for the two primes above $p$ with eigenvalues \[ U_{\mathfrak{p}_1}\breve{\mathsf{g}}_\mathrm{P}=\alpha_{1,\mathsf{g}_\mathrm{P}}\cdot\breve{\mathsf{g}}_\mathrm{P}\qquad \text{and}\qquad U_{\mathfrak{p}_2}^*\breve{\mathsf{g}}_\mathrm{P}= \overline{\alpha}_{2,\mathsf{g}_\mathrm{P}}\cdot\breve{\mathsf{g}}_\mathrm{P}=\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}p\cdot \breve{\mathsf{g}}_\mathrm{P}. \] Thus, the cuspform $(w_{\frak{p}_2^\alpha})^*\breve{\mathsf{g}}_\mathrm{P}\in S_{2t_L,t_L}(K_\diamond(p^\alpha);O)$ is an eigenform for the $U_p$-operator and equation (\ref{U_pAtkinLehner}) allows us to compute the Hecke eigenvalue by \[\begin{split} U_p \cdot (w_{\frak{p}_2^\alpha})^*\breve{\mathsf{g}}_\mathrm{P} &= (w_{\mathfrak{p}_2^\alpha})^* U_{\mathfrak{p}_1}U_{\mathfrak{p}_2}^* \langle \varpi_{\mathfrak{p}_2},1 \rangle \breve{\mathsf{g}}_\mathrm{P}\\ &= \Big(\chi_\circ\theta_L^{-1}\chi^{-1}(\varpi_{\mathfrak{p}_2})\cdot\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}p\Big) \cdot(w_{\mathfrak{p}_2^\alpha})^* \breve{\mathsf{g}}_\mathrm{P}\\ &= \chi_\circ\theta_L^{-1}\chi^{-1}(\varpi_{\mathfrak{p}_2})\cdot\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}p\cdot (w_{\frak{p}_2^\alpha})^*\breve{\mathsf{g}}_\mathrm{P}. \end{split}\] We suppose from now on that $\mathfrak{p}_1$ is narrowly principal in $\cal{O}_L$, $\frak{p}_1=(p_1)\cal{O}_L$, for $p_1\in\cal{O}_L$ a totally positive generator and we set $p_2 := p/p_1 \in \cal{O}_L$. Then, there are equalities of Hecke operators
\[
U(p) = \langle 1,p^{-1}\varpi_{p}\rangle U_{p},\qquad
U(p_i) = \langle 1,p_i^{-1}\varpi_{\mathfrak{p}_i}\rangle U_{\mathfrak{p}_i}\qquad \text{for}\ i=1,2
\]
Moreover, $U(p_1), U(p_2)$ commute with $(\nu_\alpha)^*$ as one can see arguing as in Lemma \ref{U_p-nu_alpha-commute}.
\begin{lemma}
The cuspform $\breve{\mathsf{g}}^\diamond_\mathrm{P}$ is an eigenform for the Hecke operators $U(p_1)$ and $U(p_2)$:
\[
U(p_1)\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}
=
\alpha^\diamond_1 \cdot \breve{\mathsf{g}}^\diamond_\mathrm{P},\qquad
U(p_2)\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}
=
\alpha^\diamond_2 \cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}
\]
where
\[ \alpha^\diamond_1:=\chi_\circ\theta_L^{-1}\chi^{-1}((p_1)_{\mathfrak{p}_2}^{-1})\cdot\alpha_{1,\mathsf{g}_\mathrm{P}}
\qquad \text{and}\qquad \alpha_2^\diamond:=\chi_\circ\theta_L^{-1}\chi^{-1}((p_2)_{\mathfrak{p}_2}^{-1}\varpi_{\frak{p}_2})\cdot\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}p.
\]
\end{lemma}
\begin{proof}
By definition the matrices $\mbox{\tiny $\begin{pmatrix}p_i&0\\0&1\end{pmatrix}$}$ for $i=1,2$ belong to
$G(\mathbb{Q})^+G^*(\mathbb{A}_{f})$, hence the operators $U(p_1)$ and $U(p_2)$ commute with the natural map $\xi:S^*(K_\diamond^*(p^\alpha))\rightarrow S(K_\diamond(p^\alpha))$ (\cite{LLZ}, Definition 2.2.4). First, we observe that
\[\begin{split}
U(p_1)\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}
&=
\xi^*\circ U(p_1)\circ(w_{\frak{p}_2^\alpha})^*\breve{\mathsf{g}}_\mathrm{P}\\
&=
\xi^*\circ(\nu_\alpha)^*\circ U(p_1)\circ(\mathfrak{T}_{\tau_{\mathfrak{p}_2}})^*\breve{\mathsf{g}}_\mathrm{P}.
\end{split}\]
As
\[\begin{split}
U(p_1)\circ(\mathfrak{T}_{\tau_{\mathfrak{p}_2}})^*
&=
U_{\mathfrak{p}_1}\circ \langle 1,p_1^{-1}\varpi_{\mathfrak{p}_1}\rangle\circ (\mathfrak{T}_{\tau_{\mathfrak{p}_2}})^*\\
(\text{Equation}\ (\ref{AL-diamonds}))\qquad&=
U_{\mathfrak{p}_1}\circ (\mathfrak{T}_{\tau_{\mathfrak{p}_2}})^*\circ
\langle (p_1)_{\mathfrak{p}_2}^{-1},p_1^{-1}\varpi_{\mathfrak{p}_1}\cdot(p_1)_{\mathfrak{p}_2}^{2}\rangle\\
&=
(\mathfrak{T}_{\tau_{\mathfrak{p}_2}})^*\circ U_{\mathfrak{p}_1}\circ \langle (p_1)_{\mathfrak{p}_2}^{-1},p_1^{-1}\varpi_{\mathfrak{p}_1}\cdot(p_1)_{\mathfrak{p}_2}^2\rangle,
\end{split}\]
we obtain
\[\begin{split}
U(p_1)\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}
&=
\xi^*\circ(\nu_\alpha)^*\circ(\mathfrak{T}_{\tau_{\mathfrak{p}_2}})^*\circ U_{\mathfrak{p}_1}\circ \langle (p_1)_{\mathfrak{p}_2}^{-1},\varpi_{\mathfrak{p}_1}\cdot(p_1)_{\mathfrak{p}_2}^{-2}\rangle \breve{\mathsf{g}}_\mathrm{P}\\
&=
\chi_\circ\theta_L^{-1}\chi^{-1}((p_1)_{\mathfrak{p}_2}^{-1})\cdot\alpha_{1,\mathsf{g}_\mathrm{P}}\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}.
\end{split}\]
Similarly one computes
\[
U(p_2)\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}
= \chi_\circ\theta_L^{-1}\chi^{-1}((p_2)_{\mathfrak{p}_2}^{-1}\varpi_{\frak{p}_2})\cdot\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}p\cdot \breve{\mathsf{g}}^\diamond_\mathrm{P}.
\]
\end{proof}
\begin{remark}
Note that
\begin{equation}
\alpha_1^\diamond\alpha_2^\diamond=\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}p.
\end{equation}
\end{remark}
\begin{definition}
We set
\[
R(T):=(1-\alpha^\diamond_1\alpha^\diamond_2T).
\] \end{definition}
\begin{lemma}\label{lemma: R is ok}
If $R(T)=(1-\alpha^\diamond_1\alpha^\diamond_2T)$, then in $\mathrm{H}_{\mathrm{rig},c}^{2}\big(\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}\big)$ we have
\[
[R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond]=0.
\] \end{lemma} \begin{proof} Consider the operators
\[
V(p_1)
:=
\langle 1,p_1\varpi_{\mathfrak{p}_1}^{-1}\rangle V_{\mathfrak{p}_1}
\qquad \mathrm{and} \qquad
V(p_2)
:=
\langle 1,p_1^{-1}\varpi_{\mathfrak{p}_1}\rangle V_{\mathfrak{p}_2},
\]
acting on automorphic forms for $G^*$. They satisfy $V(p)=V(p_1)V(p_2)$ and $U(p_i)V(p_i) = 1$ for $i=1,2$. The polynomials \[ R_i(T_i)=(1-\alpha^\diamond_iT_i)\quad \text{for}\quad i=1,2 \] can be used to write \[ R(T_1T_2)=R_1(T_1)R_2(T_2)+\alpha^\diamond_2T_2R_1(T_1)+ \alpha^\diamond_1T_1R_2(T_2). \] Therefore, the cuspform $R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond$ can be expressed as a sum of depleted cuspforms \[ R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond=(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\cal{P}]$}}+\alpha^\diamond_2V(p_2)(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\mathfrak{p}_1]$}}+\alpha^\diamond_1V(p_1)(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\mathfrak{p}_2]$}}. \] We deduce that $U(p)\cdot R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond=0$. This implies the claim because $U(p)$ acts invertibly on the finite dimensional cohomology group $\mathrm{H}_{\mathrm{rig},c}^{2}\big(\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}\big)$ having a right inverse (the $V(p)$--operator). \end{proof}
\subsection{The formula}
We choose the polynomial $Q(T)=(1-\alpha_{\mathsf{f}_\circ^*}^{-1}T)$ to annihilate the class $\eta_\circ$ because by definition $\Phi(\eta_\circ)=\alpha_{\mathsf{f}_\circ^*}\cdot\eta_\circ$. Clearly $Q(p)\not=0$, and $R\star Q(1)\not=0$, $R\star Q(p^{-1})\not=0$ because the Weil conjectures imply that the roots of $R(T)$ have complex absolute values $p^{-1}$. Observe that
\begin{equation}
(R\star Q)(T)= R(\alpha_{\mathsf{f}_\circ^*}^{-1}T) =(1-\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1} \alpha_{\mathsf{f}_\circ^*}^{-1}p T).
\end{equation}
and if we write
\[
(R\star Q)(T_1T_2) =
a(T_1,T_2)R(T_1)
+b(T_1,T_2)Q(T_2),
\]
as in Section $\ref{cupproduct}$, then $a(T_1,\alpha_{\mathsf{f}_\circ^*})=1$ because
\[
R(T_1)=(R\star Q)(T_1\cdot\alpha_{\mathsf{f}_\circ^*})=a(T_1,\alpha_{\mathsf{f}_\circ^*})R(T_1).
\] Therefore, equation ($\ref{AJformula1}$) simplifies to the following expression \begin{equation}\label{secondreduction} \mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big)=\frac{-1}{(R\star Q)(p^{-1})} \Big\langle e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)], (\pi_{2,\alpha})^*(\lambda_1)^*\eta_\circ\Big\rangle_{\mathrm{dR}, Y_\alpha}.
\end{equation} By Lemma \ref{lemma: representative} and the straightforward computation $e_\mathrm{ord}\zeta^*(d_1f, d_2f)=e_\mathrm{ord}d\zeta^*f=0$ for any overconvergent function $f\in S^\dagger_{0,0}(K^*_\diamond(p^\alpha))$, we obtain \begin{equation}\label{express in classical forms} e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)] =\omega_{e_\mathrm{ord}\zeta^*\big(H_1, H_2\big)}. \end{equation}
\subsubsection{Replacing overconvergent cuspforms with $p$-adic ones.} Recall the expression \[ R(V(p))\breve{\mathsf{g}}_\mathrm{P}^\diamond=(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\cal{P}]$}}+\alpha^\diamond_2V(p_2)(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\mathfrak{p}_1]$}}+\alpha^\diamond_1V(p_1)(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\mathfrak{p}_2]$}} \] obtained in the proof of Lemma \ref{lemma: R is ok}. Each term of the right hand side is trivial in the cohomology group $\mathrm{H}_{\mathrm{rig},c}^{2}\big(\overline{S}_{\diamond}^{*}(p^\alpha)^{\mbox{\tiny $\mathrm{tor}$}}_{\mbox{\tiny $\mathrm{ord}$}}\big)$, thus there exist three pairs of overconvergent cuspforms \[ (A_1,A_2),\ (B_1,B_2),\ (C_1,C_2)\ \in S^\dagger_{(2,0),(1,0)}(K^*_\diamond(p^\alpha))\oplus S^\dagger_{(0,2),(0,1)}(K^*_\diamond(p^\alpha)) \] satisfying the relations \[ d_1A_2-d_2A_1=(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\cal{P}]$}},\quad d_1B_2-d_2B_1=\alpha^\diamond_2V(p_2)(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\mathfrak{p}_1]$}},\quad d_1C_2-d_2C_1=\alpha^\diamond_1V(p_1)(\breve{\mathsf{g}}_\mathrm{P}^\diamond)^{\mbox{\tiny$[\mathfrak{p}_2]$}}. \] We can further assume that $(A_1,A_2)$ consists of $\cal{P}$-depleted forms, $(B_1,B_2)$ consists of $\mathfrak{p}_1$-depleted forms, and $(C_1,C_2)$ consists of $\mathfrak{p}_2$-depleted forms because depletions operators are idempotents commuting with differential operators. Then, without loss of generality, we can suppose that $(H_1,H_2)=(A_1+B_1+C_1, A_2+B_2+C_2)$. Finally, the next lemma shows that in evaluating $e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)]$, we can replace $\big(H_1, H_2\big)$ with the pair of $p$-adic cuspforms \[ \Big(-d_2^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\cal{P}]$}}- \alpha_1^\diamond V(p_1)d_2^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\frak{p}_2]$}},\quad \alpha_2^\diamond V(p_2) d_1^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\frak{p}_1]$}}\Big). \]
\begin{lemma}\label{PrimitiveVanishing} Suppose $(\mathsf{g}_1,\mathsf{g}_2)\in S_{(2,0),(1,0)}^\mathrm{p\mbox{-}adic}(K^*_\diamond(p^\alpha))\oplus S_{(0,2),(0,1)}^\mathrm{p\mbox{-}adic}(K^*_\diamond(p^\alpha))$ such that \[ d_1 \mathsf{g}_2 -d_2 \mathsf{g}_1 =0, \] and that either $\mathsf{g}_1$ is $\mathfrak{p}_1$-depleted or $\mathsf{g}_2$ is $\mathfrak{p}_2$-depleted. Then \[e_\mathrm{ord}\zeta^*(\mathsf{g}_1, \mathsf{g}_2)=0.\] \end{lemma} \begin{proof} Since this is a statement involving only sections on the identity component of the Hilbert--Blumenthal surface, we may carry out the computation using the classical $q$-expansion.
\noindent First, suppose $\mathsf{g}_1$ is $\mathfrak{p}_1$-depleted. For $\lambda\in L$, write $q^\lambda =\exp(2\pi i(\lambda z_1+\bar{\lambda}z_2)),$ $\bar{\lambda}\in L$ is the algebraic conjugate. Suppose $\mathsf{g}_1 = \sum_\lambda a_\lambda q^\lambda$ and $\mathsf{g}_2 = \sum_\lambda b_\lambda q^\lambda$, where $\lambda$ runs through the totally positive elements in some lattice in $L$. Then $d_1\mathsf{g}_2-d_2\mathsf{g}_1=0$ implies \[ \lambda b_\lambda -\bar{\lambda}a_\lambda=0,\qquad\text{or equivalently}\qquad b_\lambda = \frac{\bar{\lambda}}{\lambda}\cdot a_\lambda. \] Here $a_\lambda$ and $b_\lambda$ are understood to be in $\overline{\mathbb{Q}}_p,$ as well as $\lambda$ and $\bar{\lambda}$ via our fixed embedding $\overline{\mathbb{Q}}\hookrightarrow \overline{\mathbb{Q}}_p,$ corresponding to a place of $\bar{\mathbb{Q}}_p$ above $\mathfrak{p}_1$. As $\mathsf{g}_1$ is $\mathfrak{p}_1$-depleted, $d_1^{-1}\mathsf{g}_1\in S_{0t_L,0t_L}^\mathrm{p\mbox{-}adic}(K^*_\diamond(p^\alpha))$ is well-defined and we can compute \[ \zeta^*(\mathsf{g}_1, \mathsf{g}_2) =\sum_{n>0}\Big(\sum_{\lambda+\bar{\lambda}=n}(a_\lambda+b_\lambda)\Big)q^n=-\sum_{n>0}n\Big(\sum_{\lambda+\bar{\lambda}=n}\frac{a_\lambda}{\lambda}\Big)q^n=-d\zeta^*\big[d_1^{-1}\mathsf{g}_1\big]. \]
Therefore, $e_\mathrm{ord}\zeta^*(\mathsf{g}_1, \mathsf{g}_2)=0$. The case when $\mathsf{g}_2$ is $\mathfrak{p}_2$-depleted is completely analogous. \end{proof}
\begin{corollary}\label{corollcomputation}
\[
e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)]=-\omega_{e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]}
\] \end{corollary} \begin{proof}
Combining equation ($\ref{express in classical forms}$) and Lemma $\ref{PrimitiveVanishing}$ we deduce that
\[
e_\mathrm{ord}\zeta^*[\iota_\mathrm{dR}^B(w)]
=
\omega_{e_\mathrm{ord}\zeta^*\Big( -d_2^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\cal{P}]$}}- \alpha_1^\diamond V(p_1)d_2^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\frak{p}_2]$}},\ \alpha_2^\diamond V(p_2) d_1^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\frak{p}_1]$}}\Big)}.
\]
Then, the claim follows by applying (\cite{BlancoFornea}, Proposition 2.11 $\&$ Lemma 3.10) as in the proof of (\cite{BlancoFornea}, Theorem 5.14) to obtain the vanishing
\[
e_\mathrm{ord}\zeta^*\Big( V(p_2) d_1^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\frak{p}_1]$}}\Big)=0,
\qquad
e_\mathrm{ord}\zeta^*\Big( V(p_1)d_2^{-1}(\breve{\mathsf{g}}^\diamond_\mathrm{P})^{\mbox{\tiny $[\frak{p}_2]$}}\Big)=0.
\] \end{proof}
\noindent Now we are ready to compute the syntomic Abel--Jacobi map of Hirzebruch--Zagier cycles.
\begin{theorem}\label{AJ formula} Suppose that $p$ splits in $L$ with narrowly principal factors and that there is no totally positive unit in $L$ congruent to $-1$ modulo $p$, then \[ \mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big) = \frac{\alpha_{\mathsf{f}^*_\circ}^{\alpha-1}\cdot \alpha_{2,\mathsf{g}_\mathrm{P}}^{-\alpha}\cdot G(\theta_{L,\mathfrak{p}}^{-1}\chi_\mathfrak{p}^{-1})}{\big(1-\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}\alpha_{\mathsf{f}_\circ^*}^{-1}\big)}\cdot\frac{\Big\langle e_{\mathrm{ord}}\zeta^*\big(d_\mu^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P}),\ \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}{\Big\langle\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}, \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}. \] \end{theorem} \begin{proof}
By equation ($\ref{secondreduction}$) and Corollary $\ref{corollcomputation}$ we have
\[\resizebox{ \textwidth}{!}{
$\mathrm{AJ}_{\mathrm{syn}}\Big((\lambda_\alpha,\mathrm{id})_*\Delta_\alpha^\circ\Big)\big(\omega_\mathrm{P}\otimes\eta_\circ\big)=
\frac{1}{\big(1-\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}\alpha_{\mathsf{f}_\circ^*}^{-1}\big)}\cdot \bigg\langle \omega_{e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]},\ (\pi_{2,\alpha})^*(\lambda_1)^*\eta_\circ\bigg\rangle_{\mathrm{dR},Y_\alpha}$
}\]
Corollary $\ref{diagonal restriction family}$ and Proposition $\ref{analysis comp geometry}$ show that the cuspform $e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]$ is of level $K'_0(p)$, and since $U_p^{\alpha-1}\eta_\circ=\alpha_{\mathsf{f}^*_\circ}^{\alpha-1}\cdot \eta_\circ$
\[
\begin{split}
\bigg\langle \omega_{e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]},\ & (\pi_{2,\alpha})^*(\lambda_1)^*\eta_\circ\bigg\rangle_{\mathrm{dR},Y_\alpha}\\
&=
\bigg\langle \omega_{e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]},\ (\lambda_1)^*U_p^{\alpha-1}\eta_\circ\bigg\rangle_{\mathrm{dR},Y_1}\\
&=
\alpha_{\mathsf{f}^*_\circ}^{\alpha-1}\cdot \bigg\langle \omega_{e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]},\ (\lambda_1)^*\eta_\circ\bigg\rangle_{\mathrm{dR},Y_1}.
\end{split}
\]
Finally, by Definition $\ref{def eta}$ \[ \bigg\langle \omega_{e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big]},\ (\lambda_1)^*\eta_\circ\bigg\rangle_{\mathrm{dR},Y_1} = \frac{\Big\langle e_\mathrm{ord}\zeta^*\Big[ d_2^{-1}\big(\nu_\alpha^*(\breve{\mathsf{g}}_\mathrm{P}\lvert\tau^{-1}_{\frak{p}_2^\alpha})\big)^{\mbox{\tiny $[\cal{P}]$}}\Big],\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}{\Big\langle\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}, \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle} \] which together with Proposition $\ref{analysis comp geometry}$ implies the result. \end{proof}
\section{Comparison}\label{Sect: Comparison} Recall our running assumptions: there is an ordinary prime $p$ for $\mathsf{f}_\circ$ such that
\begin{itemize}
\item[\bfcdot] $p$ splits in $L$ with narrowly principal factors;
\item[\bfcdot] there is no totally positive unit in $L$ congruent to $-1$ modulo $p$;
\item[\bfcdot] the eigenvalues of $\mathrm{Fr}_p$ on $\mathrm{As}(\varrho)$ are all distinct modulo $p$.
\end{itemize} Consider the element \[ \boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}:=\alpha_{\mathsf{f}_\circ}\cdot\Big(1-\mathscr{G}(\mathbf{T}(\varpi_{\frak{p}_1})\mathbf{T}(\varpi_{\frak{p}_2}^{-1}))\alpha_{\mathsf{f}^*_\circ}^{-1}\Big)\in\mathbf{I}_\mathscr{G}. \] \begin{theorem}\label{comparison aut-mot} There is an equality of $p$-adic $L$-functions \[ \boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}\cdot \mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ) = \mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)\qquad \text{in}\qquad \boldsymbol{\Pi}\otimes_{\boldsymbol{\Lambda}} \mathbf{I}_\mathscr{G}. \] In particular, $\mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)$ belongs to $\mathbf{I}_\mathscr{G}[\boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}^{-1}]$. \end{theorem} \begin{proof} By Theorem \ref{cor: vanishing criterion} it suffices to show that the elements we want to compare have the same specialization at every arithmetic point of $\mathbf{I}_\mathscr{G}$ of weight $(2t_L,t_L)$. Let $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ be an arithmetic point of weight $(2t_L,t_L)$ and character $(\chi_\circ\theta_L^{-1}\chi^{-1},\mathbbm{1})$ of conductor $p^\alpha$ we have (Proposition \ref{prop: big log})
\[
\mathrm{P}\circ\boldsymbol{\cal{L}}^{\mathsf{f}_\circ}_\mathscr{G}=\Upsilon(\mathrm{P})\cdot\big(\log_\mathrm{BK}\circ\ \mathrm{P}\big)
\qquad\text{for}\qquad \Upsilon(\mathrm{P})=\left(\alpha_{1,\mathsf{g}_{\mathrm{P}}}\alpha_{2,\mathsf{g}_{\mathrm{P}}}\alpha_{\mathsf{f}^*_\circ}^{-1}\right)^\alpha\cdot G\big(\chi_{\mbox{\tiny $\spadesuit$}}\cdot\theta_{\mathbb{Q}\lvert D_p}\big)^{-1}. \] By Propositions \ref{prop: huge period map} \[ \begin{split} \mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) &= \Big\langle \mathrm{P}\circ\boldsymbol{\cal{L}}_\mathscr{G}^{\mathsf{f}_\circ}\big(\boldsymbol{\kappa}_p^{\mathsf{f}_\circ}(\mathscr{G})\big),\ (\lambda_\alpha)^*\omega_{\breve{\mathscr{G}}_\mathrm{P}}\otimes (\lambda_1)^*\eta_\circ\Big\rangle_\mathrm{dR}\\ &= \Upsilon(\mathrm{P})\cdot\Big\langle\log_\mathrm{BK}\big(\boldsymbol{\kappa}_p^{\mathsf{f}_\circ}(\mathscr{G})(\mathrm{P})\big),\ (\lambda_\alpha)^*\omega_{\breve{\mathscr{G}}_\mathrm{P}}\otimes (\lambda_1)^*\eta_\circ\Big\rangle_\mathrm{dR}.\\ \end{split}\] From the definition of $\kappa_\alpha^\mathrm{n.o.}$ (Equation ($\ref{normalizationHZclasses}$)) we continue with \begin{equation}\label{L-1}\begin{split} \mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) &= \alpha_{1,\mathsf{g}_{\mathrm{P}}}^{-\alpha}\cdot\Upsilon(\mathrm{P})\cdot\Big\langle\mathrm{AJ}_\mathrm{syn}(\Delta^\circ_\alpha),\ (\lambda_\alpha)^*\omega_{\breve{\mathscr{G}}_\mathrm{P}}\otimes (\lambda_1)^*\eta_\circ\Big\rangle_\mathrm{dR}\\ &= \alpha_{1,\mathsf{g}_{\mathrm{P}}}^{-\alpha}\cdot\Upsilon(\mathrm{P})\cdot\mathrm{AJ}_\mathrm{syn}\Big((\lambda_\alpha,\lambda_1)_*\Delta^\circ_\alpha\Big)\big( \omega_{\breve{\mathscr{G}}_\mathrm{P}}\otimes \eta_\circ\big)\\ (\text{Theorem}\ \ref{AJ formula})\qquad &= \frac{\alpha_{\mathsf{f}^*_\circ}^{-1}}{\big(1-\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}\alpha_{\mathsf{f}_\circ^*}^{-1} \big)}\cdot\frac{\Big\langle e_{\mathrm{ord}}\zeta^*\big(d_\mu^{\bfcdot}\breve{\mathscr{G}}^{\mbox{\tiny $[\cal{P}]$}}\big)^\dagger(\mathrm{P}),\ \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}{\Big\langle\mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}, \mathsf{f}_\circ^{*\mbox{\tiny $(p)$}}\Big\rangle}. \end{split}\end{equation} We found that \begin{equation}\label{eq111} \begin{split} \mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}) &= \frac{\alpha_{\mathsf{f}^*_\circ}^{-1}}{\big(1-\alpha_{1,\mathsf{g}_\mathrm{P}}\alpha_{2,\mathsf{g}_\mathrm{P}}^{-1}\alpha_{\mathsf{f}_\circ^*}^{-1} \big)}\cdot \mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}). \end{split}\end{equation}
If we let \[ \boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}:=\alpha_{\mathsf{f}_\circ}\cdot\Big(1-\mathscr{G}(\mathbf{T}(\varpi_{\frak{p}_1})\mathbf{T}(\varpi_{\frak{p}_2}^{-1}))\alpha_{\mathsf{f}^*_\circ}^{-1}\Big)\in\mathbf{I}_\mathscr{G} \] then we can rewrite ($\ref{eq111}$) as \[
\boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}(\mathrm{P})\cdot\mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P})
= \mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}). \] The equality $ \boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}\cdot\mathscr{L}^\mathrm{mot}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)= \mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\mathsf{f}_\circ)$ follows from Theorem \ref{cor: vanishing criterion}. \end{proof}
\begin{corollary}\label{second step}
For any arithmetic point $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$
\[
\mathscr{L}_p^\mathrm{aut}(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P})
\not=0\qquad\iff\qquad
\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P})\not=0.
\] \end{corollary} \begin{proof}
The claim follows from Theorem $\ref{comparison aut-mot}$ and the fact that $\boldsymbol{\zeta}_{\mathscr{G},\mathsf{f}_\circ}(\mathrm{P})\not=0$ for any arithmetic point $\mathrm{P}\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$. \end{proof}
\begin{remark}
We are not assuming Conjecture \ref{wishingOhta} for Theorem \ref{comparison aut-mot} and Corollary \ref{second step}. \end{remark}
\subsection{Specialization in parallel weight one}
\begin{proposition}\label{nontriv-specialization} Suppose that the running assumptions on the prime $p$, the representation $\varrho$, and Conjecture \ref{wishingOhta} hold. If the special $L$-value $L(\mathsf{f}_\circ,\mathrm{As}(\varrho),1)$ does not vanish, then there is a surjection
\[
\xymatrix{ \boldsymbol{\cal{V}}_\mathscr{G}(M)(-1)\otimes_{\mathrm{P}_\circ}E_\wp\ar@{->>}[r]
& \mathrm{As}(\varrho).
}\] such that $\boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G})(\mathrm{P}_\circ)$ has non-trivial image under the induced map \[ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}_{\mathrm{P}_\circ}}(M)\big)\longrightarrow\mathrm{H}^1(\mathbb{Q}_p,\mathrm{As}(\varrho)\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}). \] \end{proposition} \begin{proof}
Corollary \ref{firststep} shows that the non-vanishing of the special $L$-value $L(\mathsf{f}_\circ,\mathrm{As}(\varrho),1)$ is equivalent to the non-vanishing of the automorphic $p$-adic $L$-function $\mathscr{L}_p^\mathrm{aut}(\breve{\mathscr{G}},\mathsf{f}_\circ)$ at the arithmetic point $\mathrm{P}_\circ\in\cal{A}_{\boldsymbol{\chi}}(\mathbf{I}_\mathscr{G})$ of parallel weight one corresponding to $\mathsf{g}_\circ^{\mbox{\tiny $(p)$}}$. Then
\[\begin{split}
L\Big(\mathsf{f}_\circ,\mathrm{As}(\varrho),1\Big)\not=0\qquad
\iff&\qquad \mathscr{L}_p^\mathrm{aut}(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}_\circ)\not=0\\
\mbox{\tiny $(\text{Corollary}\ \ref{second step})$}\qquad \iff&\qquad
\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\mathsf{f}_\circ)(\mathrm{P}_\circ)\not=0\\
\mbox{\tiny $(\text{Lemma}\ \ref{cruximplicat})$}\qquad\implies&\qquad
\mathrm{exp}^*_\mathrm{BK}\big(\boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G})(\mathrm{P}_\circ)\big)\not=0\\
\mbox{\tiny $(\text{Equation}\ (\ref{step four}))$}\qquad \iff&\qquad
\boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G})(\mathrm{P}_\circ)\not=0.\\
\end{split}\]
Therefore, by Lemma \ref{correctspec}, the non-trivial class $\boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G})(\mathrm{P}_\circ)\in \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}^{\mathsf{f}_\circ}_{\mathscr{G}_{\mathrm{P}_\circ}}(M)\big)$ maps non-trivially to some copy of $\mathrm{H}^1(\mathbb{Q}_p,\mathrm{As}(\varrho)\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ})$.
\end{proof}
\noindent The choice of surjection $\boldsymbol{\cal{V}}_{\mathscr{G}_{\mathrm{P}_\circ}}(M)(-1)\otimes_OE_\wp\twoheadrightarrow\mathrm{As}(\varrho)$ in Proposition \ref{nontriv-specialization} induces a map \[ \mathrm{H}^1\big(\mathbb{Q}_p,\boldsymbol{\cal{V}}_{\mathscr{G}_{\mathrm{P}},\mathsf{f}_\circ}(M)\big)\longrightarrow\mathrm{H}^1(\mathbb{Q}_p,\mathrm{V}_{\varrho,\mathsf{f}_\circ}). \] We denote the image of $\boldsymbol{\kappa}_{\mathscr{G},\mathsf{f}_\circ}$ under such map by \begin{equation}
\kappa(\mathsf{g}_\circ^{\mbox{\tiny $(p)$}},\mathsf{f}_\circ)\in \mathrm{H}^1\big(\mathbb{Q}, \mathrm{V}_{\varrho,\mathsf{f}_\circ}\big). \end{equation} The quotient map $\mathrm{V}_{\varrho,\mathsf{f}_\circ}\twoheadrightarrow \mathrm{As}(\varrho)\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}$ induces a homomorphism \[ \partial_p: \mathrm{H}^1\big(\mathbb{Q}_p, \mathrm{V}_{\varrho,\mathsf{f}_\circ}\big) \longrightarrow \mathrm{H}^1\big(\mathbb{Q}_p, \mathrm{As}(\varrho)\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}\big) \]
whose kernel is the local Selmer group at $p$, as one can see by analyzing the Hodge--Tate weights, \[ \mathrm{H}^1_f\big(\mathbb{Q}_p, \mathrm{V}_{\varrho,\mathsf{f}_\circ}\big)=\ker(\partial_p). \]
\begin{theorem}\label{criterion crystalline} Suppose that the running assumptions on the prime $p$, the representation $\varrho$, and Conjecture \ref{wishingOhta} hold. Let $\mathsf{g}_\circ^{\mbox{\tiny $(p)$}}$ be any ordinary $p$-stabilization of $\mathsf{g}_\circ$. If the special $L$-value $L(\mathsf{f}_\circ,\mathrm{As}(\varrho),1)$ does not vanish, then the global cohomology class $\kappa(\mathsf{g}_\circ^{\mbox{\tiny $(p)$}},\mathsf{f}_\circ)$ is not crystalline at $p$. Furthermore, \[ \partial_p\big(\kappa(\mathsf{g}_{\circ}^{\mbox{\tiny $(p)$}},\mathsf{f}_\circ)\big)\in \mathrm{H}^1\Big(\mathbb{Q}_p,\mathrm{As}(\varrho)^{\beta_p}\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}\Big) \] where $\mathrm{As}(\varrho)^{\beta_p}$ is the subpace where $\mathrm{Fr}_p$ acts as multiplication by $\beta_p=\beta_1\beta_2$. \end{theorem} \begin{proof}
It follows from the definitions that
\[
\mathrm{Im}\Big(\boldsymbol{\cal{V}}_{\mathscr{G}}^{\mathsf{f}_\circ}(M)\longrightarrow \mathrm{As}(\varrho)\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}\Big)=\mathrm{Im}\Big(\mathrm{Fil}^2 \boldsymbol{\cal{V}}_{\mathscr{G},\mathsf{f}_\circ}(M)\longrightarrow \mathrm{As}(\varrho)\otimes\mathrm{Gr}^0\mathrm{V}_{\mathsf{f}_\circ}\Big),
\]
and that the image of $\boldsymbol{\kappa}^{\mathsf{f}_\circ}_p(\mathscr{G})(\mathrm{P}_\circ)$ coincides with $\partial_p\big(\kappa_p(\mathsf{g}_{\circ}^{\mbox{\tiny $(p)$}},\mathsf{f}_\circ)\big)$. Therefore, invoking Proposition \ref{nontriv-specialization} we deduce the first claim. We obtain the second claim by observing that Proposition $\ref{AsaiFil}$ implies that
\[
\mathrm{As}(\varrho)^{\beta_p}=\mathrm{Fil}^2\mathrm{As}(\varrho).
\] \end{proof}
\section{On the equivariant BSD-conjecture}
Let $(\mathrm{W},\varrho)$ be a $d$-dimensional self-dual Artin representation with coefficients in a number field $D$. Suppose $\varrho$ factors through the Galois group $G(H/\mathbb{Q})$ of a number field $H$ \[\xymatrix{ \Gamma_\mathbb{Q}\ar[rr]^{\varrho}\ar@{->>}[dr]&& \mathrm{GL}_d(D)\\ & G(H/\mathbb{Q})\ar@{^{(}->}[ru]&. }\] Let $E_{/\mathbb{Q}}$ be a rational elliptic curve, then its algebraic rank with respect to the Artin representation $\varrho$ is defined as \[ r_\mathrm{alg}(E,\varrho)=\mathrm{dim}_D\ E(H)^\varrho_D, \] the dimension of $E(H)^\varrho_D=\mathrm{Hom}_{G(H/\mathbb{Q})}(\varrho, E(H)\otimes D)$ the $\varrho$-isotypic component of the Mordell-Weil group. For $p$ a rational prime and $\wp\mid p$ an $O_D$-prime ideal, we can consider the $p$-adic Galois representations \[ \mathrm{V}_\wp(E)=\mathrm{H}^1_{\acute{\mathrm{e}}\mathrm{t}}(E_{\bar{\mathbb{Q}}},D_\wp(1)), \qquad \mathrm{W}_\wp=\mathrm{W}\otimes_DD_\wp. \] As the Artin representation $\varrho$ is self-dual, the Kummer map allows us to identify the group $E(H)^\varrho_D$ with a subgroup of the Bloch--Kato Selmer group $\mathrm{H}^1_f(\mathbb{Q},\mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))$. Then, local Tate duality together with the global Poitou-Tate exact sequence can be used to show that global cohomology classes not crystalline at $p$ bound the size of the $\varrho$-isotypic component of the Mordell-Weil group of $E_{/\mathbb{Q}}$ (\cite{DR2}, Section 6.1).
\begin{lemma}\label{zerolocalization}
Let $\kappa_1,\dots,\kappa_d\in\mathrm{H}^1(\mathbb{Q}, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))$ be global cohomology classes with linearly independent images in the singular quotient $\mathrm{H}^1_\mathrm{sing}(\mathbb{Q}_p, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))$ at $p$. Then the $\varrho$-isotypic part of $E(H)$ is trivial:
\[
r_\mathrm{alg}(E,\varrho)=0.
\]
\end{lemma} \begin{proof}
The local cohomology group $\mathrm{H}^1(\mathbb{Q}_p, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))$ is a $2d$-dimensional $D_\wp$-vector space and the local Tate pairing induces a perfect duality of $d$-dimensional spaces (\cite{DR2}, Lemma 6.1)
\[
\langle\ , \rangle: \mathrm{H}^1_f(\mathbb{Q}_p, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))\times\mathrm{H}^1_\mathrm{sing}(\mathbb{Q}_p, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))\longrightarrow D_\wp.
\]
The global Poitou-Tate exact sequence implies that the image of the localization at $p$
\[
\mathrm{loc}_p:\mathrm{H}^1(\mathbb{Q}, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))\longrightarrow\mathrm{H}^1(\mathbb{Q}_p, \mathrm{W}_\wp\otimes \mathrm{V}_\wp(E))
\]
is $d$-dimensional (\cite{DR2}, Lemma 6.2). Therefore, the existence of global cohomology classes $\kappa_1,\dots,\kappa_d$ whose localizations generate the singular quotient at $p$ implies that the restriction of $\mathrm{loc}_p$ to the Bloch--Kato Selmer group $\mathrm{H}^1_f(\mathbb{Q},\mathrm{V}_\wp(E)\otimes \mathrm{W}_\wp)$ is the zero map. The commutativity of the diagram \[\xymatrix{ E(H)^{\varrho}_D\ar[r]\ar@{^{(}->}[d]& \oplus_{\mathfrak{p}\mid p}\mathrm{Hom}_{G(H_\mathfrak{p}/\mathbb{Q}_p)}(\mathrm{W}_\wp, E(H_\mathfrak{p})\otimes D_\wp)\ar@{^{(}->}[d]\\ \mathrm{H}^1_f(\mathbb{Q}, \mathrm{W}_\wp\otimes\mathrm{V}_\wp(E))\ar[r]^0& \mathrm{H}^1(\mathbb{Q}_p,\mathrm{W}_\wp\otimes \mathrm{V}_\wp(E)) }\] and the injectivity of the vertical Kummer maps imply the triviality of the top horizontal morphism. Since $E(H)\otimes D\hookrightarrow E(H_\mathfrak{p})\otimes D_\wp$ is injective for all $O_H$-prime ideal $\mathfrak{p}\mid p$, we deduce that $\dim_DE(H)_D^{\varrho}=0$. \end{proof}
\subsubsection{On twisted triple products.} The goal of this section is apply the idea of $p$-adic deformation to the setting where the self-dual Artin representation is $4$-dimensional and arises as the tensor induction \[ \mathrm{As}(\varrho)=\otimes\mbox{-}\mathrm{Ind}_L^\mathbb{Q}(\varrho) \] of a totally odd, irreducible two-dimensional Artin representation $\varrho:\Gamma_L\to\mathrm{GL}_2(D)$ of the absolute Galois group of a real quadratic field $L$. We suppose that $\varrho$ has conductor $\mathfrak{Q}$ and that the tensor induction of the determinant $\det(\varrho)$ is the trivial character. Let $E_{/\mathbb{Q}}$ be a rational elliptic curve of conductor $N$, and for any rational prime $p$, we consider the Kummer self-dual $p$-adic Galois representation of $\Gamma_\mathbb{Q}$ \[ \mathrm{V}_{\varrho,E}=\mathrm{As}(\varrho)\otimes \mathrm{V}_\wp(E). \] By modularity (\cite{W}, \cite{TW}, \cite{PS}) there are a primitive Hilbert cuspform $\mathsf{g}_\varrho\in S_{t_L,t_L}(\frak{Q};D)$ of parallel weight one, and a primitive elliptic cuspform $\mathsf{f}_E\in S_{2,1}(N;\mathbb{Q})$ of weight $2$ associated to $\varrho$ and $E$ respectively. The twisted $L$-function $L\big(E,\mathrm{As}(\varrho),s\big)$ has meromorphic continuation to $\mathbb{C}$ and a functional equation centered at $s=1$, at which the $L$-function is holomorphic.
\begin{theorem}\label{Main Theorem} Suppose that $N$ is coprime to $\mathfrak{Q}$, split in $L$, and there exists an ordinary prime $p\nmid 2N\cdot\frak{Q}$ for $E_{/\mathbb{Q}}$ such that
\begin{itemize}
\item[($1$)] $p$ splits in $L$ with narrowly principal factors;
\item[($2$)] there is no totally positive unit in $L$ congruent to $-1$ modulo $p$;
\item[($3$)] the eigenvalues of $\mathrm{Fr}_p$ on $\mathrm{As}(\varrho)$ are all distinct modulo $p$.
\end{itemize} If, additionally, $\varrho$ is residually not solvable and Conjecture \ref{wishingOhta} holds, then \[ r_\mathrm{an}\big(E,\mathrm{As}(\varrho)\big)=0\quad\implies\quad r_\mathrm{alg}\big(E,\mathrm{As}(\varrho)\big)=0. \] \end{theorem} \begin{proof}
As the eigenvalues of $\mathrm{Fr}_p$ on $\mathrm{As}(\varrho)$ are all distinct modulo $p$, the cuspform $\mathsf{g}_\varrho$ has $4$ distinct $p$-stabilizations
\[
\mathsf{g}_\varrho^{\mbox{\tiny $(\alpha_1\alpha_2)$}},
\quad
\mathsf{g}_\varrho^{\mbox{\tiny $(\alpha_1\beta_2)$}},
\quad
\mathsf{g}_\varrho^{\mbox{\tiny $(\beta_1\alpha_2)$}}
\quad \text{and}\quad
\mathsf{g}_\varrho^{\mbox{\tiny $(\beta_1\beta_2)$}}.
\]
Therefore, assuming $r_\mathrm{an}\big(E,\mathrm{As}(\varrho)\big)=0$, Theorem \ref{criterion crystalline} produces $4$ global cohomology classes
\[
\kappa\big(\mathsf{g}_\varrho^{\mbox{\tiny $(\alpha_1\alpha_2)$}},\mathsf{f}_E\big),
\quad
\kappa\big(\mathsf{g}_\varrho^{\mbox{\tiny $(\alpha_1\beta_2)$}},\mathsf{f}_E\big),
\quad
\kappa\big(\mathsf{g}_\varrho^{\mbox{\tiny $(\beta_1\alpha_2)$}},\mathsf{f}_E\big),
\quad
\kappa\big(\mathsf{g}_\varrho^{\mbox{\tiny $(\beta_1\beta_2)$}},\mathsf{f}_E\big)\quad\in\ \mathrm{H}^1\big(\mathbb{Q},\mathrm{V}_{\varrho,E}\big)
\]
whose images in the singular quotient $\mathrm{H}^1_\mathrm{sing}\big(\mathbb{Q}_p,\mathrm{V}_{\varrho,E}\big)$ are linearly independent. The result follows by invoking Lemma \ref{zerolocalization}. \end{proof}
\subsection{Rational elliptic curves over quintic fields} In this section we show that in many cases of interest there are infinitely many ordinary primes $p\nmid 2N\cdot\frak{Q}$ for a rational elliptic curve $E_{/\mathbb{Q}}$ satisfying assumptions ($1$),($2$),($3$) of Theorem \ref{Main Theorem}.
\subsubsection{Narrowly principal prime factors.} By class field theory, a prime is split in $L$ with narrowly principal prime factors if and only if it totally splits in the narrow class field $H_L^+$ of $L$. \begin{lemma}\label{narrow class}
Let $L=\mathbb{Q}(\sqrt{d})$ be a real quadratic field.
\begin{itemize}
\item[\bfcdot] If $d\equiv_41$ then $H_L^+\cap\mathbb{Q}(\zeta_8)=\mathbb{Q}$.
\item[\bfcdot] If $d\equiv_43$ then $\mathbb{Q}(i)\subseteq H_L^+$ and $H_L^+\cap\mathbb{Q}(\sqrt{\pm2})=\mathbb{Q}$.
\item[\bfcdot] If $d\equiv_86$ then $H_L^+\cap \mathbb{Q}(i)=\mathbb{Q}$, $H_L^+\cap\mathbb{Q}(\sqrt{2})=\mathbb{Q}$ and $\mathbb{Q}(\sqrt{-2})\subseteq H_L^+$.
\item[\bfcdot] If $d\equiv_82$ then $H_L^+\cap \mathbb{Q}(i)=\mathbb{Q}$, $H_L^+\cap\mathbb{Q}(\sqrt{-2})=\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2})\subseteq H_L^+$.
\end{itemize} \end{lemma} \begin{proof}
The proof is a straightforward verification.
\end{proof}
We recall the lattice of subfields of $\mathbb{Q}(\zeta_{16})$ \begin{equation}\label{lattice of subfields}
\xymatrix{
& \mathbb{Q}(\zeta_{16})\ar@{-}[d]\ar@{-}[dl]\ar@{-}[dr]& &\\
\mathbb{Q}(\zeta_{16})^+\ar@{-}[d]& F\ar@{-}[dl]& \mathbb{Q}(\zeta_8)\ar@{-}[dll]\ar@{-}[dl]\ar@{-}[d]\\
\mathbb{Q}(\sqrt{2})\ar@{-}[dr]& \mathbb{Q}(i)\ar@{-}[d]& \mathbb{Q}(\sqrt{-2})\ar@{-}[dl]\\
& \mathbb{Q}& &
} \end{equation} where $F/\mathbb{Q}$ is the splitting field of the polynomial $X^4+4X^2+2$.
\begin{proposition}\label{inf narrow princ}
Let $L=\mathbb{Q}(\sqrt{d})$ be a real quadratic field, then the primes $p\equiv_{16}9$ which are split in $L$ with narrowly principal factors have positive density. \end{proposition} \begin{proof} It follows directly from Lemma $\ref{narrow class}$ that:
\begin{itemize}
\item [\bfcdot] if $d\equiv_41$ then $\mathbb{Q}(\zeta_{16})\cap H_L^+=\mathbb{Q}$;
\item [\bfcdot] if $d\equiv_43$ then $\mathbb{Q}(\zeta_{16})\cap H_L^+=\mathbb{Q}(i)$;
\item [\bfcdot] if $d\equiv_86$ then $\mathbb{Q}(\zeta_{16})\cap H_L^+=\mathbb{Q}(\sqrt{-2})$.
\end{itemize} When $d\equiv_82$ we claim that $\mathbb{Q}(\zeta_{16})\cap H_L^+=\mathbb{Q}(\sqrt{2})$. Indeed, in this case the intersection could be either $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(\zeta_{16})^+$ or $F$ and we show that the latter two options cannot occur. Let $A$ denote either $\mathbb{Q}(\zeta_{16})^+$ or $F$. When $d=2$ then $\mathbb{Q}(\zeta_{16})\cap H_L^+=\mathbb{Q}(\sqrt{2})$ because $A/\mathbb{Q}(\sqrt{2})$ is ramified at $2$. When $d\not=2$, then $L(\sqrt{2})/\mathbb{Q}(\sqrt{2})$ is a proper extension unramified at $2$. It follows that $L(\sqrt{2})\cdot A/L(\sqrt{2})$ is ramified at $2$ and cannot be contained in $H_L^+$.
\noindent Since the rational primes $p\equiv_{16}9$ are those totally split in $\mathbb{Q}(\zeta_8)$ and inert in the extension $\mathbb{Q}(\zeta_8)\subseteq\mathbb{Q}(\zeta_{16})$, the analysis above of the intersection $\mathbb{Q}(\zeta_{16})\cap H_L^+$ together with Chebotarev's density theorem finishes the proof. \end{proof}
\subsubsection{Congruences for totally positive units.} Let $\epsilon\in\cal{O}_{L,+}^\times$ be a generator of the totally positive units and $p$ a rational prime split in $L$. Then requiring that there is no totally positive unit congruent to $-1$ modulo $p$ is equivalent to ask that, for $\frak{p}\mid p$, the subgroup $\langle \bar\epsilon\rangle$ of $(\cal{O}_L/\frak{p})^\times$, generated by the reduction of $\epsilon$, has odd order.
\begin{lemma}\label{intermediatefields}
Let $\epsilon\in\cal{O}_{L,+}^\times$ be a generator of the totally positive units of $L=\mathbb{Q}(\sqrt{d})$, then the totally real number field $L(\sqrt{\epsilon})$ is either equal to $L$ or it is biquadratic over $\mathbb{Q}$. Suppose that $L(\sqrt{\epsilon})$ is biquadratic and write $\epsilon=a+b\sqrt{d}$ for $a,b\in\mathbb{N}$, then the subfields of $L(\sqrt{\epsilon})$ are
\[\xymatrix{
& L(\sqrt{\epsilon})\ar@{-}[d]\ar@{-}[dl]\ar@{-}[dr]&\\
\mathbb{Q}(\sqrt{2(a+1)})\ar@{-}[dr]& \mathbb{Q}(\sqrt{d})\ar@{-}[d]& \mathbb{Q}(\sqrt{2(a-1)})\ar@{-}[dl]\\
&\mathbb{Q}&.
}\] \end{lemma} \begin{proof}
If the fundamental unit of $L$ is not totally positive, then $\epsilon$ is a square in $L$ and $L(\sqrt{\epsilon})=L$. If $\epsilon$ is the fundamental unit, then the number field $L(\sqrt{\epsilon})$ is the splitting field of the polynomial
\[
X^4-\mathrm{Tr}_{L/\mathbb{Q}}(\epsilon)X^2+1 = (X^2-\epsilon)(X^2-1/\epsilon),
\]
hence it is biquadratic over $\mathbb{Q}$ and totally real. Using the relation $\mathrm{N}_{L/\mathbb{Q}}(\epsilon)=1$ one sees that
\[
\left(\sqrt{\frac{a+1}{2}}+\sqrt{\frac{a-1}{2}}\right)^2=\epsilon
\]
and the claim follows. \end{proof}
\begin{remark} The number field $L(\sqrt[8]{\epsilon})$ is not Galois over $\mathbb{Q}$. Its Galois closure is obtained by adding an $8$-th root of unity. Indeed $J=L(\sqrt[8]{\epsilon},\zeta_8)$ is the splitting field of the polynomial \[ X^{16}-\mathrm{Tr}_{L/\mathbb{Q}}(\epsilon)X^8+1 = (X^8-\epsilon)(X^8-1/\epsilon). \] It is clear from this description that $J/\mathbb{Q}$ is a solvable extension. \end{remark}
\begin{lemma}\label{nounits} Let $\epsilon\in\cal{O}_{L,+}^\times$ be a generator of the totally positive units and $J/\mathbb{Q}$ the Galois closure of $L(\sqrt[8]{\epsilon})$. Then for all but finitely many primes $p\equiv_{16}9$ which are totally split in $J$, there is no totally positive unit in $L$ congruent to $-1$ modulo $p$. \end{lemma} \begin{proof}
Suppose that $p\equiv_{16}9$ and totally split in $J$. If $\frak{p}$ is an $\cal{O}_L$-prime ideal above $p$ then $(\cal{O}_L/\frak{p})^\times\cong(\mathbb{Z}/p\mathbb{Z})^\times$ and for all but finitely many such primes the reduction $\bar{\epsilon}$ of $\epsilon$ modulo $\frak{p}$ is an $8$-th power. It follows that $\bar{\epsilon}$ generates a subgroup of order dividing $(p-1)/8$. Since $p\equiv_{16}9$ that order is odd and the subgroup cannot contain $-1$.
\end{proof}
\begin{corollary}\label{narrow cong}
Let $L$ be a real quadratic field, then the primes $p$ split in $L$ with narrowly principal factors and such that there is no totally positive unit congruent to $-1$ mod $p$ have positive density. \end{corollary} \begin{proof}
By Proposition $\ref{inf narrow princ}$ and Lemma $\ref{nounits}$, all the primes which are totally split in $J$, $H_L^+$, $\mathbb{Q}(\zeta_8)$ and inert in the extension $\mathbb{Q}(\zeta_8)\subseteq\mathbb{Q}(\zeta_{16})$ satisfy the requirements. Clearly the splitting conditions for $J$ and $H_L^+$ are compatible, and from Proposition $\ref{inf narrow princ}$ we know that also the splitting conditions for $H_L^+$ and $\mathbb{Q}(\zeta_{16})$ are compatible too. We are left to understand $J\cap\mathbb{Q}(\zeta_{16})$. Clearly $\mathbb{Q}(\zeta_8)$ is contained in the intersection because $J=L(\sqrt[8]{\epsilon},\zeta_8)$. One can check that \[ [J:\mathbb{Q}]=\begin{cases}
16\cdot 2& \text{if}\ \mathbb{Q}(\sqrt{2})\subseteq L(\sqrt{\epsilon})\\
16\cdot 4& \text{if}\ \mathbb{Q}(\sqrt{2})\not\subseteq L(\sqrt{\epsilon}),\\ \end{cases} \] $[L(\sqrt[4]{\epsilon},i):\mathbb{Q}]=16$, and \[ L(\sqrt[4]{\epsilon},i)\cap\mathbb{Q}(\zeta_{16})=\begin{cases}
\mathbb{Q}(\zeta_8) & \text{if}\ \mathbb{Q}(\sqrt{2})\subseteq L(\sqrt{\epsilon})\\
\mathbb{Q}(i)& \text{if}\ \mathbb{Q}(\sqrt{2})\not\subseteq L(\sqrt{\epsilon}).\\ \end{cases} \] Suppose by contradiction that $\mathbb{Q}(\zeta_{16})\subseteq J$. Then $J=\mathbb{Q}(\zeta_{16})\cdot L(\sqrt[4]{\epsilon},i)$ because $\mathbb{Q}(\zeta_{16})\cdot L(\sqrt[4]{\epsilon},i)$ is a subfield of the same degree as $J$. Therefore the natural injection \[ G(J/\mathbb{Q})\hookrightarrow G(\mathbb{Q}(\zeta_{16})/\mathbb{Q})\times G(L(\sqrt[4]{\epsilon},i)/\mathbb{Q}) \] produces a contradiction because $G(J/\mathbb{Q})$ contains an element of order $8$ while the other two Galois groups have exponent $4$. In summary, we showed that $J\cap\mathbb{Q}(\zeta_{16})=\mathbb{Q}(\zeta_8)$ so that all the required splitting conditions are compatible. Chebotarev's density theorem finishes the proof. \end{proof}
\noindent When $\varrho$ is one of the Artin representations constructed in \cite{MicAnalytic}, the next proposition shows that there are infinitely many primes satisfying assumptions ($1$),($2$),($3$) of Theorem \ref{Main Theorem}. \begin{proposition}\label{choiceofp} Let $K/\mathbb{Q}$ be an $S_5$-quintic extension whose Galois closure $\widetilde{K}/\mathbb{Q}$ contains a real quadratic field $L$. Suppose $E_{/\mathbb{Q}}$ is a rational elliptic curve, then there are infinitely many ordinary primes $p$ for $E_{/\mathbb{Q}}$ such that \begin{itemize}
\item[\bfcdot] $p$ splits in $L$ with narrowly principal factors;
\item[\bfcdot] there is no totally positive unit in $L$ congruent to $-1$ modulo $p$;
\item[\bfcdot] the conjugacy class of $\mathrm{Fr}_p$ in $G(\widetilde{K}/\mathbb{Q})\cong S_5$ is that of $5$-cycles. \end{itemize} \end{proposition} \begin{proof} Since $\widetilde{K}/L$ is a non-solvable extension, we deduce that $\widetilde{K}\cap J=L$ and $\widetilde{K}\cap H_L^+=L$. Moreover, $\widetilde{K}\cap\mathbb{Q}(\zeta_{16})$ is either $\mathbb{Q}(\sqrt{2})$ or $\mathbb{Q}$ according to whether $L=\mathbb{Q}(\sqrt{2})$ or not.
Given that $5$-cycles are in the kernel of the surjection $G(\widetilde{K}/\mathbb{Q})\twoheadrightarrow G(L/\mathbb{Q})$, one can prove the existence of a set of positive density consisting of rational primes satisfying the listed conditions as in Corollary $\ref{narrow cong}$. It then remains to show that infinitely many of such primes are of good ordinary reduction for the given elliptic curve.
When $E_{/\mathbb{Q}}$ does not have complex multiplication, the ordinary primes have density one so there are infinitely many ordinary primes that satisfy the listed conditions. When the elliptic curves $E_{/\mathbb{Q}}$ has complex multiplication by a quadratic imaginary field $B$, a prime is ordinary for $E_{/\mathbb{Q}}$ if it splits in $B$. As this new splitting requirement is compatible with those coming from the conditions above, Chebotarev's density theorem gives the claim. \end{proof}
\begin{corollary}\label{finalquintic}
Let $K/\mathbb{Q}$ be a non-totally real $S_5$-quintic extension whose Galois closure contains a real quadratic field $L$. Suppose that $N$ is odd, unramified in $K/\mathbb{Q}$ and split in $L$, and that Conjecture \ref{wishingOhta} holds, then
\[
r_\mathrm{an}(E/K)=r_\mathrm{an}(E/\mathbb{Q})\quad\implies\quad r_\mathrm{alg}(E/K)=r_\mathrm{alg}(E/\mathbb{Q}).
\] \end{corollary} \begin{proof} By (\cite{MicAnalytic}, Corollary 4.2) there exists a parallel weight one Hilbert eigenform $\mathsf{g}_K$ over $L$ of level $\frak{Q}$ prime to $N$ such that $\varrho_{\mathsf{g}_K}$ is residually not solvable and $\mathrm{As}(\varrho_{\mathsf{g}_K})\cong\mathrm{Ind}_K^\mathbb{Q}\mathbbm{1}-\mathbbm{1}$. From the Artin formalism of $L$-functions we deduce that
\[
r_\mathrm{an}\big(E,\mathrm{As}(\varrho_{\mathsf{g}_K})\big)=r_\mathrm{an}(E/K)-r_\mathrm{an}(E/\mathbb{Q})\quad\text{and}\quad r_\mathrm{alg}\big(E,\mathrm{As}(\varrho_{\mathsf{g}_K})\big)=r_\mathrm{alg}(E/K)-r_\mathrm{alg}(E/\mathbb{Q}).
\] The result follows from Theorem $\ref{Main Theorem}$ after invoking Proposition \ref{choiceofp}. \end{proof}
\iffalse
\section{Maybe can be used to prove Ohta's isomorphism}
\subsection{Ohta's isomorphism avoidance} We are interested in $\mathrm{H}^d_\mathrm{n.o.}(K^1(p^\alpha);O)_\mathscr{G}= e^*_\mathscr{G}\mathrm{H}^d_\mathrm{et}(\mathrm{Sh}^\mathrm{tor}_{K^1(p^\alpha)}(G_L)_{\bar{\mathbb{Q}}_p},O(d))$ and
we denote by $\mathrm{Gr}^i_\mathscr{G}(K^1(p^\alpha);O)$ the graded pieces of $\mathrm{H}^d_\mathrm{n.o.}(K^1(p^\alpha);O)_\mathscr{G}$ with respect to the nearly ordinary filtration. By looking at the diagram \[\xymatrix{ e^*_\mathscr{G}S_{2t,t}(K^1(p^\alpha);E_\wp)\ar@{^{(}->}[r]\ar[dr]^\sim& e^*_\mathscr{G}\mathrm{H}^d_\mathrm{dR}(\mathrm{Sh}^\mathrm{tor}_{K^1(p^\alpha)}(G_L)/E_\wp)\ar[d]& \\ \mathrm{D}\big(\mathrm{Gr}^0_\mathscr{G}(K^1(p^\alpha);O)\big)\ar@{^{(}->}[r]\ar@{^{(}.>}[u]_{\Upsilon_\alpha}&\mathrm{D}_\mathrm{dR}\big(\mathrm{Gr}^0_{\mathscr{G}}(K^1(p^\alpha);E_\wp)\big), }\] we set $\mathfrak{S}^\mathrm{n.o.}(K^1(p^\alpha);O)_\mathscr{G}=\mathrm{Im}\big(\Upsilon_\alpha\big)$, which is an $O$-lattice in $e^*_\mathscr{G}S_{2t,t}(K^1(p^\alpha);E_\wp)$, and we denote by $\Upsilon_\infty$ the isomoprhism between the projective limits \begin{equation}\label{eq: d-map} \Upsilon_\infty:\mathbb{D}\big(\mathrm{Gr}^0_{\mathscr{G}}(K^1(p^\infty);O)\big)\overset{\sim}{\longrightarrow}\mathfrak{S}^\mathrm{n.o.}(K^1(p^\infty);O)_\mathscr{G}. \end{equation}
If $\mathscr{G}$ is an $\mathfrak{Q}$-isolated family, we can fix an isomorphism $e^*_\mathscr{G}S^*_{2t,t}(K^1(p^\infty);O)\cong\boldsymbol{\Lambda}_{L,\chi}\cong O\llbracket \mathbf{W}_L\rrbracket$ of $\boldsymbol{\Lambda}_{L,\chi}$-modules. Then, each $\mathfrak{S}^\mathrm{n.o.}(K^1(p^\alpha);O)_\mathscr{G}$ can be seen as an $O[\mathbf{W}_\alpha]$-module $X_\alpha\subset E_\wp[\mathbf{W}_\alpha]$. Their projective limit $X_\infty$ is an $O\llbracket \mathbf{W}_L\rrbracket$-submodule of $E_\wp\llbracket \mathbf{W}_L\rrbracket$.
\begin{remark}
An element $x\in E_\wp\llbracket \mathbf{W}_L\rrbracket$ is zero if and only if $\varepsilon(x)=0$ for all finite order characters $\varepsilon:\mathbf{W}_L\to\bar{\mathbb{Q}}_p^\times$.
\end{remark} For any $x\in X_\infty$ there is a map $\phi_{x}:e^*_\mathscr{F}S^*_{2t,t}(K^1(p^\infty);O)\to \mathfrak{S}^\mathrm{n.o.}(K^1(p^\infty);O)_\mathscr{F}$ defined by the commuting diagram \begin{equation}\label{eq: map} \xymatrix{
\mathfrak{S}^\mathrm{n.o.}(K^1(p^\infty);O)_\mathscr{F} & e^*_\mathscr{F}S^*_{2t,t}(K^1(p^\infty);O)\ar@{.>}[l]_{\phi_{x}}\ar[d]^\sim\\
X_\infty\ar[u]_\sim& O\llbracket \mathbf{W}_F\rrbracket\ar[l]_x
} \end{equation}
For any $\cal{O}_L$-ideal $\mathfrak{Q}'$ divisible by $\mathfrak{Q}$, we consider $\mathrm{H}^d_\mathrm{n.o.}(K^1_{\mathfrak{Q}'}(p^\infty);O)_\mathscr{G}$ defined similarly as before, with the only difference that we use the idempotent determined by $\mathscr{G}$ in the Hecke algebra of tame level $\mathfrak{Q}'$ generated by the good Hecke operators. We denote by $\mathrm{Gr}^i_\mathscr{G}(K^1_{\mathfrak{Q}'}(p^\alpha);O)$ the graded pieces of $\mathrm{H}^d_\mathrm{n.o.}(K^1_{\mathfrak{Q}'}(p^\alpha);O)_\mathscr{G}$ with respect to the nearly ordinary filtration.
Let $\mathscr{G}$ be an $\mathfrak{Q}$-isolated Hida family. For any $\cal{O}_L$-ideal $\mathfrak{Q}'$ divisible by $\mathfrak{Q}$, the $\mathbf{K}_\mathscr{G}$-submodule of $\overline{\mathbf{S}}^\mathrm{n.o.}(V^1(\mathfrak{Q}');\mathbf{K}_\mathscr{G})$ generated by $\{\mathscr{G}_\mathfrak{a}\}_{\mathfrak{a}\lvert (\mathfrak{Q}'/\mathfrak{Q})}$ can be mapped to $\mathbb{D}\big(\mathrm{Gr}^0_\mathscr{G}(K^1_{\mathfrak{Q}'}(p^\infty);O)\big)\otimes_\mathbb{Z}\mathbb{Q}$ as follows: \begin{itemize}
\item[] Since $\mathscr{G}$ is $\mathfrak{Q}$-isolated, by choosing $x\in X_\infty$ we can define a map $\mathbf{K}_\mathscr{G}\cdot\mathscr{G}\to\mathbb{D}\big(\mathrm{Gr}^0_{\mathscr{G}}(K^1(p^\infty);O)\big)\otimes_\mathbb{Z}\mathbb{Q}$ using ($\ref{eq: d-map}$) and ($\ref{eq: map}$). By pulling back the morphism along the different degeneracy maps from level $\mathfrak{Q}'$ to level $\mathfrak{Q}$ we get that for every $\mathfrak{a}\lvert (\mathfrak{Q}'/\mathfrak{Q})$ there is a diagram
\[\xymatrix{
\mathbb{D}\big(\mathrm{Gr}^0_\mathscr{G}(K^1_{\mathfrak{Q}'}(p^\infty);O)\big)\otimes_\mathbb{Z}\mathbb{Q}&& \overline{\mathbf{S}}^\mathrm{n.o.}(V^1(\mathfrak{Q}');\mathbf{K}_\mathscr{G})&\mathscr{G}_\mathfrak{a}\\
\mathbb{D}\big(\mathrm{Gr}^0_{\mathscr{G}}(K^1(p^\infty);O)\big)\otimes_\mathbb{Z}\mathbb{Q}\ar@{^{(}->}[u]&& \mathbf{K}_\mathscr{G}\cdot \mathscr{G}\ar[ll]_{\phi_x}\ar@{^{(}->}[u]&\mathscr{G}\ar@{{|}->}[u]
}\]
which we use to define the image of $\mathscr{G}_\mathfrak{a}$. \end{itemize}
\begin{proposition}
There is a perfect pairing (explicitly defined from the Poincar\'e pairing) of $\mathbf{h}_\mathrm{good}^\mathrm{n.o.}(V^1(\mathfrak{Q}'),O)$-modules
\[
\mathrm{H}^d_\mathrm{n.o.}(K^1_{\mathfrak{Q}'}(p^\infty);O)_\mathscr{G}\times \mathrm{H}^d_\mathrm{n.o.}(K_1^{\mathfrak{Q}'}(p^\infty);O)_\mathscr{G}\longrightarrow\boldsymbol{\Lambda}_{L,\chi}(d)
\]
such that there is an induced perfect pairing
\[
\mathrm{Fil}^d\mathrm{H}^d_\mathrm{n.o.}(K_1^{\mathfrak{Q}'}(p^\infty);O)_\mathscr{G}\times \mathrm{Gr}^0_\mathscr{G}(K^1_{\mathfrak{Q}'}(p^\infty);O)\longrightarrow\boldsymbol{\Lambda}_{L,\chi}(d).
\] \end{proposition} \begin{proof}
$\color{red}{TO}$ $\color{red}{DO..}$ \end{proof} For any $x_\infty\in X_\infty$ and test-vector $\breve{\mathscr{G}}$ there should be a $\mathbf{h}_\mathrm{good}^\mathrm{n.o.}(V^1(\mathfrak{Q}'),O)$-linear map\footnote{Here I hope that, by multiplicity one, the abstract Hecke algebra for the good Hecke operators away from $\mathfrak{Q}'$ generates $\mathbf{I}_\mathscr{G}$.} (hecke algebra away from $\mathfrak{Q}'$) \[ \Phi_{\breve{\mathscr{G}}}(x_\infty):\mathbf{K}_\mathscr{G}\longrightarrow \mathbb{D}\big(\mathrm{Gr}^0_\mathscr{G}(K^1_{\mathfrak{Q}'}(p^\infty);O)\big)\otimes_\mathbb{Z}\mathbb{Q}\qquad\mathrm{by}\qquad 1\mapsto x_\infty\breve{\mathscr{G}}. \] By taking $\boldsymbol{\Lambda}_{L,\chi}$-linear duals we obtain an explicit homomorphism of $\boldsymbol{\Lambda}_{L,\chi}$-modules \begin{equation}\label{eq: dieudonne projection} \Phi_{\breve{\mathscr{G}}}(x_\infty)^*: \mathbb{D}\big(\mathrm{Fil}^d\mathrm{H}^d_\mathrm{n.o.}(K_1^{\mathfrak{Q}'}(p^\infty);O)_\mathscr{G}\big)\longrightarrow \mathbf{K}_\mathscr{G}. \end{equation}
\begin{proposition} Let $\mathbf{U}_\mathscr{G}=\mathbf{I}_\mathscr{G}(\boldsymbol{\psi}_{\mathscr{G},p}^{-1}\chi_{\mathscr{G},p})$ be the unramified twist of $\mathrm{Fil}^d\big(\mathrm{As}\mathbf{V}_\mathscr{G}\big)$. For any $\Lambda$-adic test vector $\breve{\mathscr{G}}\in e_\mathscr{G}\overline{\mathbf{S}}^\mathrm{n.o.}(V^1(\mathfrak{Q}');\mathbf{K}_\mathscr{G})$ there exists a homomorphism of $\mathbf{K}_\mathscr{G}$-modules
\[
\langle\ ,x_\infty\omega_{\breve{\mathscr{G}}}\rangle: \mathbb{D}(\mathbf{U}_\mathscr{G})\otimes\mathbb{Q}\longrightarrow\mathbf{K}_\mathscr{G}
\]
whose specialization a parallel weight 2 point $\mathrm{P}$ is $\mathrm{P}\circ\langle\ ,x_\infty\omega_{\breve{\mathscr{G}}}\rangle=\mathrm{P}(x_\infty)\cdot\langle\ ,\omega_{\breve{\mathscr{G}}_\mathrm{P}}\rangle: \mathrm{D}(\mathrm{U}_{\mathscr{G}^*_\mathrm{P}})\longrightarrow E_\wp$. \end{proposition} \begin{proof}
Follows from ($\ref{eq: dieudonne projection}$) and some work. \end{proof}
\subsection{Leveraging analytic $p$-adic $L$-functions} In this section we consider $L/\mathbb{Q}$ a real quadratic extensiom. We have to recall Proposition 10.1.1 of Kings-Loeffler-Zerbes for elliptic cuspforms.. Then we note that
\begin{proposition}
Given test vectors $(\breve{\mathscr{G}},\breve{\mathscr{F}})\in e_\mathscr{G}\overline{\mathbf{S}}^\mathrm{n.o.}(V^1(\mathfrak{Q}');\boldsymbol{\Lambda}_L)\times e_\mathscr{F}\overline{\mathbf{S}}^\mathrm{n.o.}(V^1(\mathfrak{N'});\boldsymbol{\Lambda}_\mathbb{Q})$ there is a homomorphism of $\mathbf{I}_{\mathscr{G},\mathscr{F}}$-modules
\[
\langle\ , x_\infty\omega_{\breve{\mathscr{G}}^*}\otimes\eta_{\breve{\mathscr{F}}^*}\rangle:\mathbb{D}(\mathbf{U})\longrightarrow \mathbf{Q}^\mathscr{F}_\mathscr{G}
\]
such that for all $\boldsymbol{\lambda}\in \mathbb{D}(\mathbf{U})$ and all weight 2 specializations $(\mathrm{P},\mathrm{Q})$ such that $\mathscr{F}_\mathrm{Q}$ is the nearly ordinary stabilization of an eigenform $\mathscr{F}^\circ_\mathrm{Q}$ of level $M$ we have
\[
\nu_{\mathrm{P},\mathrm{Q}}(\langle \boldsymbol{\lambda}, \omega_{\breve{\mathscr{G}}^*}\otimes\eta_{\breve{\mathscr{F}}^*}\rangle)=\frac{1}{?}\times\nu_\mathrm{Q}(x_\infty)\cdot\langle\nu_{\mathrm{P},\mathrm{Q}}(\boldsymbol{\lambda}), \omega_{\breve{\mathscr{G}}^*_\mathrm{P}}\otimes\eta_{\breve{\mathscr{F}}^*_\mathrm{Q}}\rangle.
\] \end{proposition} \begin{definition} Thanks to Proposition $\ref{prop: Greenberg condition}$ we can define $\boldsymbol{\kappa}^\mathsf{f}(\mathscr{G},\mathscr{F})\in\mathrm{H}^1(\mathbb{Q}_p,\mathbf{V}^\mathscr{F}_\mathscr{G}(M))$ as the projection of the local class $\boldsymbol{\kappa}(\mathscr{G},\mathscr{F})$ to $\mathbf{V}^\mathscr{F}_\mathscr{G}(M)$. If we apply the machinery to two isolated Hida families $\mathscr{G},\mathscr{F}$ and obtain a motivic $p$-adic $L$-function \[ \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\breve{\mathscr{F}},x_\infty):=\langle \boldsymbol{\cal{L}}^\mathscr{F}_\mathscr{G}(\boldsymbol{\kappa}^\mathsf{f}(\mathscr{G},\mathscr{F})), x_\infty\omega_{\breve{\mathscr{G}}^*}\otimes\eta_{\breve{\mathscr{F}}^*}\rangle\in\mathbf{Q}^\mathscr{F}_\mathscr{G}=\mathbf{I}_\mathscr{G}\widehat{\otimes}\mathrm{Frac}(\mathbf{I}_\mathscr{F}) \] and we compare it with the analytic twisted triple product $p$-adic $L$-function $\mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\breve{\mathscr{F}})\in\mathbf{Q}^\mathscr{F}_\mathscr{G}$ of \cite{Michele}, or better of the revised version of that paper. Here is the crucial theorem. \end{definition} \begin{lemma}
The homomorphism
\[
\boldsymbol{\Theta}:\boldsymbol{\Lambda}_L\widehat{\otimes}\boldsymbol{\Lambda}_\mathbb{Q}\longrightarrow \boldsymbol{\Lambda}_L,\qquad[z,a]\otimes[w,b]\mapsto[zw^{-1},ab^{-1}] \] satisfies $\mathrm{P}_{2t,t,\psi,\psi'}\circ\boldsymbol{\Theta}=\mathrm{P}_{2t,t,\psi,\psi'}\otimes \mathrm{Q}_{2,1,\psi^{-1}_{\lvert\mathbb{Q}},\psi^{'-1}_{\lvert\mathbb{Q}}}$ for all $\psi:\mathrm{cl}^+_L(\mathfrak{Q}p^\alpha)\to O^\times,\psi':(\cal{O}_L/p^\alpha)^\times\to O^\times$. \end{lemma}
\begin{theorem}
There is a non-zero element $\Xi$ of $\cal{O}\llbracket\mathbf{W}_L\rrbracket$ such that $\Xi\cdot X_\infty\subset \mathrm{Frac}(\cal{O}\llbracket\mathbf{W}_L\rrbracket)$. \end{theorem} \begin{proof} We should be able to compute the values of
$\boldsymbol{\Theta}\big( \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\breve{\mathscr{F}},x_\infty)\big), \boldsymbol{\Theta}\big(\mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\breve{\mathscr{F}})\big)\in \mathrm{Frac}(\boldsymbol{\Lambda}_{L,\chi})$
at $\mathrm{P}_{2t,t,\psi,\psi'}$ for $\underline{\mathrm{all}}$ $\psi:\mathrm{cl}^+_L(\mathfrak{Q}p^\alpha)\to O^\times,\psi':(\cal{O}_L/p^\alpha)^\times\to O^\times$ (use $p$-adic Gross-Zagier formula) showing they have no poles there. Hence, we see $\boldsymbol{\Theta}\big( \mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\breve{\mathscr{F}},x_\infty)\big)$, $\boldsymbol{\Theta}\big(\mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\breve{\mathscr{F}})\big)$ as elements of $E_\wp\llbracket\mathbf{W}_L\rrbracket$ such that $
\boldsymbol{\Theta}\big(\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\breve{\mathscr{F}},x_\infty)\big)=x_\infty \boldsymbol{\Theta}\big(\mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\breve{\mathscr{F}})\big)$.
Moreover, if we write $\boldsymbol{\Theta}\big(\mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\breve{\mathscr{F}})\big)=\mathrm{Num}/\mathrm{Den}$, the denominator $\mathrm{Den}$ never vanishes at points of the form $\mathrm{P}_{2t,t,\psi,\psi'}$, i.e. $\mathrm{Den}\in E_\wp\llbracket\mathbf{W}_L\rrbracket^\times$. Hence,
\[
x_\infty\mathrm{Num}=\boldsymbol{\Theta}\big(\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\breve{\mathscr{F}},x_\infty)\big)\cdot\mathrm{Den}\in \mathrm{Frac}(\cal{O}\llbracket\mathbf{W}_L\rrbracket).
\] \end{proof} \begin{corollary}
Choose $x_\infty\in X_\infty$ such that $x_\infty\in \Xi\cdot X_\infty\subset \mathrm{Frac}(\cal{O}\llbracket\mathbf{W}_\mathfrak{Q}\rrbracket)$, then
\[
\frac{1}{x_\infty}\mathscr{L}_p^\mathrm{mot}(\breve{\mathscr{G}},\breve{\mathscr{F}},x_\infty)=\mathscr{L}^\mathrm{an}_p(\breve{\mathscr{G}},\breve{\mathscr{F}})\in\mathbf{Q}^\mathscr{F}_\mathscr{G}.
\] \end{corollary} \begin{proof}
Use $p$-adic Gross-Zagier formula on a dense subset of the weight space. \end{proof}
\subsection{Concrete Explanation} Let me try to explain the situation in the case of ordinary families, i.e. parallel weight forms. For any $x_\infty\in X_\infty$ we have a motivic and an analytic $p$-adic $L$-function \[ \mathscr{L}_p^\mathrm{mot}(X,Y), \mathscr{L}_p^\mathrm{an}(X,Y)\in O\llbracket X\rrbracket\widehat{\otimes}\mathrm{Frac}(O\llbracket Y\rrbracket) \] ($X$ is the variable for the Hilbert family, $Y$ for the elliptic family). My idea is to restrict them to a curve $Y=s(X)$ in $O\llbracket X,Y\rrbracket$ with the property $\varepsilon\circ s(X)=\varepsilon^{-1}\circ Y$ for any finite order character $\varepsilon:1+p\mathbb{Z}_p\to\bar{\mathbb{Q}}_p^\times$, so that the specializations of the Hida families have characters that cancels out. Then, we expect to have a $p$-adic Gross-Zagier formula for \emph{every} $p$-power finite order character. Those formulas first imply that $\mathscr{L}_p^\mathrm{mot}(X,s(X)), \mathscr{L}_p^\mathrm{an}(X,s(X))\in \mathrm{Frac}(O\llbracket X\rrbracket)$ are also elements of $E_\wp\llbracket 1+p\mathbb{Z}_p\rrbracket$, i.e. no poles at weight 2 points, and then that they satisfy the equality \[ \mathscr{L}_p^\mathrm{mot}(X,s(X))=x_\infty \mathscr{L}_p^\mathrm{an}(X,s(X))\quad \mathrm{in}\quad E_\wp\llbracket 1+p\mathbb{Z}_p\rrbracket=\prod_{n\ge 0} E_\wp[X]/(\Phi_{p^n}(X)). \] We deduce $x_\infty \mathscr{L}_p^\mathrm{an}(X,s(X))\in \mathrm{Frac}(O\llbracket X\rrbracket)$ so that $x_\infty\Xi\in \mathrm{Frac}(O\llbracket X\rrbracket)$ for some $\Xi\in O\llbracket X\rrbracket$ ($\Xi$ can be a distinguished polynomial), i.e. $\Xi\cdot X_\infty\subset \mathrm{Frac}(O\llbracket X\rrbracket)$. This is great news because now we can go back and choose $x_\infty$ in $\Xi\cdot X_\infty\subset \mathrm{Frac}(O\llbracket X\rrbracket)$. Then, the $p$-adic Gross-Zagier formulas at a dense subset of points (don't need all points here) gives \[ \frac{1}{x_\infty}\mathscr{L}_p^\mathrm{mot}(X,Y)=\mathscr{L}_p^\mathrm{an}(X,Y)\in O\llbracket X\rrbracket\widehat{\otimes}\mathrm{Frac}(O\llbracket Y\rrbracket). \] Finally, we can use the non-vanishing of the analytic $p$-adic $L$-function to prove the non-vanishing of the dual exponential of the specialization of the big cohomology classes.
\fi
\end{document} | arXiv |
de Longchamps point
In geometry, the de Longchamps point of a triangle is a triangle center named after French mathematician Gaston Albert Gohierre de Longchamps. It is the reflection of the orthocenter of the triangle about the circumcenter.[1]
Definition
Let the given triangle have vertices $A$, $B$, and $C$, opposite the respective sides $a$, $b$, and $c$, as is the standard notation in triangle geometry. In the 1886 paper in which he introduced this point, de Longchamps initially defined it as the center of a circle $\Delta $ orthogonal to the three circles $\Delta _{a}$, $\Delta _{b}$, and $\Delta _{c}$, where $\Delta _{a}$ is centered at $A$ with radius $a$ and the other two circles are defined symmetrically. De Longchamps then also showed that the same point, now known as the de Longchamps point, may be equivalently defined as the orthocenter of the anticomplementary triangle of $ABC$, and that it is the reflection of the orthocenter of $ABC$ around the circumcenter.[2]
The Steiner circle of a triangle is concentric with the nine-point circle and has radius 3/2 the circumradius of the triangle; the de Longchamps point is the homothetic center of the Steiner circle and the circumcircle.[3]
Additional properties
As the reflection of the orthocenter around the circumcenter, the de Longchamps point belongs to the line through both of these points, which is the Euler line of the given triangle. Thus, it is collinear with all the other triangle centers on the Euler line, which along with the orthocenter and circumcenter include the centroid and the center of the nine-point circle.[1][3][4]
The de Longchamp point is also collinear, along a different line, with the incenter and the Gergonne point of its triangle.[1][5] The three circles centered at $A$, $B$, and $C$, with radii $s-a$, $s-b$, and $s-c$ respectively (where $s$ is the semiperimeter) are mutually tangent, and there are two more circles tangent to all three of them, the inner and outer Soddy circles; the centers of these two circles also lie on the same line with the de Longchamp point and the incenter.[1][3] The de Longchamp point is the point of concurrence of this line with the Euler line, and with three other lines defined in a similar way as the line through the incenter but using instead the three excenters of the triangle.[3][5]
The Darboux cubic may be defined from the de Longchamps point, as the locus of points $X$ such that $X$, the isogonal conjugate of $X$, and the de Longchamps point are collinear. It is the only cubic curve invariant of a triangle that is both isogonally self-conjugate and centrally symmetric; its center of symmetry is the circumcenter of the triangle.[6] The de Longchamps point itself lies on this curve, as does its reflection the orthocenter.[1]
References
1. Kimberling, Clark, "X(20) = de Longchamps point", Encyclopedia of Triangle Centers.
2. de Longchamps, G. (1886), "Sur un nouveau cercle remarquable du plan du triangle", Journal de Mathématiques spéciales, 2. Sér. (in French), 5: 57–60. See especially section 4, "détermination du centre de Δ", pp. 58–59.
3. Vandeghen, A. (1964), "Mathematical Notes: Soddy's Circles and the De Longchamps Point of a Triangle", The American Mathematical Monthly, 71 (2): 176–179, doi:10.2307/2311750, JSTOR 2311750, MR 1532529.
4. Coxeter, H. S. M. (1995), "Some applications of trilinear coordinates", Linear Algebra and Its Applications, 226/228: 375–388, doi:10.1016/0024-3795(95)00169-R, MR 1344576. See in particular Section 5, "Six notable points on the Euler line", pp. 380–383.
5. Longuet-Higgins, Michael (2000), "A fourfold point of concurrence lying on the Euler line of a triangle", The Mathematical Intelligencer, 22 (1): 54–59, doi:10.1007/BF03024448, MR 1745563, S2CID 123022896.
6. Gibert, Bernard, "K004 Darboux cubic = pK(X6,X20)", Cubics in the Triangle Plane, retrieved 2012-09-06.
External links
• Weisstein, Eric W. "de Longchamps Point". MathWorld.
| Wikipedia |
\begin{document}
\begin{frontmatter}
\title{ \huge Broad-UNet: Multi-scale feature learning for nowcasting tasks}
\author{Jesús García Fernández} \ead{[email protected]}
\author{Siamak Mehrkanoon \corref{cor1}} \ead{[email protected]}
\cortext[cor1]{Corresponding author}
\address{Department of Data Science and Knowledge Engineering, Maastricht University, The Netherlands}
\begin{abstract} Weather nowcasting consists of predicting meteorological components in the short term at high spatial resolutions. Due to its influence in many human activities, accurate nowcasting has recently gained plenty of attention. In this paper, we treat the nowcasting problem as an image-to-image translation problem using satellite imagery. We introduce Broad-UNet, a novel architecture based on the core UNet model, to efficiently address this problem. In particular, the proposed Broad-UNet is equipped with asymmetric parallel convolutions as well as Atrous Spatial Pyramid Pooling (ASPP) module. In this way, The the Broad-UNet model learns more complex patterns by combining multi-scale features while using fewer parameters than the core UNet model. The proposed model is applied on two different nowcasting tasks, i.e. precipitation maps and cloud cover nowcasting. The obtained numerical results show that the introduced Broad-UNet model performs more accurate predictions compared to the other examined architectures.
\end{abstract}
\begin{keyword} Satellite imagery \sep Precipitation forecasting \sep cloud cover forecasting \sep Deep learning \sep Convolutional neural network \sep U-Net \end{keyword} \end{frontmatter}
\section{Introduction} Weather forecasting is an essential task that has a great influence on humans daily life and activities. Industries such as agriculture \cite{cogato2019extreme}, mining \cite{ivanov2019weather} and construction \cite{senouci2018impact} rely on the weather forecasts to make decisions and thus unexpected climatological events may result in large economic losses. Similarly, accurate weather forecasts improve safety on flights and roads and help us foresee potential natural disasters.
Due to its importance, precipitation nowcasting is becoming an increasingly popular research topic. This term refers to the problem of forecasting precipitation in the near future at high spatial resolutions. It is usually performed through satellite imagery and many different approaches have been proposed for this problem. Classical nowcasting approaches mainly focus on two methods: Numerical Weather Prediction (NWP) \cite{sun2014use} and extrapolation based techniques, such as Optical Flow (OF) \cite{woo2017operational}. NWP methods simulate the underlying physics of the atmosphere and ocean to generate predictions, so they require a vast amount of computational resources. In contrast, optical flow based methods identify and predict how objects move through a sequence of images. But they are unable to represent the dynamics behind them. In recent years, the massive amount of existing data has aroused research interest in data driven machine learning techniques for nowcasting \cite{shi2015convolutional, holmstrom2016machine, grover2015deep}. By taking advantage of available historical data, data-driven based approaches have shown better performance than classical ones in many forecasting tasks \cite{faloutsos2019classical}. Furthermore, while classical machine learning techniques rely on handcrafted features and domain knowledge, deep learning techniques automatize the extraction of those features. Recent advances in deep learning have shown promising results in diverse research areas such as neuroscience, biomedical signal analysis, weather forecasting and dynamical systems, among others \cite{webb2018deep,mehrkanoon2018deep,mehrkanoon2019deep,mehrkanoon2015learning,mehrkanoon2019cross, gamboa2017deep, salman2015weather, coban2018neuro, coban2013context}. Convolutional Neural Networks (CNNs) are the most popular algorithms used in computer vision \cite{voulodimos2018deep}, achieving the state-of-the-art in various tasks \cite{lu2007survey, voulodimos2018deep, goel2020state}. CNN architectures, such as AlexNet \cite{krizhevsky2017imagenet}, ResNet \cite{he2016deep} and InceptionNet \cite{szegedy2015going}, to name a few, mainly consist of the combination of convolutional and pooling layers. They are outstanding at classification, identification and recognition tasks. Among other architectures, autoencoders have emerged as one of the most powerful approaches in both supervised \cite{zhang2019light, berthomier2020cloud, fernandez2020deep} and unsupervised learning \cite{baldi2012autoencoders, lample2017unsupervised, chung2016audio} with the UNet \cite{ronneberger2015u} being one of the most versatile architectures. The UNet architecture was first proposed for medical image segmentation, but it has been employed in various domains \cite{trebing2021smaat, tao2017background, fernandez2020deep}. It consists of a contracting path, to extract features, and an expanding path, to reconstruct a segmented image, with a set of residual connection between them to enable precise localization. In our previous work \cite{fernandez2020deep}, we introduced various extended versions of the UNet for weather forecasting problem. In this paper, we further extend the best performing model in that work \cite{fernandez2020deep}, i.e. the AsymmIncepRes3DDR-UNet. In particular, motivated by the results of \cite{chen2017deeplab}, we augment the AsymmIncepRes3DDR-UNet's feature extraction capacity by incorporating an Atrous Spatial Pyramidal Pooling module (ASPP) \cite{chen2017deeplab} in the bottleneck of the network. The ASPP module works in line with the existing building blocks of our network (Multi-scale feature convolutional block), extracting multi-scale features in parallel and combining them. Therefore, unlike the original UNet, the proposed model is designed to capture multi-scale information. In addition, it keeps the temporal dimension unchanged along the encoder path and then reduces it before being concatenated with the output of every level in the decoder path. As a result, it can efficiently learn a mapping between 3-dimensional input data and 2-dimensional output data. Furthermore, we apply a kernel factorization in most of the convolutional operations of the model, resulting in a significant reduction in the total number of parameters compared to the original UNet while having improved performance. These techniques are explained in detail in the subsequent sections.
We further present an analysis of this multi-scale features extraction and the enhancement provided by the ASPP module. We show its versatility by applying it to two different nowcasting tasks, i.e. precipitation nowcasting and cloud cover nowcasting. In the precipitation nowcasting task, the model performs a regression of every pixel. In the case of cloud cover nowcasting, the model classifies each pixel as containing clouds or not. In addition, we directly compare the proposed model with the model introduced in \cite{trebing2021smaat}, a variation of UNet architecture that relies on depthwise-separable convolutions and includes a CBAM attention module \cite{woo2018cbam} at each level. While the model in \cite{trebing2021smaat} approximates the performance of the original UNet with a significantly reduced number of parameters, our model outperforms the original UNet with a reduced number of parameters.
\section{Related work} Traditionally, optical flow based models are the most popular techniques among classical methods for precipitation nowcasting tasks \cite{bowler2004development, li2018subpixel}. However, machine learning and deep learning based approaches are dominating this field of research in recent years. Due to the vast amount of available satellite imagery, powerful deep neural networks based models are suitable candidates that can be used to address various problems existing in this field. In particular, CNN based architectures have show their great ability to handle 2D and 3D images. Thanks to the versatility of CNN's, nowcasting problems can be tackled in different fashions. For instance, the authors in \cite{ayzel2019all} and \cite{agrawal2019machine} treated the multiple time-steps as multiple channels in the network. In this way, they could apply a simple 2D-CNN to perform the predictions. Additionally, the authors in \cite{shi2017deep} treated the multiple time-steps as depth in the samples. Thus they can apply a 3D-CNN and approximate more complex functions. As it has been shown in \cite{lebedev2019precipitation,agrawal2019machine,trebing2021smaat}, among the used CNN architectures, UNet is more suitable for this task, due to its autoencoder-like architecture and ability to tackle image-to-image translation problems.
In addition to CNN's, Recurrent Neural Networks (RNN's) have proved to be a robust approach. However, these architectures struggle to work with images but can capture long-range dependencies, an ability that CNN's can only partially achieve with the addition of attention mechanisms, such as self-attention \cite{vaswani2017attention}. In \cite{shi2015convolutional}, the authors introduce an architecture that combines both CNN's and RNN's strengths. They extend the fully connected LSTM (FC-LSTM) with convolutional structures, obtaining the Convolutional LSTM network (ConvLSTM). As a result, the proposed model captures spatiotemporal correlations better than the FC-LSTM model. The authors in \cite{shi2017deep} introduce the Trajectory GRU (TrajGRU) model as an extension of the ConvLSTM. This architecture keeps the advantages of the previous model and also learns the location-variant structure of the recurrent connections, showing superior performance than the other models compared. Nevertheless, these RNN models have not been directly compared with the UNet in nowcasting tasks. \\ The authors in \cite{berthomier2020cloud} make a comparison among different types of models for cloud cover nowcasting. In \cite{berthomier2020cloud}, the models under assessment are various versions of CNN's, RNN's, LSTM and UNet. The authors showed that the UNet model is the best performing model for the given cloud cover nowcasting task.
\section{Proposed model}\label{sec:proposedmodels} In this section, we introduce our Broad-UNet model. First, different elements that are used for building the network are presented. The complete architecture is then explained.
\begin{figure}
\caption{Multi-scale feature convolutional block. Convolutions with different kernels are performed in parallel to extract features at different scales. A residual connection also keeps some unmodified information.}
\label{fig:convBlock}
\end{figure}
\subsection{Multi-scale feature convolutional block}
Motivated by the goal of extracting features at different scales, the model contains a block consisting of parallel arms as shown in Fig \ref{fig:convBlock}. This block serves as the core building block of our network. Within this block, the data forks into parallel branches of convolutions with different kernel sizes, after going through an initial convolution. A $3\times3\times3$ convolution is followed by a set of parallel convolutions with $1\times1\times1$, $3\times3\times3$ and $5\times5\times5$ kernel sizes. The outputs of the different branches are then concatenated and merged with a $1\times1\times1$ convolution. Additionally, inspired by the results found in \cite{szegedy2016inception}, we keep some information intact alongside the parallel branches with a residual connection. Lastly, the output of the block is rectified with a ReLU activation function. To reduce the large number of features resulting from these branches, we factorize the convolutions as suggested in \cite{yang2019asymmetric}. That means a convolution $N \times N\times N$ decomposes into the three consecutive $1\times1\times N$, $1\times N\times1$ and $N\times1\times1$ convolutions. Hence, this sequence is an approximation of the original convolution with fewer parameters.
\begin{figure}
\caption{Atrous Spatial Pyramidal Pooling (ASPP) block. Different dilation rates allow the network to extract multi-scale information. Due to the kernel shapes ($1\times N\times N$) and the image-level pooling mechanism, only spatial information is extracted.}
\label{fig:ASPP}
\end{figure}
\subsection{Atrous Spatial Pyramid Pooling (ASPP)}
Atrous Spatial Pyramid Pooling (ASPP), is a mechanism used to capture multi-scale information. It consists of parallel branches of convolutions, similar to the convolutional block presented above. However, instead of using different kernel sizes, the same kernel is chosen with an increasing dilation rate (6, 12 and 18). In this kind of convolutions, the filter is upsampled by inserting zeros between successive values. As a result, they employ a larger field of view, without experiencing an explosion in the number of parameters. Further, it only extracts information in the spatial dimensions by applying a 2-dimensional filter (shape $1\times N\times N$). In addition, ASPP incorporates one branch to extract image-levels features, allowing to capture global context information. Here, we implement it by applying a global average pooling, and subsequent reshaping and upsampling back. The obtained extracted features are then concatenated and combined with a $1\times1\times1$ convolution. The scheme of this mechanism is shown in Fig. \ref{fig:ASPP}.
\subsection{Broad-UNet} \begin{figure*}
\caption{Complete architecture of Broad-UNet. The Multi-scale feature convolutional block is displayed as \textit{Conv. Block} for simplicity. The annotation over these blocks describes the output dimension, where \textit{T} represents the time-steps (lags), \textit{H} and \textit{W} represent the height and width of each image and \textit{F} the number of features or elements predicting.}
\label{fig:architecture}
\end{figure*} Thanks to the effectiveness of UNet architecture in solving image-to-image mapping tasks, it is chosen to serve as the basis to construct our model. UNet core model which was originally proposed for medical image segmentation tasks, adopts an autoencoder structure. While the encoder part extracts features from the input image, the decoder part performs classification on each pixel to reconstruct the segmented output. Plus, a set of residual connections between both parts allows a precise localization in the output image. Differently, our proposed Broad-UNet manipulates 3-dimensional data in the encoder and 2-dimensional data in the decoder. Thus, we can input several time-steps in the first dimension, and it outputs only one time-step in the same dimension. Multi-scale feature convolutional blocks are alternated with pooling operations in the encoder, resulting in five levels. The pooling is only performed in the spatial dimensions (2nd and 3rd) and implemented with a Max Pooling layer. In this way, the temporal dimension of the data remains unchanged. Then, the decoder follows a similar structure. It alternates multi-scale feature convolutional blocks and upsampling in the spatial dimensions. Additionally, we incorporate extra convolutions in the connections between different levels of the encoder and decoder. These intermediate convolutional operations aim to reduce the temporal dimension
from \textit{T} time-steps to 1.
To extend the multi-scale feature learning process, we combine the convolutional blocks with the ASPP module. It is placed in the bottleneck of the network, where the data has a highly abstract representation. In this way, we allow the network to capture more information from this representation without using larger kernels and more computational resources. Also, dropout is included in the bottleneck to force the network to learn a more sparse representation of the data and avoid possible overfitting. As a result, the network input is of shape $T \times H\times W \times F$ and output is of shape $1 \times H\times W \times F$, where \textit{T} is the number of time-steps (lags), \textit{H} and \textit{W} are the height and width of the images, and \textit{F} is the number of features or elements, which we consider as channels in our network. Here, the convolutions to reduce the temporal dimension have a kernel size $1 \times T\times T$ with valid padding. In addition, the use of asymmetric convolutions drastically reduces the total number of parameters of the network. While the number of parameters is $\sim$28 million using regular kernels $N \times N\times N$, the number of parameters after factorizing the convolutions into $1 \times 1\times N$, $1 \times N\times 1$ and $N \times 1\times 1$ is $\sim$11 million. The complete architecture of the model can be found in Fig. \ref{fig:architecture}. Furthermore, a comparison in the number of learnable parameters among different UNet based models examined in this paper is shown in Table \ref{fig:parameters}.
\begin{table}[!htbp]
\centering
\caption{Comparison between the number of learnable parameters in different UNet based models examined in this paper.}
\begin{tabular}{|c c|}
\hline
\textbf{Model} & \textbf{Number of parameters}\\ [0.5ex]
\hline\hline
UNet & $\sim$17M\\
\hline
SmaAt-UNet & $\sim$4M \\
\hline
AsymmIncepRes3DDR-UNet & $\sim$9.5M \\
\hline
Broad-UNet & $\sim$11M \\
\hline
\end{tabular} \label{fig:parameters} \end{table}
\section{Data description and preprocessing}\label{sec:datadescription} To assess the performance of our model, we apply it to two different datasets. Both of them consist of satellite images and are intended to tackle weather nowcasting problems. The first one includes precipitation maps, in which the value of each pixel shows the amount of rainfall in that region. The second one consists of cloud cover maps, in which the pixel values are binary and indicate whether there is a cloud or not in that region. Here, we recreate the same samples as in \cite{trebing2021smaat} for the first dataset, and as in \cite{berthomier2020cloud} for the second dataset. In this way, we can make a fair comparison with the results obtained in those research works. For reproducibility purposes, all our models and scripts are available on Github \footnote{\url{https://github.com/jesusgf96/Broad-UNet}}. Also, the datasets and pre-trained models are available upon request.
\subsection{Precipitation maps dataset} \label{subsec:precipdata} The first dataset, provided by the Royal Netherlands Meteorological Institute (Koninklijk Nederlands Meteorologisch Instituut, KNMI) \cite{KNMI}, includes rainfall measurements from two Dutch radar stations (De Bilt and Den Helder). These measurements are in the shape of images. The images cover the region of the Netherlands and neighbouring countries, spanning four years in 5-minutes intervals. To train and validate the models, we use data from the years 2016-2018 (80\% train/ 20\% validation), and the data from 2019 is used as test set.
The values of each pixel represent the accumulated amount of rainfall in the last five minutes. That means that a value \textit{n} represents $n \times 10^{-2} $ mm of rain per square kilometre. The resolution of the images is $765 \times 700$, and the measured region is circle-shaped with a large margin. Following the lines of \cite{trebing2021smaat}, we cropped the central squared area with size $288 \times 288$, as shown in Fig. \ref{fig:precipitationMap}.
\begin{figure}
\caption{Example of precipitation map from the first dataset, before and after applying the preprocessing.}
\label{fig:precipitationMap}
\end{figure}
Moreover, there is a high imbalance between pixels with rain and no rain, with plenty of images lacking raining pixels. Therefore, as in \cite{trebing2021smaat}, we filter the dataset choosing only the images with at least 50\% of pixels containing any amount of rain. This dataset is then used to create the training/validation/test samples. Additionally, we create a second dataset filtering the images with at least $20\%$ of pixels containing any amount of rain. From this second dataset, we use only the test set. Therefore, it serves as a way of testing our trained models under different conditions. We also normalize both datasets by dividing them by the highest value in the training set.
\subsection{Cloud cover dataset} \label{subsec:clouddata} The second dataset is the "Geostationary Nowcasting Cloud Type" classification product \cite{clouds} from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSTAT). It is composed of satellite images of the whole globe at longitude $0$ degrees, taken every 15 mins. The resulting size of the images in the dataset is $3712 \times 3712$, spanning the years 2017-2018. For multi-comparison purposes, we generate two different dataset from this data. In the first dataset, we follow the lines of \cite{berthomier2020cloud} and use data from 2017 and the first semester of 2018 as training set. Then the data from the second semester of 2018 is used for both validation and test. On the contrary, as in \cite{trebing2021smaat}, we use data from 2017 and the first semester of 2018 for train and validate our models (80\% train / 20\% validation). The data from the second semester of 2018 is thus used only for test. We use data from 2017 and the first semester of 2018 to train the models. To validate and test the models, we use data from the second semester of 2018. In this data, every pixel can have $15$ different values (1: Cloud-free land, 2: Cloud-free sea, 3: Snow over land, 4: Sea ice, 5: Very low clouds, 6: Low clouds, 7: Mid-level clouds, 8: High opaque clouds, 9: Very high opaque clouds, 10: Fractional clouds, 11: High semitransparent thin clouds, 12: High semitransparent meanly thick clouds, 13: High semitransparent thick clouds, 14: High semitransparent above low or medium clouds, 15: High semitransparent above snow/ice). However, following the lines of \cite{berthomier2020cloud}, we aim to perform a classification between cloud or no-cloud. Therefore, we group the labels from 1 to 4 into 0 (no-cloud) and the labels from 5 to 15 into 1 (cloud). Also, we crop the images according to the boundaries of France: [51.896, 41.104, -5.842, 9.842] (upper latitude, lower latitude, left longitude, right longitude). Then we apply a transformation to obtain a suitable projection and reshape the resulting image to $256\times256$ pixels. Fig. \ref{fig:cloudsImages}, displays an example of the described pre-processing steps.
\begin{figure}
\caption{Example of an image from the cloud cover dataset, before and after applying the different steps of the preprocessing.}
\label{fig:cloudsImages}
\end{figure}
\section{Experimental setup and evaluation} In order to have a fair comparison with the results obtained in \cite{trebing2021smaat} and \cite{berthomier2020cloud} for both datasets, we reproduce the same experimental setups. The data is arranged in such a way that the resulting input is a four-dimensional array $\mathcal{I} \in \mathbb{R}^{T \times H \times W \times F}$, where $T$ is the number of lags or previous time-steps, which corresponds to the time dimension. $H$ and $W$ refer to the size of the image and make up the spatial dimensions. The last element $F$ corresponds to the predicted features, which in both cases is $1$. We use TensorFlow to implement our models and train and evaluate them on the given datasets. The hyperparameters of models are tuned and the optimal ones are empirically found and used.
\subsection{Precipitation maps nowcasting} \label{subsec:precipdataEval} As for the precipitation maps dataset, we apply the preprocessing and split the dataset as described in section \ref{subsec:precipdata}. We aim to predict a precipitation map $30$ minutes ahead or considering that the images are generated five minutes apart, six time-steps ahead. The number of lags, previous time-steps, is set to 12 which was emprically found to be the best one among othre tested lag values. The height and width of the images are 288 and 288, and the number of features in the input is 1, i.e. the precipitation maps. Therefore, the inputs of the model has the shape (12, 288, 288, 1), and output data has the shape (1, 288, 288, 1).
In this nowcasting task, we perform a regression of every pixel. Mean Squared Error (MSE) is used as the loss function and Adam optimizer to optimize it, with an initial learning rate of 0.0001. The batch size and the dropout rate are set to 2 and 0.5, respectively. We also implemented a checkpoint callback to monitor the validation loss. Thus the best performing model on the validation set is saved. We use MSE as the main metric to assess the performance of the model in this dataset. Furthermore, we also include additional metrics such as accuracy, precision and recall. Following the lines of \cite{trebing2021smaat}, in order to calculate these new metrics, we first create a binarized mask of the image, according to a threshold. This threshold is the mean value of the training set from the 50\% of rain pixel dataset. Hence, any value equal or over the threshold is replaced by 1, and any value under it is replaced by 0.
\subsection{Cloud cover nowcasting} \label{subsec:evalclouddata} Regarding the cloud cover dataset, we preprocess the data and split the dataset as described in section \ref{subsec:clouddata}. In this case, we predict six different time-steps: from 15 minutes to 70 minutes ahead, or from 1 to 6 time-steps ahead. Due to the architecture of our network, we train six different model. Thus each model predicts a different time-step. Here, the number of lag is set to 4, the height and width of the images are 256 and 256, and the number of input features is again 1, the cloud cover map. That means that the model receives input data with the shape (4, 256, 256, 1), and outputs data with the shape (1, 256, 256, 1). In this task, we perform binary classification of every pixel. The binary cross-entropy is used as the loss function in this case. We use Adam optimizer with an initial learning rate of 0.001. The batch size and the dropout rate are set to 8 and 0.5, respectively. Similarly, we implemented a checkpoint callback to monitor the validation loss. Thus the best performing model on the validation set is saved. Following the lines of \cite{berthomier2020cloud}, here we also use MSE as the metric to assess the performance of the model. First, we calculate the MSE between the ground truth and the raw prediction as the main metric. In this case, the values between 0 and 1 in the predictions indicate the probability of cloud occurrence in that region. In addition, we binarize the prediction of the network with a threshold of 0.5 to generate a second assessment with the MSE metric. We also include additional metrics, i.e. accuracy, precision and recall, to compare the performance of the Broad-UNet with the model introduced in \cite{trebing2021smaat}, which also uses UNet architecture as the basis. To calculate these new metrics, we first create a binarized mask of the image, using the value 0.5 as the threshold.
\section{Results}
\subsection{Precipitation maps nowcasting} In the precipitation maps prediction task, we compare the performance of the Broad-UNet with the persistence, a simple meteorological baseline used in forecasting, and different models over the test sets of two different datasets, i.e. 50\% of rain pixels and 20\% of rain pixels. These models are the UNet \cite{ronneberger2015u} and two variants \cite{trebing2021smaat, fernandez2020deep}. The MSE is the main metric used for this comparison, and it is calculated over the denormalized data. The additional metrics are computed over the binarized data, as described in section \ref{subsec:precipdataEval}. The performance of different models over the first precipitation maps dataset is shown in Table \ref{tab:resultsPrecipitation50}. In the same way, the performance of the models in the second precipitation maps dataset is listed in Table \ref{tab:resultsPrecipitation20}. From the obtained results, one can observe that the Broad-UNet achieved the lowest MSE score in both datasets. \begin{table}[!htbp]
\centering
\scriptsize{
\renewcommand{1.5}{1.5}
\caption{Test MSE and additional metrics values for the precipitation maps prediction task using the 50\% of rain pixels dataset. $\downarrow$ indicates that the optimal values are the smallest ones and $\uparrow$ indicates that the optimal values are the highest ones.}
\label{tab:resultsPrecipitation50}
\begin{tabular}{l l c c c}
\Xhline{3\arrayrulewidth}
&\multicolumn{4}{c}{\textbf{MSE 50\% of rain pixels dataset}} \\
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{Model}}&
\multirow{2}{*}{\textbf{ MSE $\downarrow$}} &
\multirow{2}{*}{\textbf{Accuracy $\uparrow$}} &
\multirow{2}{*}{\textbf{Precision $\uparrow$}} &
\multirow{2}{*}{\textbf{Recall $\uparrow$}}\\
& & & & \\\Xhline{3\arrayrulewidth}
\textbf{Persistance} & 2.48e-02 & 0.756 & 0.678 & 0.643 \\
\textbf{UNet} & 1.22e-02 & 0.836 & 0.740 & \underline{0.855} \\
\textbf{SmaAt-UNet} & 1.22e-02 & 0.829 & 0.730 & 0.850 \\
\textbf{AsymmIncepRes3DDR-UNet} & 1.11e-02 & \underline{0.858} & \underline{0.759} & 0.800 \\
\textbf{Broad-UNet} & \underline{1.08e-02} & 0.850 & 0.715 & 0.817 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
} \end{table} \begin{table}[!htbp]
\centering
\scriptsize{
\renewcommand{1.5}{1.5}
\caption{Test MSE and additional metrics values for the precipitation maps prediction task using the 20\% of rain pixels dataset. $\downarrow$ indicates that the optimal values are the smallest ones and $\uparrow$ indicates that the optimal values are the highest ones.}
\label{tab:resultsPrecipitation20}
\begin{tabular}{l l c c c}
\Xhline{3\arrayrulewidth}
&\multicolumn{4}{c}{\textbf{MSE 20\% of rain pixels dataset}} \\
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{Model}}&
\multirow{2}{*}{\textbf{MSE $\downarrow$}} &
\multirow{2}{*}{\textbf{Accuracy $\uparrow$}} &
\multirow{2}{*}{\textbf{Precision $\uparrow$}} &
\multirow{2}{*}{\textbf{Recall $\uparrow$}}\\
& & & & \\\Xhline{3\arrayrulewidth}
\textbf{Persistance} & 2.28e-02 & 0.827 & 0.559 & 0.543 \\
\textbf{UNet} & 1.11e-02 & 0.880 & \underline{0.666} & 0.782 \\
\textbf{SmaAt-UNet} & 1.11e-02 & 0.867 & 0.626 & \underline{0.801} \\
\textbf{AsymmIncepRes3DDR-UNet} & \underline{1.02e-02} & 0.893 & 0.621 & 0.767 \\
\textbf{Broad-UNet} & \underline{1.02e-02} & \underline{0.895} & 0.611 & 0.772 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
} \end{table} Two examples of $30$ minutes ahead prediction with the Broad-UNet are displayed in Fig. \ref{fig:precipImages}. The images on the first and second row are generated using the first and second precipitation dataset, respectively.
\begin{figure}
\caption{Broad-UNet precipitation prediction examples. The images in the first row are generated with the test set from the 50\% of pixels containing rain dataset. The images in the second row are generated with the test set from the 20\% of pixels containing rain dataset.}
\label{fig:precipImages}
\end{figure}
\subsection{Cloud cover nowcasting}
When applying the Broad-UNet to the second dataset, we compare its performance with the persistence and various models. These models are introduced and explained in \cite{berthomier2020cloud}. We perform this comparison with the results obtained from the test set of the cloud cover dataset. The used evaluation metrics are explained in section \ref{subsec:evalclouddata}. In Fig. \ref{fig:mseClouds}, we show the MSE obtained using the ground truth and the actual prediction.
Fig. \ref{fig:mseCloudsBin} depicts the MSE calculated with the ground truth and binarized prediction. From Fig. \ref{fig:mseClouds} and Fig. \ref{fig:mseCloudsBin}, one can notice that the Broad-UNet performance is superior in short-term forecasting. As the number of step-ahead increases, the gap between the performance of the proposed Broad-UNet and the classical UNet model decreases.
In addition, in Table \ref{tab:resultsCloudVariousMetrics}, we show the comparison between the Broad-UNet's and the model introduced in \cite{trebing2021smaat}. As in \cite{trebing2021smaat}, the metrics tabulated in this table are averaged over different time-steps (15-90 minutes ahead). From the obtained results one can observe that the Broad-UNet performs better than other compared models in three out of four used metrics.
\begin{figure}
\caption{Test MSE values of the different models for the cloud cover prediction task.}
\label{fig:mseClouds}
\end{figure}
\begin{figure}
\caption{Test MSE values of the different models for the cloud cover prediction task with binarized predictions.}
\label{fig:mseCloudsBin}
\end{figure}
\begin{table}[!htbp]
\centering
\scriptsize{
\renewcommand{1.5}{1.5}
\caption{Average of the MSE and additional metrics for the cloud cover prediction task. $\downarrow$ indicates that the optimal values are the smallest ones and $\uparrow$ indicates that the optimal values are the highest ones.}
\label{tab:resultsCloudVariousMetrics}
\begin{tabular}{l l c c c}
\Xhline{3\arrayrulewidth}
&\multicolumn{4}{c}{\textbf{Averaged test MSE cloud cover prediction}} \\
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{Model}}&
\multirow{2}{*}{\textbf{ MSE $\downarrow$}} &
\multirow{2}{*}{\textbf{Accuracy $\uparrow$}} &
\multirow{2}{*}{\textbf{Precision $\uparrow$}} &
\multirow{2}{*}{\textbf{Recall $\uparrow$}}\\
& & & & \\\Xhline{3\arrayrulewidth}
\textbf{Persistance} & 0.1491 & 0.851 & 0.849 & 0.849 \\
\textbf{UNet} & 0.0785 & 0.890 & 0.895 & 0.919 \\
\textbf{SmaAt-UNet} & 0.0794 & 0.889 & 0.892 & \underline{0.921} \\
\textbf{Broad-UNet} & \underline{0.0783} & \underline{0.891} & \underline{0.898} & 0.914\\
\Xhline{3\arrayrulewidth}
\end{tabular}
} \end{table} In addition, two examples of the Broad-UNet's predictions are displayed in Fig. \ref{fig:cloudsImages}. Both predictions are generated with the test set of the cloud cover dataset. In Fig. \ref{fig:cloudsImages}, the images on the first and second row correspond to 30 mins and 90 mins ahead prediction, respectively.
\begin{figure}
\caption{Broad-UNet cloud cover prediction examples. The image above is predicted 30 mins ahead, and the image below is predicted 90 minutes ahead.}
\label{fig:cloudsImages}
\end{figure}
\section{Discussion}\label{sec:discussion}
\begin{figure*}
\caption{Feature maps outputted by different branches inside the first multi-scale feature convolutional block. Every row represents the output of a different branch. Next to each row, the kernel employed by the convolution in that branch is shown.}
\label{fig:featsMaps}
\end{figure*}
From the obtained results, one can observe that the multi-scale feature learning allows the Broad-UNet to perform more precise predictions. This is thanks to the use of different convolutional filters in parallel. By combining convolutions with larger and smaller kernels, the model considers different amounts of information around the same region to generate the feature maps. Likewise, the inclusion of the ASPP module in the architecture allows the network to apply convolutions with diverse receptive fields at the same time.
In the precipitation nowcasting task, we can observe an 11\% and an 8\% improvement with respect to the simple UNet for both datasets. In the cloud cover nowcasting task, the binarized predictions of the Broad-UNet are 7\% more accurate than the simple UNet for 15 minutes ahead predictions, and 1\% more accurate for 90 minutes ahead predictions. Since in the first nowcasting task, i.e. precipitation prediction, the model aims to perform a regression of each pixel with a wide range of values, achieving accurate forecasting or equivalently lower MSE values is more desirable. That is where the Broad-UNet shows more superior performance respect to the UNet. In the second nowcasting task, where the goal is to carry out a binary classification on each pixel, Broad-UNet performs slightly more accurate predictions than the UNet. While the immediate predictions (i.e. 15 and 30 mins ahead) are more precise, more distant predictions (more than 45 mins ahead) are comparable to UNet's predictions. Therefore, we can state that the wide building blocks of the Broad-UNet let the network to extract the spatial and short-term temporal information more accurately than the regular UNet.
The learnt feature maps in different branches inside a convolutional block is shown in Fig. \ref{fig:featsMaps}.
The chosen convolutional block is the first one so that the data doesn't have too abstract representation and is thus easier to interpret. The image fed to the network belongs to the precipitation maps dataset, and it is shown in Fig. \ref{fig:origImgFeatsMaps}. In Fig. \ref{fig:featsMaps}, the first row of the feature maps is the output of the convolutional branch with kernel size 1. The second row corresponds to the output of the branch with kernel size 3x3x3. Lastly, the third row corresponds to the output of the branch with kernel size 5x5x5. From Fig. \ref{fig:origImgFeatsMaps}, one can observe the differences between the features extracted in each branch. The convolutions with kernel size 1 seem to strengthen detailed differences in the image, and convolutions with kernel size 3x3x3 seem to accentuate differences between areas containing a high and a low rain concentration. In addition, convolutions with kernel size 5x5x5 seem to highlight regions with high rain concentration. \begin{figure}
\caption{Image fed to the network to generate the feature maps shown below. It belongs to the precipitation maps datasets, specifically to th 50\% of rain pixel dataset.}
\label{fig:origImgFeatsMaps}
\end{figure} \section{Conclusion}\label{sec:conclusion} In this paper the Broad-UNet, an extension of the UNet architecture, is introduced for precipitation as well as cloud cover nowcasting.
Thanks to the combination of the multi-scale feature convolutional block and the incorporation of the ASPP module, the proposed network is able to capture multi-scale information. In addition, the use of factorized kernels drastically reduces the number of parameters in the network compared to classical UNet model The performance of Broad-UNet is examined for addressing two nowcasting problems. The first problem consists of predicting precipitation maps 30 mins ahead. The second one consists of forecasting cloud cover 15 to 90 mins ahead.
The obtained results suggest that the Broad-UNet extracts features more efficiently and therefore performs more accurate predictions in short-term nowcasting tasks compared to other tested UNet based models.
\section*{Acknowledgment} We would like to thank Léa Berthomier from Meteo France for providing the cloud cover dataset, as well as the required code to preprocess it.
\end{document} | arXiv |
Towards pixel-to-pixel deep nucleus detection in microscopy images
Fuyong Xing ORCID: orcid.org/0000-0003-0982-86751,
Yuanpu Xie2,
Xiaoshuang Shi2,
Pingjun Chen2,
Zizhao Zhang3 &
Lin Yang2,3
This article has been updated
The Correction to this article has been published in BMC Bioinformatics 2019 20:509
Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, tracking, etc. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored for specific datasets and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but there are still several critical, open questions to be addressed.
We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which consist of 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs might be usually necessary for nucleus detection. Although the images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require model fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and other/non-target data does not always mean a higher accuracy of nucleus detection, and it might require proper data manipulation during model training to achieve good performance.
We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report a few significant findings, some of which might have not been reported in previous studies. The model performance analysis and observations would be helpful to nucleus detection in microscopy images.
Nucleus/cell detection is usually a prerequisite for nuclear/cellular morphology computation in microscopy and digital pathology image analysis. It can enable quantitative information measurement to better understand the biological system or disease progression [1–3]. Manual assessment of object detection is labor intensive or even impossible due to the large amount of collected image data, which is rapidly increasing [4, 5], and thus many computerized methods have been developed for microscopy image computing [6–8]. In particular, machine learning techniques have been widely used to detect individual nuclei or cells in various microscopy images. Nevertheless, conventional learning methods heavily rely on appropriate data representations, which often require sophisticated expertise and domain knowledge, to achieve desired detection accuracies. In microscopy imaging, it is not unusual to generate images that exhibit significant appearance variation (e.g., staining, scale, etc.) in a single set of experiments such that designing appropriate data representations would be very difficult. Furthermore, it might be necessary to re-design image representations for each new dataset, and this is a non-trivial task. Therefore, most methods solve the detection problem only in a limited context or require substantial effort to adapt the models to new situations [9].
Recently, deep neural networks (DNNs) have powered many aspects in computer vision and attracted considerable attention in biomedical image computing [10]. Instead of relying on non-trivial image representation engineering, DNNs directly deal with raw image data and automatically learns the representations for different tasks. Compared with hand-crafted image features, learned representations require slight or no human intervention and can better capture intrinsic information for image description [11, 12]. DNNs have been applied to nucleus/cell detection in different types of microscopy images, leading to improved performance compared to other methods [13]. However, DNNs usually require a large number of training data, which might be often unavailable in the medical domain. In particular, supervised models like convolutional neural networks (CNNs), which are the most widely used for object detection in microscopy image analysis, need massive individual object annotation that is more expensive to obtain. Even though a sufficient number of annotated images are available on one specific dataset, it is currently common to annotate new target training images, i.e., label the locations of individual nuclei or cells, and re-train the models when applying them to other datasets.
It has been witnessed that CNNs can produce very powerful generic descriptors for visual recognition tasks [14, 15]. Feature representations extracted from CNNs, which are trained on large-scale image datasets such as ImageNet [16], are readily applicable to different tasks on a diverse set of datasets [17, 18]. ImageNet pre-trained CNNs are also fine-tuned or used as feature extractors on medical image datasets [19–21]. However, there is very limited literature covering deep model adaptation and evaluation for nucleus/cell detection on a wide range of microscopy image data. Although [22] learns a CNN architecture with multiple-organ tissue images, there are still several important, open questions to be answered. Another single CNN is trained with both magnetic resonance (MR) and computed tomography (CT) image data for multi-task image segmentation [23], but the conclusion might not be applicable to nucleus detection in microscopy images because of different imaging modalities and tasks. In addition, a large amount of previous work applies CNNs to object recognition with a sliding window strategy, which might not be computationally efficient for nucleus localization in high-dimensional pathology and microscopy images containing hundreds or thousands of nuclei or cells.
In this paper, we seek to answer two critical questions that have not been systematically studied yet: 1) Are deep nucleus detection models trained with one microscopy image dataset (i.e., images from one type of organ) applicable to other datasets (i.e., images from other organs), which are generated using the same staining technique and microscopy imaging protocol (see Fig. 1)? 2) For one specific organ dataset, will the use of image data from other organs for model training be helpful for a detection performance improvement? To this end, we present and extensively evaluate an end-to-end, pixel-to-pixel U-Net-like network (see Fig. 2) for nucleus detection in large-scale public pathology image datasets, The Cancer Genome Atlas (TCGA) [24]. In summary, the contributions are three-fold:
Sample images from different organs. Row 1 (from left to right): adrenal gland, bladder and breast; row 2: cervix, colorectum and eye. More details of data description can be found in the Results section
Network architecture. The black or red boxes denote feature maps, and the number of feature maps in each layer is also provided. The connections with different colors between feature maps represent distinct operations
1) We observe that for a specific target dataset, training with images from the same types of organs might be usually necessary for nucleus detection. Although the images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desired results and would require model fine-tuning to be on a par with those trained with target data.
2) We demonstrate that training with a mixture of target and other/non-target data does not always mean a higher accuracy of nucleus detection. A naive pooling of these two types of data might not be beneficial compared with target data alone, but learning with proper dataset balancing via loss function re-weighting could improve nucleus detection.
3) We conduct extensive experiments on 23 types of organ images from the publicly available TCGA pathology archive, which covers image data from different organs/cancer diseases and distinct institutions. We believe the findings from this systematic case study would be helpful to nucleus/cell detection in microscopy and digital pathology images.
Deep networks have been successfully applied to medical image computing in different kinds of imaging modalities [25–27]. They have proven to be very effective in various image analysis tasks such as disease classification, lesion detection, object segmentation, image registration, tumor detection, etc [28–33]. DNNs also draw increasing attention in microscopy image analysis and a recent review can be found in [13]. Nucleus/cell detection, which is a critical step of image quantification in digital pathology and cell biology, is getting involved with deep learning and improved performance is emerging. Although different DNN architectures are used in medical image computing, CNNs and their variants are the dominant deep networks for object detection in microscopy images [13].
One straightforward method for nucleus/cell detection with DNNs is to conduct pixel-wise binary classification. Cireşan et al. [34] learn multiple CNNs with two types of small image patches (mitotic nuclei or not) and perform mitosis detection using a sliding window in hematoxylin and eosin (H&E) stained breast cancer images. Another CNN-based mitosis detection approach is presented in [35], and the difference is that it allows noisy data annotation by dealing with a data aggregation in the learning process. CNNs are also applied to nucleus/cell detection in other organ/tissue screening microscopy images such as brain, pancreas, bowel and circulatory systems [36–40]. Recently, a three-class CNN [22], which explicitly models nuclear boundaries, is introduced to detect nuclei in H&E stained images acquired from multiple organs. Stacked auto-encoder [41] is also applied to nucleus detection in breast cancer images, and it is first trained via unsupervised learning and then fine-tuned towards individual object detection. All of these approaches conduct pixel-wise prediction in a sliding window manner, which would be computationally expensive for high-dimensional images such as whole-slide scanned data.
In order to accelerate the algorithms, DNNs can be utilized to classify only nucleus/cell proposals instead of all image pixels. Dong et al. [42] apply CNNs to cell detection on region candidates, which are generated by a shallow model, support vector machine (SVM); Shkolyar et al. [43] first use simple image processing techniques to extract mitosis proposals and then exploit CNNs to conduct mitotic cell detection; Liu and Yang [44] assign CNN-predicted scores to cell candidates and then solves an inter linear programming problem for final cell detection in pancreatic neuroendocrine tumor (NET) and lung cancer images. Instead of relying on shallow models to extract regions of interest, Chen et al. [45] take advantage of fully convolutional networks (FCNs) followed by standard CNNs for mitosis detection. These methods avoid the expensive pixel-wise CNN predictions, but they require proper candidate collection, which might be usually challenging for histopathological images. Alternatively, a sparse kernel technique is incorporated into CNNs to reduce redundant computation [46, 47], and it has been applied to cell detection in lung cancer images.
Instead of performing independent pixel-wise classification, CNNs can take advantage of spatial topology to perform regression-based detection. Xie et al. [48] have replaced the classification layer with a structured regression in a conventional CNN such that the prediction can take into consideration adjacent information in the label space. This approach has been successfully applied to cell detection in multiple datasets including NET, breast cancer, and cervical cancer images. Another similar CNN-based spatial regression is presented in [49] for nucleus detection in colon cancer images and a CNN-based voting method is reported in [50], which learns an implicit codebook based on neighboring information for cell localization in NET pathology images. Regression modeling is also formulated with FCNs [51], which allow arbitrary-sized image inputs and enable efficient model inference, for fast cell detection in microscopy images [52, 53]. More recently, an FCN network with two sibling branches is proposed for simultaneous nucleus detection and classification [54] and the joint learning allows both tasks to benefit from each other. Another FCN-based cell detection method can be found in [55], where it introduces deconvolutional layers to the ResNet [56] such that the output probability map has an identical dimension as the input image.
We implement the model with PyTorch [57] on a PC machine with a 3.50 GHz Intel i7 CPU and an Nvidia GeForce GTX 1080 Ti GPU. We train the model using stochastic gradient descent with Nesterov momentum and set the parameters as: learning rate=0.01, momentum=0.9, weight decay= 10−6, batch size=4 and number of iterations= 105. The learning rate will decrease by a factor of 10 if the performance on the validation sets does not improve for 104 iterations until it is smallar than 10−4. We set α=3, d=15 in Eq. (1) and λ=5 in Eq. (2). Following [52, 53], we scale the proximity values by a factor (i.e., 5) to facilitate training. The hyperparameter of the exponential linear unit (ELU) is set as 1. Dropout [58] with a rate of 0.5 is used after the convolution operations in the last two residual blocks of the downsampling path.
For model training, we randomly crop four 200×200×3 image patches from each training image to form the training sets. We normalized the patches by stracting mean and dividing standard deviation in each image channel. We adopt data augmentation including random rotation, shifting, mirroring and elastic distortion to prevent overfitting. In order to save storage space, we dynamically crop image patches within each iteration.
We collect 23 types of H&E stained tissue image data from the public TCGA Research Network [24], with each containing 50 images and in total 1128 images (only 35, 44 and 49 images available for bile duct, lymph nodes and stomach respectively), one per patient. Each category corresponds to one specific organ/cancer disease, and it covers image data from multiple institutions. Thus, in total we have 23 different image datasets. For one patient of each dataset, only one 500×500×3 image patch (in this paper, we simply use images for description) is cropped from the whole-slide image, which is generated with digital microscopy imaging at 40x magnification. A few example images are displayed in Fig. 1, which exhibit significant challenges including background clutter, inhomogeneous intensity, nucleus touching/overlapping, scale variation, etc. For each dataset, we randomly split the image data into two halves, one for training and the other for testing. We further randomly select 20% of training data (i.e., 5 images) as a validation set. There is no overlapping between any two sets of training, validation and testing. For all the images, the gold-standard nucleus centers are manually annotated.
Evaluation metrics
We use the evaluation metrics [52] in the experiments. Specifically, we define the circular region with a 16-pixel radius centered at each annotated nucleus centroid as its gold-standard region. For a test image, the detected points within the gold-standard regions are matched with corresponding annotated nucleus centroids by using the Hungarian algorithm [59], which is to find an assignment of detections to annotations with a minimal cost. The cost of a detection to a human (or gold-standard) annotation is defined as the Euclidean distance between these two points. After the assignment, the detections matched with human annotations are considered true positives (TP), and those detections not matched with any annotations are false positives (FP). The human annotations that do not have matched detections are viewed as false negatives (FN). Based on these definitions, we report detection accuracy using precision (P), recall (R) and F1 score: \(P=\frac {TP}{TP+FP},\ R=\frac {TP}{TP+FN},\ F_{1}=\frac {2\times P\times R}{P + R}\).
Nucleus detection evaluation
Baseline experiments
To set up a baseline, we train the proposed FCN regression model, referred to as MicroNet, on all the 23 datasets and compare it with other recent state-of-the-art deep methods such as FRCN [52], FCRNA [53], FCRNB [53], U-Net [60] and FCN [51]. Here we select these pixel-to-pixel learning and inference models for a fair comparison. We evaluate each method on the testing sets, where the optimal value for ξ is determined by calculating the best F1 score on the validation set in each dataset. Additionally, we measure the detection using the Euclidean distance (ED) between TP and matched gold-standard annotations. Table 1 shows the mean and standard deviation of each metric for different methods over all the 23 datasets. As we can see, MicroNet produces the highest F1 score and the lowest ED. In particular, MicroNet outperforms FCRNB, U-Net and FCN by a large margin in terms of the F1 score, which is a unary measurement for object detection. Interestingly, the pixel-wise classification models, U-Net and FCN, produce significantly lower recall compared with the regression models, probably due to a high FN rate. FCRNA and FCRNB exhibit better recall but lower precision, and the recent deep regression model FRCN provides a good tradeoff between precision and recall. MicroNet provides a slightly better F1 score than FRCN but a much lower ED, which means MicroNet can deliver more accurate nucleus localization. These observations demonstrate that MicroNet is readily suitable for further study. Figure 3 shows qualitative results of nucleus detection using MicroNet on several example images.
Qualitative results of MicroNet. Nucleus detection is marked with green dots on example images. These images exhibit the difficulty of nucleus localization due to significant challenges, such as background clutter, inhomogeneous intensity, object touching/overlapping, dense object clustering, scale and shape variations of objects, etc
Table 1 Nucleus detection (mean ± standard deviation) over the 23 datasets in terms of precision, recall, F1 score, and Euclidean distance
Generalization on different datasets
For each dataset, we train one individual MicroNet model and apply it to the testing sets from both the same and other datasets. In other words, we test MicroNet using images from not only the same types of organs but also distinct ones, which are not used for model training. Figure 4 shows the F1 score of MicroNet on each individual dataset. For most datasets, models trained and tested on the same categories of organs, denoted by MicroNetsame, produce better detection accuracies than those trained and tested on different organ images, denoted by MicroNetdiff. Interestingly, for adrenal gland, bile duct, kidney, lymph nodes, and pleura datasets, there are models trained on certain other organ images providing slightly higher F1 scores than MicroNetsame; however, MicroNetsame produces competitive performance to the best models on these datasets. We also observe that for each individual dataset, the F1 score of MicroNetsame is much higher than the average F1 score of MicroNetdiff across all the other datasets, and many MicroNetdiff models provide much lower accuracies than MicroNetsame. This suggests for one specific dataset, learning with other organ images is not necessary to deliver desired nucleus detection results, although all the tissue images are generated with H&E staining and digital microscopy. The precision-recall curves of MicroNet models on all 23 datasets are provided in Additional file 1: Figure S1.
The F1 score of MicroNet on different data. Blue stars denote models trained and tested on the same datasets (MicroNetsame), i.e., images from the same types of organs across different patients. The boxes represent models trained on one dataset but tested on another and the orange triangles denote the average performance (MicroNetdiff-average) over these models
We further explore whether training on one dataset can be beneficial to nucleus detection on other datasets via model fine-tuning. To this end, we compare MicroNet fine-tuning to learning from scratch on different datasets. Due to the large number of combinations from the entire set of all data, we choose a subset of data to conduct experiments. Specifically, we randomly select 3 target datasets and 6 base datasets, each two corresponding to one target dataset. Based on Fig. 4, we also choose the non-target datasets that produce the highest and lowest F1 scores for each target dataset as two additional base training sets, as shown in Table 2. For simplifying descriptions, these two types of datasets are called the best and worst base datasets for each target data respectively. We train one MicroNet model on each base dataset and then fine-tune it towards the corresponding target data. Here we fine-tune the entire neural network instead of freezing some layers. We compare these models to those directly learned from scratch on target data in the first row of Fig. 5. We note that model fine-tuning can perform as good as learning from scratch with much less training time, no matter from which base dataset it conducts the fine-tuning. It means model fine-tuning might have a lower requirement of the number of iterations for training convergence.
Performance of MicroNet fine-tuning. From top to bottom, each row represents the F1 score of model fine-tuning with respect to the number of training iterations (row 1), the percentage of target training data (row 2), the stage of base models (row 3) and the number of fixed learning blocks during fine-tuning (row 4). For a comparison, the F1 score of learning from scratch is also provided in each plot
Table 2 Base datasets for model fine-tuning
The second row of Fig. 5 compares model fine-tuning to learning from scratch on different numbers (i.e., 10%, 20%, 40%, 60%, 80% and 100%) of target training data. With the increasing of target training data, both of these two learning strategies improve the nucleus detection accuracies, and this suggests learning with more target data is beneficial. More important, fine-tuning can achieve a desired detection accuracy with less target training data than learning from scratch. In particular, only 20% of eye target training images to obtain a 0.80 F1 score for fine-tuning from the uterus base dataset, while learning from scratch needs 80% of the eye training images. These experimental results demonstrate fine-tuning MicroNet could achieve a specific nucleus detection accuracy with limited training data. Therefore, it can reduce human effort for training data annotation and enable high-throughput image quantification when applying MicroNet to different datasets.
We also explore when it is ready for MicroNet model fine-tuning. If early transfer can provide competitive performance to fine-tuning from the optimal base models, it might significantly reduce the computational cost of training on base datasets. This would be particularly helpful when base datasets are large. Specifically, we take a snapshot of models trained on base datasets at every 2000 iterations and then fine-tune these base models towards corresponding target datasets, as shown in the third row of Fig. 5. We find that fine-tuning early-stage base models (e.g., no more than 6000 iterations) can provide similar performance to those learned from scratch. For most cases, early transfer is competitive to late transfer. Meanwhile, fine-tuning late-stage models from the best base datasets always outperforms learning from scratch in the selected datasets.
In order to evaluate the transferability of network layers on nucleus detection, we freeze the first several learning blocks during model fine-tuning. Figure 2 shows that in addition to the downsampling and upsampling paths, each of which has four residual learning blocks, MicroNet has one input (one convolutional operation) and one output (two convolutional operations) transition blocks. From input to output, we label all the blocks from 1 to 10. The fourth row of Fig. 5 demonstrates the F1 scores of model fine-tuning with keeping different blocks frozen. For most cases (except fine-tuning from esophagus towards soft tissue), fine-tuning only the last 2 or 3 blocks provides lower accuracies than learning from scratch. On the other hand, fine-tuning can improve the performance with less blocks fixed. When only the first 2 blocks are frozen, fine-tuning can compete with or even outperform learning from scratch.
Training with auxiliary datasets
In this experiment, we evaluate whether training with a mixture of different datasets is beneficial. To this end, we randomly choose 3 target datasets, i.e., cervix, colorectum and kidney, and mix each training data with corresponding auxiliary data. For one target dataset such as cervix, all the other 22 non-cervix datasets are pooled to form the auxiliary training data, from which we randomly select 1%, 2%, 5%, 10%, 20% and 50% to mix with the target training data respectively. We train one MicroNet for each of these mixed training data (denoted by MicroNetmix) and compare it to the one learned with only the target training set (denoted by MicroNettarget), as shown in the top row of Fig. 6. For cervix and colorectum datasets, the F1 score of MicroNetmix decreases as the increasing of auxiliary training data and becomes lower than that of MicroNettarget; for kidney, the score of MicroNetmix first decreases and then increases to be higher than that of the counterparts. These observations suggest that a mixture of target and non-target datasets might not be always helpful for nucleus detection on one specific target dataset, even though all the images are generated with the same microscopy imaging protocol and H&E staining technique.
Comparison between MicroNet learning with and without auxiliary data. For each target dateset (i.e., cervix, colorectum or kidney) in the top row, all its training data are mixed with different numbers of auxiliary training data (x-axis represents the percentage of auxiliary training data). In the bottom row, a fixed number (i.e., 5% and 50%) of auxiliary training data are used to mix with different numbers of target training data, as shown in the x-axis
In order to explore whether training with auxiliary data is helpful when target data are limited, we train multiple models with different numbers of target training images. For each of the aforementioned 3 target datasets, we randomly generate multiple training sets with 10%, 20%, 40%, 60%, 80% and 100% of the original training data respectively; meanwhile, we randomly select 5% and 50% of auxiliary training data and mix each with the generated target training data to form new training sets. The bottom row of Fig. 6 shows a comparison between training with and without auxiliary data. Clearly, for the 10%, 20% and 40% target training sets, training with only target data produces poor performance probably due to overfitting, while learning with a mixture of target and auxiliary data provides significantly better results. However, learning with too much auxiliary data (i.e., 50%) might overwhelm the target data such that the detection accuracy would decrease, as illustrated in the plot for the cervix dataset in Fig. 6.
In the previous experiments above, the auxiliary data can have a much larger size than the target data and dataset balancing during model training might be helpful for performance improvement. Thus, we further evaluate nucleus detection based on dataset balancing with weighting the loss. Specifically, we minimize the weighted sum of two losses, \(\mathcal {L}=\gamma \mathcal {L_{T}} + \mathcal {L_{A}}\), where \(\mathcal {L_{T}}\) and \(\mathcal {L_{A}}\) are target and auxiliary losses respectively, and both of them use the definition in Eq. (2). γ is a control parameter balancing the contributions from the two data sources. For each target dataset, we use all of its training images as the target training set and randomly select 5% and 50% of auxiliary training data as the auxiliary training set respectively. The top two rows of Fig. 7 show the precision-recall curves with respect to different γ values. We can see that for either 5% or 50% of auxiliary data, a small γ value (e.g., less than 1.0) leads to poor performance, especially for cervix and colorectum datasets, perhaps because model training mainly relies on the auxiliary data. The performance improves with the increasing of γ values. In addition, compared to the naive pooling (i.e., γ=1.0) of target and auxiliary training sets, learning with larger weights on target data (i.e., γ>1.0) can produce better nucleus detection. We also find that there is no significant performance variation for the kidney dataset with 50% of auxiliary data. This observation is consistent with those in Fig. 6, where models learned with auxiliary data is helpful for nucleus detection in kidney data and actually slightly outperforms the models trained with only target data. Figure 8 shows the F1 scores with different γ values. As expected, learning with more emphasis on auxiliary data, i.e., γ<1.0, leads to lower detection accuracies than those trained with naive data pooling or only target data (denoted by MicroNettarget). However, learning with a proper weighting of target data might improve the performance and be on a par with or even outperform MicroNettarget.
Precision-recall curves with different γ values on different training data. The top two rows represent 5% (row 1) and 50% (row 2) of mixed auxiliary data respectively, and the bottom two rows denote two different single-source auxiliary data respectively, which are the base datasets producing the lowest (row 3) and highest (row 4) F1 score on the target data in Fig. 4. Each curve is generated by varying the threshold ξ. x/y-axis represents recall/precision. For each curve label of "A+B", A and B represent target and auxiliary data respectively
The F1 score with different γ values on different training data. x/y-axis represents F1 score/ log(γ). For each curve label of "A+B", A and B represent target and auxiliary data respectively
We also evaluate learning with a single auxiliary data source and compare it to those with the mixed multi-source auxiliary data above. For each target data, i.e., cervix, colorectum and kidney, we select the base datasets that producing the lowest and highest F1 scores in Fig. 4 as two single-source auxiliary data: kidney/breast for cervix, kidney/cervix for colorectum, and colorectum/pancreas for kidney. The bottom two rows of Fig. 7 show the precision-recall curves with different γ values on these single-source auxiliary data. Similar to learning with multi-source auxiliary data, it exhibits poor performance when γ<1.0 and the detection accuracy improves as the increasing of γ values. Meanwhile, training with higher weights on target data (γ>5.0) leads to better performance. We also observe for single-source auxiliary data, learning with a proper γ value (e.g., larger than 1.0) might outperform those models learned with only target training data.
Effects of parameters
The parameter λ in Eq. (2) plays an important role on nucleus localization. We randomly select 3 datasets, i.e., lung, lymph nodes and pancreas, to evaluate its effects. Figure 9 shows the precision-recall curves with different λ values: λ=0, 0.005, 0.05, 0.5, 5 and 50. We do not include the performance for λ=500 and λ=5000 due to the exploding gradient problem. As we can see, the models with λ≤0.5 are outperformed by those with λ≥5.0; in particular, the model with λ=0, which indicates no additional penalty on the regions with nonzero values in the proximity maps, exhibits significantly worse performance than those with large λ values. This might be because for a single training image, a dominant portion of its proximity map has zero values and a penalty on the central regions of nuclei would enforce model learning to pay more attention to these nonzero-value regions. In this scenario, model inference would be encouraged to avoid trivial solutions and predict nonzero values on the central regions of nuclei.
Precision-recall curves with different λ values. Each curve is generated by varying the threshold ξ. x/y-axis represents recall/precision
Evaluation of other deep models
We further explore whethter other deep nucleus detection models exhibit similar performance. Here we choose a very recent state-of-the-art model, FRCN [52], to evaluate its generalization on different datasets. Specifically, we train one FRCN model for each type of organ data and apply it to nucleus detection on the same- and different-organ images. Figure 10 shows the F1 score of FRCN on all the datasets. Similar to MicroNet, FRCNsame produces better performance than many FRCNdiff on each dataset, and its F1 score is significantly higher than the average F1 score of FRCNdiff. This observation is consistent with that for MicroNet, i.e. training models with a specific dataset might not provide desired results on other datasets and learning with the same type of organ might be usually preferred. We also evaluate whether learning FRCN models with auxiliary datasets is helpful for nucleus detection. Following the experimental setting in Fig. 6 (the top row), we compare the models trained with target data only to those learned with mixed datasets, which contain all target training data and different numbers of auxiliary images. Figure 11 shows that learning FRCN models with a mixture of target and non-target data might not always improve nucleus detection, and this is also consistent with the study of MicroNet above.
The F1 score of FRCN on different data. Blue stars denote models trained and tested on the same datasets (FRCNsame), i.e., images from the same types of organs across different patients. The boxes represent models trained on one dataset but tested on another and the orange triangles denote the average performance (FRCNdiff-average) over these models
Comparison between FRCN learning with and without auxiliary data. For each target dateset (i.e., cervix, colorectum or kidney), all its training data are mixed with different numbers of auxiliary training data (x-axis represents the percentage of auxiliary training data)
On the basis of Figs. 4 and 5, we observe that models learned on one type of organ images might perform poorly in other organ datasets; however, model fine-tuning from other organ data can provide similar performance to those directly trained with target data from scratch in a relatively shorter period of time. In addition, fine-tuning models from the best base datasets requires slightly smaller iteration numbers than learning from the worst base datasets, e.g., 2000 versus 5000. For the target datasets, fine-tuning usually achieves stable performance within only 5000 iterations, while training from scratch needs over 15000 iterations. These observations show model fine-tuning is more efficient when base models trained with other datasets are available, and this condition might be usually satisfied in real applications. We also find that compared with learning from scratch and fine-tuning from the worst base datasets, fine-tuning from the best base datasets provide slightly better detection accuracies, especially when insufficient target training data are available. This suggests a proper selection of base datasets might be important for model fine-tuning from limited target data. Interestingly, the F1 score might decrease when fine-tuning late-stage models from the worst base datasets, as shown in soft tissue and thymus subplots (row 3 of Fig. 5). This might be because these base models are well trained and too specific to the base datasets.
From Figs. 6, 7 and 8, we find that learning with mixed target and auxiliary data might be not always beneficial. However, using a certain amount of auxiliary data to assist model training for nucleus detection might be helpful when only a small target training set is available. We also observe that for either multi-source or single-source auxiliary data, it is critical to balance the datasets during model training. Interestingly, learning with the best auxiliary data (row 4 of Fig. 7) provides better nucleus detection than the other single-source auxiliary learning (row 3 of Fig. 7), especially when γ<1.0, as shown in Fig. 8. This suggests it might be critical to choose auxiliary dataz when using a single auxiliary source, which has a similar size to the target set.
In this paper, we address several important but previously understudied questions on deep models for nucleus detection in microscopy images. We present and evaluate an end-to-end, pixel-to-pixel FCN model for nucleus detection on a wide variety of digital pathology image data, which cover 23 types of different organs/diseases. All images are H&E stained and digital microscopy at 40× magnification. The datasets are collected from multiple institutions and should be sufficiently diverse. We find that for a specific target dataset, i.e., images from one type of organ, training with images from different organs might not deliver desired results, even although the images are generated using the same staining technique and imaging protocol. Our experiments further demonstrate model fine-tuning or transfer learning is more efficient compared to training from scratch. To achieve a desired object detection accuracy, model fine-tuning requires less target training data or a smaller number of training iterations.
We also observe learning with auxiliary data might be helpful for nucleus detection, but it does not always mean higher accuracies. When there are limited target training data, a naive mixture of target and auxiliary data would be helpful since it could address the overfitting problem; however, this naive data pooling might not be always beneficial if sufficient target training data are available. On the other hand, learning with dataset balancing can provide better nucleus detection than training with a simple pooling of target and auxiliary data. With an appropriate data weighting, it would be able to provide competitive or even higher detection accuracies than training with only target data. We also show learning with more emphasis on the central regions of nuclei is helpful for nucleus detection in microscopy images.
Network architecture
Our model is shown in Fig. 2, which can be viewed as a variant of FCNs. It is mainly inspired by the residual regression network [56] and U-Net [60], and the major difference is that we aggregate different levels of contextual information for robust pixel-wise prediction. The network consists of four basic paths: downsampling, upsampling, concatenation and multi-context aggregation. The downsampling path aims at extracting hierarchical features from input images, while the upsampling path maps feature representations into the input space for dense prediction. In order to preserve the high-resolution information for object localization, low-layer feature maps from the downsampling path are copied and concatenated with corresponding representations in the upsampling path. Finally, a multi-context aggregation is introduced to ensemble contextual information such that the model can handle scale variation of nuclei.
The downsampling path consists of a stack of residual learning blocks [56], which learn feature representations via a residual mapping instead of the original underlying mapping such that gradients would not vanish in backpropagation. Specifically, a shortcut connection is used to realize an identity mapping, which is added to a non-linear mapping (i.e., convolution followed by an activation) for residual learning in an element-wise way. A strided convolution with a stride of 2 is exploited to downsample feature maps between adjacent residual blocks. Batch normalization [61] is applied after each convolution and strided convolution. Subsequently, an exponential linear unit (ELU) [62] is used for the non-linear transform after each batch normalization and element-wise addition. After the input transition block that consists of one convolutional layer, four residual blocks are stacked to learn high level abstraction information by following the rules [63]: 1) double the number of convolutional filters when the feature map size is halved (except for the last residual block) and 2) the filter size is set as 3×3 with the padding being 1 in each residual block. The upsampling path is also built with four cascaded residual blocks, but in a converse direction of the downsampling configuration. A transposed convolution [64] instead of a strided convolution is applied to the connection of residual blocks, aiming at increasing the resolution of learned high-level feature maps for pixel-wise prediction.
In order to compensate for high-resolution information loss due to strided convolutions, we use concatenation connections [60] to combine feature maps from both downsampling and upsampling paths. Specifically, downsampled outputs are copied and linked to corresponding upsampled outputs such that a successive layer can learn to fuse this information for precise object localization. It is worth noting that these feature maps might not be directly concatenated with each other, because the downsampling and upsampling layers could be not exactly symmetric. For instance, a 75×75 feature map after downsampling with a factor of 2 would have a dimension of 37×37 (without loss of generality, the floor operation is used); however, a 37×37 feature map after upsampling would have a size of 74×74. In order to preserve a proper output size, we pad upsampled outputs with zeros to match downsampled ones for information fusion.
Due to object scale variation, network prediction based on a single-sized receptive field might not well localize all the nuclei. Inspired by [65], we introduce a multi-context aggregation path to assemble different levels of feature maps for final pixel-wise prediction. Since downsampled outputs in different layers correspond to distinct-sized receptive fields and those in deeper layers have larger receptive fields, we can take advantage of this contextual information by combining the hierarchical feature representations. More specifically, we directly apply transposed convolutions to the downsampled outputs at certain layers (see Fig. 2) and aggregate the generated feature representations to form a multi-context feature map, which is fed to the output transition block (consisting of two convolutional layers) for final output prediction.
Model formulation
In this paper, nucleus detection is formulated as a regression problem. Compared to binary classification, regression modeling can employ additional context during the learning stage for more accurate detection [52, 66]. Our goal is to learn an FCN regressor to predict an identical-sized proximity map given an input image, where each predicted pixel value measures how proximal this pixel is to its closet nucleus center. To this end, we define gold-standard proximity maps (or structured labels) based on the Euclidean distance. For a w×h training image \(\mathbf {x}^{i}\in \mathbb {R}^{w\times h\times c}\) with human annotation of nucleus centers, where c is the number of image channels, we generate its corresponding proximity map \(\mathbf {y}^{i}\in \mathbb {R}^{w\times h}\) as follows
$$\begin{array}{@{}rcl@{}} y^{i}_{uv} = \left\{\begin{array}{ll} \frac{e^{\alpha(1-\frac{D^{i}_{uv}}{d})} - 1}{e^{\alpha} - 1} & \text{if}\,\, D^{i}_{uv} \leq d \\ 0, & \text{if otherwise}, \end{array}\right. \end{array} $$
where \(y^{i}_{uv}\) represents the value of yi at pixel (u,v), and \(D^{i}_{uv}\) denotes the Euclidean distance between pixel (u,v) and its closet annotated nucleus center. d is a distance threshold and α controls the proximity value decay. With this definition, the structured label has continuous values and only a small region (controlled by d) around the nucleus center has positive values, as shown in Fig. 12. In this way, the model can learn to predict higher values for pixels in the central regions of nuclei.
Proximity map generation. From left to right: the original image, manual annotation (green dots) of nucleus positions and proximity map, where the central regions (light blue) of nuclei have continuous, nonzero values and all the other regions are assigned zero values (dark blue)
Let Θ denote the FCN parameters to be learned and Φ(·) represent the nonlinear mapping from network inputs to outputs. Given a set of training images and corresponding proximity maps \(\{\mathbf {x}^{i}, \mathbf {y}^{i}\}_{i=1}^{N}\), we estimate the parameters by minimizing the prediction error between network outputs oi=Φ(xi;Θ) and gold-standard proximity maps yi, i=1,2,...,N. To optimize this problem, one straightforward choice of the objective loss function is the mean squared error (MSE); however, this loss might not be suitable for our case, because a dominant portion of each proximity map has zero values and a plain MSE might lead to a trivial solution such that the predictions for all the pixels can be simply assigned zeros [67]. Thus, we adopt a weighted MSE loss that enforces model learning to pay more attention to the central regions of nuclei. Formally, the loss for the i-th training image is defined as
$$\begin{array}{@{}rcl@{}} \mathcal{L}(\mathbf{o}^{i},\mathbf{y}^{i}) = \frac{1}{2} \sum_{(u,v) \in \mathbf{y}^{i}} (y^{i}_{uv} + \lambda \bar{y}^{i}) (o^{i}_{uv} - y^{i}_{uv})^{2}, \end{array} $$
where λ controls the weights of the losses for different image regions. \(\bar {y}^{i}\) is the mean value of yi and allows the model to automatically adjust the contribution of each individual image. In practice, the \(\mathcal {L}(\mathbf {o}^{i},\mathbf {y}^{i})\) is normalzied by the number of image pixels and the overall loss is the average over the entire training set.
The loss function is differentiable with respect to Θ and the FCN regression model is trained with the standard gradient-based backpropagation [68]. Denote ai the input of the last layer for training image xi. The derivative of (2) with respect if ai can be written as (assuming the sigmoid activation function is chosen in the last layer)
$$\begin{array}{@{}rcl@{}} \frac{\partial \mathcal{L}(\mathbf{o}^{i},\mathbf{y}^{i})}{\partial a^{i}_{uv}} = (y^{i}_{uv} + \lambda \bar{y}^{i}) (o^{i}_{uv} - y^{i}_{uv})a^{i}_{uv}(1-a^{i}_{uv}). \end{array} $$
The derivative of the loss function with respect to the network parameters can be calculated using the chain rule for model training. During testing, the model predicts a proximity map p for each unseen image. Those pixels with small values, i.e., less than ξ· max(p) where ξ∈[0,1], are suppressed. Thereafter, nucleus centers are localized by identifying local maxima on the processed proximity map.
Following publication of the original article [1], we have been notified of a few errors in the html version:
Convolutional neural network
DNN:
Deep neural network
ELU:
Exponential linear unit
FCN:
Fully convolutional network
False negative
FP:
H&E:
Hematoxylin and eosin
MSE:
Neuroendocrine tumor
SVM:
Support vector machine
TP:
True positive
Rittscher J. Characterization of biological processes through automated image analysis. Annu Rev Biomed Eng. 2010; 12:315–44.
Gurcan MN, Boucheron LE, Can A, Madabushi A, Rajpoot NM, Yener B. Histopatological image analysis: a review. IEEE Rev Biomed Eng. 2009; 2:147–71.
Irshad H, Veillard A, Roux L, Racoceanu D. Methods for nuclei detection, segmentation, and classification in digital histopathology: a review – current status and future potential. IEEE Rev Biomed Eng. 2014; 7:97–114.
Sommer C, Gerlich DW. Machine learning in cell biology teaching computers to recognize phenotypes. J Cell Sci. 2013; 126(24):5529–39.
Kothari S, Phan JH, Stokes TH, Wang MD. Pathology imaging informatics for quantitative analysis of whole-slide images. J Am Med Inform Assoc. 2013; 20(6):1099–108.
Xing F, Yang L. Robust nucleus/cell detection and segmentation in digital pathology and microscopy images: A comprehensive review. IEEE Rev Biomed Eng. 2016; 9:234–63.
Veta M, Pluim JPW, van Diest PJ, Viergever MA. Breast cancer histopathology image analysis: a review. IEEE Trans Biomed Eng. 2014; 61(5):1400–11.
Wang H, Xing F, Su H, Stromberg A, Yang L. Novel image markers for non-small cell lung cancer classification and survival prediction. BMC Bioinformatics. 2014; 15(1):310.
Meijering E, Carpenter AE, Peng H, Hamprecht FA, Olivo-Marin J-C. Imagining the future of bioimage analysis. Nat Biotechnol. 2015; 34:1250–5.
Greenspan H, van Ginneken B, Summers RM. Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans Med Imaging. 2016; 35(5):1153–9.
Goodfellow I, Bengio Y, Courville A. Deep Learning. 2016. Book in preparation for MIT Press. http://www.deeplearningbook.org. Accessed Dec 2017.
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015; 521(28):436–44.
Xing F, Xie Y, Su H, Liu F, Yang L. Deep learning in microscopy image analysis: A survey. IEEE Trans Neural Netw Learn Syst. 2018; 29(10):4550–68.
Donahue J, Jia Y, Vinyals O, Hoffman J, Zhang N, Tzeng E, Darrell T. Decaf: A deep convolutional activation feature for generic visual recognition. In: Int. Conf. Mach. Learn. Beijing: PMLR: 2014. p. 647–55.
Razavian AS, Azizpour H, Sullivan J, Carlsson S. Cnn features off-the-shelf: An astounding baseline for recognition. In: IEEE Conf. Comput. Vis. Pattern Recognit. Workshops. IEEE: 2014. p. 512–9. https://doi.org/10.1109%2Fcvprw.2014.131.
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, M. S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Li F-F. Imagenet large scale visual recognition challenge. Int J Comput Vis. 2015; 115(3):211–52.
Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: Euro. Conf. Comput. Vis. Springer International Publishing: 2014. p. 818–33. https://doi.org/10.1007%2F978-3-319-10590-1_53.
Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: IEEE Conf. Comput. Vis. Pattern Recognit. IEEE: 2014. p. 580–7. https://doi.org/10.1109%2Fcvpr.2014.81.
Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J. Convolutional neural networks for medical image analysis: Full training or fine tuning?IEEE Trans Med Imaging. 2016; 35(5):1299–312.
Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016; 35(5):1285–98.
Xu Y, Jia Z, Ai Y, Zhang F, Lai M, Chang EIC. Deep convolutional activation features for large scale brain tumor histopathology image classification and segmentation. In: IEEE Int. Conf. Acoustics, Speech, Signal Process. IEEE: 2015. p. 947–51. https://doi.org/10.1109%2Ficassp.2015.7178109.
Kumar N, Verma R, Sharma S, Bhargava S, Vahadane A, Sethi A. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans Med Imaging. 2017; 36(7):1550–60.
Moeskops P, Wolterink JM, van der Velden BHM, Gilhuijs KGA, Leiner T, Viergever MA, Isgum I. Deep learning for multi-task medical image segmentation in multiple modalities. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent, vol. 9901. Cham: Springer International Publishing: 2016. p. 478–86.
The Cancer Genome Altas. 2018. http://cancergenome.nih.gov/.
Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017; 19(1):221–48.
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal. 2017; 42:60–88.
Zhou SK, Greenspan H, Shen D. Deep Learning for Medical Image Analysis. Amsterdam, Netherlands: Elsevier Inc.; 2017.
Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017; 542:115–8.
van Grinsven M, van Ginneken B, Hoyng CB, Theelen T, Sánchez CI. Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images. IEEE Trans Med Imaging. 2016; 35(5):1273–84.
Dubost F, Bortsova G, Adams H, Ikram A, Niessen WJ, Vernooij M, De Bruijne M. GP-Unet: Lesion detection from weak labels with a 3d regression network. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 10435. Springer International Publishing: 2017. p. 214–21. https://doi.org/10.1007%2F978-3-319-66179-7_25.
Eppenhof KAJ, Pluim JPW. Supervised local error estimation for nonlinear image registration using convolutional neural networks. In: SPIE Med. Imaging 2017: Image Process., vol. 10133. SPIE: 2017. p. 1–6. https://doi.org/10.1117%2F12.2253859.
Khoshdeli M, Winkelmaier G, Parvin B. Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes. BMC Bioinformatics. 2018; 19(1):294.
Zhang Z, et al.Pathologist-level interpretable whole-slide cancer diagnosis with deep learning. Nat Mach Intell. 2019; 1:236–45.
Ciresan DC, Giusti A, Gambardella LM, Schmidhuber J. Mitosis detection in breast cancer histology images with deep neural networks. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 8150. Springer Berlin Heidelberg: 2013. p. 411–8. https://doi.org/10.1007%2F978-3-642-40763-5_51.
Albarqouni S, Baur C, Achilles F, Belagiannis V, Demirci S, Navab N. Aggnet: Deep learning from crowds for mitosis detection in breast cancer histology images. IEEE Trans Med Imaging. 2016; 35(5):1313–21.
Xing F, Xie Y, Yang L. An automatic learning-based framework for robust nucleus segmentation. IEEE Trans Med Imaging. 2016; 35(2):550–66.
Wang J, MacKenzie JD, Ramachandran R, Chen DZ. Neutrophils identification by deep learning and voronoi diagram of clusters. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent. Springer International Publishing: 2015. p. 226–33. https://doi.org/10.1007%2F978-3-319-24574-4_27.
Mao Y, Yin Z, Schober JM. Iteratively training classifiers for circulating tumor cell detection. In: IEEE Int. Symp. Biomed. Imag. IEEE: 2015. p. 190–4. https://doi.org/10.1109%2Fisbi.2015.7163847.
Veta M, van Diest PJ, Pluim JPW. Cutting out the middleman: Measuring nuclear area in histopathology slides without segmentation. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent. Springer International Publishing: 2016. p. 632–9. https://doi.org/10.1007/978-3-319-46723-8_73.
Khoshdeli M, Parvin B. Feature-based representation improves color decomposition and nuclear detection using a convolutional neural network. IEEE Trans Biomed Eng. 2018; 65(3):625–34.
Xu J, Xiang L, Liu Q, Gilmore H, Wu J, Tang J, Madabhushi A. Stacked sparse autoencoder (ssae) for nuclei detection on breast cancer histopathology images. IEEE Trans Med Imaging. 2016; 35(1):119–30.
Dong B, Shao L, Costa MD, Bandmann O, Frangi AF. Deep learning for automatic cell detection in wide-field microscopy zebrafish images. In: IEEE Int. Symp. Biomed. Imag. IEEE: 2015. p. 772–6. https://doi.org/10.1109/isbi.2015.7163986.
Shkolyar A, Gefen A, Benayahu D, Greenspan H. Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using convolutional neural networks. In: Annu. Int. Conf. IEEE Eng. Med. Biol. Society. IEEE: 2015. p. 743–6. https://doi.org/10.1109/embc.2015.7318469.
Liu F, Yang L. A novel cell detection method using deep convolutional neural network and maximum-weight independent set. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent. vol. 9351. Springer International Publishing: 2015. p. 349–57. https://doi.org/10.1007/978-3-319-42999-1_5.
Chen H, Dou Q, Wang X, Qin J, Heng P-A. Mitosis detection in breast 966 cancer histology images via deep cascaded networks. In: AAAI Conf. Artif. Intell. MDPI AG: 2016. p. 1160–6.
Xu Z, Huang J. Detecting 10,000 cells in one second. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 9901. Springer International Publishing: 2016. p. 676–84. https://doi.org/10.1007/978-3-319-46723-8_78.
Wang S, Yao J, Xu Z, Huang J. Subtype cell detection with an accelerated deep convolution neural network. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 9901. Springer International Publishing: 2016. p. 640–8. https://doi.org/10.1007/978-3-319-46723-8_74.
Xie Y, Xing F, Kong X, Yang L. Beyond classification: structured regression for robust cell detection using convolutional neural network. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 9351. Springer International Publishing: 2015. p. 358–65. https://doi.org/10.1007/978-3-319-24574-4_43.
Sirinukunwattana K, Raza SEA, Tsang YW, Snead DRJ, Cree IA, Rajpoot NM. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans Med Imaging. 2016; 35(5):1196–206.
Xie Y, Kong X, Xing F, Liu F, Su H, Yang L. Deep voting: a robust approach toward nucleus localization in microscopy images. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 9351. Springer International Publishing: 2015. p. 374–82. https://doi.org/10.1007/978-3-319-24574-4_45.
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: IEEE Conf. Comput. Vis. Pattern Recognit. IEEE: 2015. p. 3431–40. https://doi.org/10.1109/cvpr.2015.7298965.
Xie Y, Xing F, Shi X, Kong X, Su H, Yang L. Efficient and robust cell detection: A structured regression approach. Med Image Anal. 2018; 44:245–54.
Xie W, Noble JA, Zisserman A. Microscopy cell counting with fully convolutional regression networks. In: MICCAI 1st Workshop on Deep Learning in Medical Image Analysis. Informa UK Limited: 2015. p. 1–8. https://doi.org/10.1080/21681163.2016.1149104.
Zhou Y, Dou Q, Chen H, Qin J, Heng PA. SFCN-OPI: Detection and fine-grained classification of nuclei using sibling fcn with objectness prior interaction. In: AAAI Conf. Artif. Intell: 2018. p. 2652–9.
Rempfler M, Kumar S, Stierle V, Paulitschke P, Andres B, Menze BH. Cell lineage tracing in lens-free microscopy videos. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent. Springer International Publishing: 2017. p. 3–11. https://doi.org/10.1007/978-3-319-66185-8_1.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: IEEE Conf. Comput. Vis. Pattern Recognit. IEEE: 2016. p. 770–8. https://doi.org/10.1109/cvpr.2016.90.
PyTorch. 2017. https://github.com/pytorch. Accessed Oct 2017.
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014; 15:1929–58.
Kuhn HW. The hungarian method for the assignment problem. Nav Res Logist Q. 1955; 2:83–97.
Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent. Springer International Publishing: 2015. p. 234–41. https://doi.org/10.1007/978-3-319-24574-4_28.
Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Int. Conf. Mach. Learn., vol. 37. Lille: PMLR: 2015. p. 448–56.
Clevert D-A, et al.Fast and accurate deep network learning by exponential linear units (elus). In: Int. Conf. Learn. Repres: 2016. p. 1–14.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: Int. Conf. Learn. Represent: 2015. p. 1–14.
Dumoulin V, Visin F. A guide to convolution arithmetic for deep learning. 2016:1–31. arXiv:1603.07285 [stat.ML].
Chen H, Qi X, Yu L, Heng PA. Dcan: Deep contour-aware networks for accurate gland segmentation. In: IEEE Conf. Comput. Vis. Pattern Recognit. IEEE: 2016. p. 2487–96. https://doi.org/10.1109/cvpr.2016.273.
Kainz P, Urschler M, Schulter S, Wohlhart P, Lepetit V. You should use regression to detect cells. In: Int. Conf. Med. Image Comput. Comput. Assist. Intervent., vol. 9351. Springer International Publishing: 2015. p. 276–83. https://doi.org/10.1007/978-3-319-24574-4_33.
Szegedy C, Toshev A, Erhan D. Deep neural networks for object detection. In: Adv. Neural Inform. Process. Sys. Curran Associates, Inc.: 2013. p. 2553–61.
LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998; 86(11):2278–324.
We thank the TCGA Research Network for data access. We also thank all the BICI2 lab members for their support and discussion.
Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R21CA237493. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funding body did not play any role in the design of the study, collection, analysis and interpretation of data or the writing of the manuscript.
Department of Biostatistics and Informatics, and the Data Science to Patient Value initiative, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, Colorado, 80045, United States
Fuyong Xing
J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida, 32611, United States
Yuanpu Xie, Xiaoshuang Shi, Pingjun Chen & Lin Yang
Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida, 32611, United States
Zizhao Zhang & Lin Yang
Yuanpu Xie
Xiaoshuang Shi
Pingjun Chen
Zizhao Zhang
Lin Yang
FX and LY designed the study. FX and YX developed the method, designed the experiments and analyzed the results. FX, YX, XS, PC and ZZ collected the data and prepared data annotations. FX wrote the manuscript. LY supervised the study. All authors have read and approved the final manuscript.
Correspondence to Fuyong Xing.
The original version of this article was revised: - The captions for Fig. 1 and Fig. 2 have been switched; - The references to Fig. 1 and Fig. 2 have been switched within the main text.
Supplementary document. This supplementary document contains the precision-recall curves of nucleus detection using MiocroNet on all the 23 datasets. (PDF 153 kb)
corrected publication 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Xing, F., Xie, Y., Shi, X. et al. Towards pixel-to-pixel deep nucleus detection in microscopy images. BMC Bioinformatics 20, 472 (2019). https://doi.org/10.1186/s12859-019-3037-5
Nucleus detection
Microscopy images
Deep neural networks
Imaging, image analysis and data visualization | CommonCrawl |
Estimating the daily trend in the size of the COVID-19 infected population in Wuhan
Qiu-Shi Lin1,
Tao-Jun Hu2 &
Xiao-Hua Zhou ORCID: orcid.org/0000-0001-7935-12221,3,4
The Commentary to this article has been published in Infectious Diseases of Poverty 2020 9:129
The outbreak of coronavirus disease 2019 (COVID-19) has become a pandemic causing global health problem. We provide estimates of the daily trend in the size of the epidemic in Wuhan based on detailed information of 10 940 confirmed cases outside Hubei province.
In this modelling study, we first estimate the epidemic size in Wuhan from 10 January to 5 April 2020 with a newly proposed model, based on the confirmed cases outside Hubei province that left Wuhan by 23 January 2020 retrieved from official websites of provincial and municipal health commissions. Since some confirmed cases have no information on whether they visited Wuhan before, we adjust for these missing values. We then calculate the reporting rate in Wuhan from 20 January to 5 April 2020. Finally, we estimate the date when the first infected case occurred in Wuhan.
We estimate the number of cases that should be reported in Wuhan by 10 January 2020, as 3229 (95% confidence interval [CI]: 3139–3321) and 51 273 (95% CI: 49 844–52 734) by 5 April 2020. The reporting rate has grown rapidly from 1.5% (95% CI: 1.5–1.6%) on 20 January 2020, to 39.1% (95% CI: 38.0–40.2%) on 11 February 2020, and increased to 71.4% (95% CI: 69.4–73.4%) on 13 February 2020, and reaches 97.6% (95% CI: 94.8–100.3%) on 5 April 2020. The date of first infection is estimated as 30 November 2019.
In the early stage of COVID-19 outbreak, the testing capacity of Wuhan was insufficient. Clinical diagnosis could be a good complement to the method of confirmation at that time. The reporting rate is very close to 100% now and there are very few cases since 17 March 2020, which might suggest that Wuhan is able to accommodate all patients and the epidemic has been controlled.
As of 5 April 2020, the National Health Commission (NHC) of China has confirmed a total of 81 708 cases of COVID-19 in the mainland of China, including 265 severe cases and 3331 deaths. An additional total of 88 suspected cases were reported. Wuhan has 50 008 confirmed cases. The NHC has also received 890 confirmed reports in Hong Kong Special Administrative Region, 44 in Macau Special Administrative Region, and 363 in Taiwan [1]. More than one million cases have been detected outside China.
Despite the considerable medical resources and personnel that have been dispensed to combat COVID-19 in Hubei province, hospital capacity was overburdened in the early stage of this epidemic. There was a shortage of hospital beds needed to accommodate the rising number of COVID-19 patients. In response to this growing crisis, Wuhan transformed hotels, venues, training centers and college dorms into quarantine and treatment centers for COVID-19 patients. Further, 13 temporary treatment centers were built to provide over 10 000 beds [2]. Therefore, a careful and precise understanding of the potential number of cases in Wuhan is crucial for the prevention and control of the COVID-19 outbreak. Wu et al. [3] provided an estimate of the total number of cases of COVID-19 in Wuhan, using the number of cases exported from Wuhan to cities outside the mainland of China. However, since the number of cases is small, their estimate of the size of the epidemic in Wuhan may not be precise and has large variability. Using the number of cases exported from Wuhan to all cities, including cities in China outside Hubei province, You et al. [4] proposed a method to estimate the total number of cases of COVID-19 in Wuhan. However, their method can only give an estimate of the cumulative number of cases until a certain date.
In this article, we propose a new statistical method to estimate daily number of cases in Wuhan under a similar dynamic equation model as the one in reference [3]. Unlike the one in reference [3], our method can also handle the missing information on whether a case is exported from Wuhan.
The spread of COVID-19 outside Hubei province is relatively controlled given the adequate medical resources. We use the reported number outside Hubei as it is a fairly accurate representation of the actual epidemic situation. In this modelling study, we first estimate the epidemic size in Wuhan from 10 January to 5 April 2020, based on the confirmed cases outside Hubei province that left Wuhan by 23 January 2020. Since some confirmed cases have no information on whether they visited Wuhan before, we adjust the number of imported cases after taking these missing values into account. We then calculate the reporting rate in Wuhan from 20 January to 5 April 2020. Finally, we estimate the date when the first patient was infected.
Data retrieved from publicly available records from provincial and municipal health commissions in China and ministries of health in other countries include detailed information for 10 940 confirmed cases outside Hubei province. An additional table in the Supplementary Materials shows these websites in more detail [see Data_source.xlsx]. Information on confirmed cases including region, gender, age, date of symptom onset, date of confirmation, history of travel or residency in Wuhan, and date of departure from Wuhan. We display demographic characteristics of these patients in Table 1. Among the 7500 patients with gender data, 3509 (46.8%) are female. The mean age of patients is 44.48 and the median age is 44. The youngest confirmed patient outside Hubei province was only 5 days old while the oldest is 97 years old (see Table 1).
Table 1 Demographic characteristics of patients with COVID-19 outside Hubei province
We display the epidemiological data categorized by the date of confirmation in Table 2. An imported case means a patient that had been to Wuhan and was detected outside Hubei province. A local case means a confirmed case that had not been to Wuhan. Among the total of 10 940 cases, 6903 (63.1%) have such epidemiological information. The number of imported cases reached its peak on 29 January 2020, and the fourth column of Table 2 shows that the proportion of imported cases declines over time. This might reflect the effect of containment measures taken in Hubei province to control the COVID-19 outbreak [5]. Meanwhile, the daily counts of local cases are over 300 from 2 February to 7 February 2020, which indicate that infections among local residents should be a major concern for authorities outside Hubei province.
Table 2 Patient data categorized by the date of confirmation
The last column of Table 2 lists the mean time from symptom onset to confirmation for patients confirmed on each day. The median duration of all cases is 5 days, and the mean is 5.54 days. In general, the detection period decreased in the first week after 20 January 2020, but increased since then. The improvements in detection speed and capacity might cause the initial decline, and the rise may be due to more thorough screening, leading to the detection of patients with mild symptoms who would otherwise not go to the hospitals [6].
The proposed method relies on the following assumptions:
Between 10 January and 23 January 2020, the average daily proportion of departing from Wuhan is p.
There is a d = d1 + d2-day window between infection and detection, including a d1-day incubation period and a d2-day delay from symptom onset to detection.
Patients are not able to travel d days after infection.
The proportion of imported cases in the patients with no information is the same as the observed proportion on each day.
Trip durations are long enough that a traveling patient infected in Wuhan will develop symptoms and be detected in other places rather than after returning to Wuhan.
All travelers leaving Wuhan, including transfer passengers, have the same risk of infection as local residents.
Traveling is independent of the exposure risk to COVID-19 or of infection status.
Recoveries are not considered in this method.
Assumptions 1–4 are used explicitly in the Methods section. They are fundamental assumptions for our statistical model. Other assumptions might also affect the result of our model, and we make some remarks about our assumptions.
10 January 2020 is the start of Chinese New Year travel rush, and 23 January 2020, is the date of Wuhan lockdown [5]. In the total of 10 940 cases, only 131 (1.2%) cases' date of departure from Wuhan are not in this period. They are excluded from our analysis.
If the true average daily proportion of leaving Wuhan is larger than the assumed p, this violation of Assumption 1 could lead to overestimation of the number of cases in Wuhan.
If the average time from infection to detection is longer than the assumed d days, this violation of Assumption 2 would lead to an overestimation.
If travelers have a lower risk of infection than residents in Wuhan, this violation of Assumption 6 would cause an underestimation.
If infected individuals are less likely to travel due to the health conditions, this violation of Assumption 7 would cause an underestimation.
In the Supplementary Appendix A, we perform the sensitivity analysis on the effect of some of the violations on our results.
Let Day t0 denote the date of infection for the very first case. Let Nt be the cumulative number of cases that should be confirmed in Wuhan by Day t. Other notations of our model are defined in Table 3.
Table 3 Notations for our model
The numbers Tt, It, and Lt are the observed data used in our model, tc, r, and K are the parameters that determine how Nt changes over time.
The growth trend of the size Nt of infected population is determined by the following ordinary differential equation:
$$ \frac{d{N}_t}{dt}=\frac{r}{K}{N}_t\left(K-{N}_t\right),\kern0.5em r>0,K>0, $$
where K is the size of the population that are susceptible to COVID-19 in Wuhan, and r is a constant that controls the growth rate of Nt. This is the modified version of the famous SIR model [3, 10] in epidemiology. In the equation (1), the growth rate of Nt is proportional to the product of Nt and the number K − Nt of people that are susceptible but not infected yet. It is a reasonable model for the epidemic transmission. At the beginning of this epidemic, when Nt is small, people have little knowledge of COVID-19, Nt grows at an exponential rate r. As Nt becomes larger, containment measures are taken to control it, the growth rate of Nt slows down, resulting in a sigmoid curve of Nt. Detailed explanations of the model (1) are given in the Supplementary Appendix B. The model (1) has an analytical solution,
$$ {N}_t=\frac{K}{1+{e}^{-r\left(t-{t}_c\right)}}=K{f}_t, $$
where \( {f}_t=\frac{1}{1+{e}^{-r\left(t-{t}_c\right)}} \), and the derivative \( \frac{d{N}_t}{dt} \) is maximized at t = tc, \( \frac{r}{2}=\frac{d\log {N}_{t_c}}{dt} \) is the growth rate of logNt at time tc, K is a parameter to be estimated.
We use data on the confirmed cases who left Wuhan between 10 January and 23 January 2020, to estimate K. Under Assumption 2, cases infected on Day t will be detected on Day t + d, so the number of infected cases in Wuhan is Nt + d on Day t. If t0 ≤ t ≤ t0 + d, there should be no confirmed cases. If t0 + d < t ≤ t0 + 2d, imported cases on Day t are infected in Wuhan on Day t − d. There are Nt infected cases in Wuhan on Day t − d, hence the number of imported cases xt on Day t follows a binomial (Nt, p) distribution, where p is the assumed average daily probability of leaving Wuhan between 10 January and 23 January 2020. If t > t0 + 2d, under Assumption 3, Nt − d patients are not able to travel, xt has a binomial (Nt − Nt − d, p) distribution. Let Xt be the cumulative number of imported cases by Day t, then
$$ {X}_t=\sum \limits_{k=1}^t{x}_k\sim \mathrm{Binomial}\left(\sum \limits_{k=t-d+1}^t{N}_k,p\right),\kern0.75em t\ge {t}_0+2d. $$
From equations (2) and (3), \( {X}_t\sim \mathrm{Binomial}\left(K\sum \limits_{k=t-d+1}^t{f}_k,p\right) \). The parameter estimate \( \hat{K} \) is derived by maximizing the likelihood function
$$ l(K)=\left(\genfrac{}{}{0pt}{}{K\sum \limits_{k=t-d+1}^t{f}_k}{X_t}\right){p}^{X_t}{\left(1-p\right)}^{K\sum \limits_{k=t-d+1}^t{f}_k-{X}_t}. $$
The lower and upper bound of the 95% confidence interval \( \left[\hat{K_l}\hat{,{K}_u}\right] \) are values such that the cumulative distribution function \( F(K)={\sum}_{x=0}^{X_t}l(K) \) equals to 0.975 and 0.025, respectively. The reporting rate is the reported cumulative number of cases in Wuhan on Day t divided by our estimated number \( \hat{N_t} \). The estimate of the date t0 of first infection is obtained by solving the equation \( {N}_{t_0}=1. \)
Determining the number of imported cases xt plays a crucial role in the modeling procedure. Note that not all cases have clear records on the history of travel or residency in Wuhan, we need to impute the missing values. Under Assumption 4, the proportion of imported cases in the Ut patients with no information is the same as the observed proportion \( \frac{I_k}{I_k+{L}_k} \). Therefore,
$$ {x}_t={I}_t+{U}_t\times \frac{I_k}{I_k+{L}_k}={T}_k\times \frac{I_k}{I_k+{L}_k}. $$
The average daily proportion of leaving Wuhan between 10 January and 23 January 2020 is estimated to be the ratio of daily volume of travelers to the population of Wuhan (14 million). More than 5 million people were estimated to leave Wuhan due to the Spring Festival and epidemic [7]. This number is mentioned by Wuhan Mayor in a press conference. We assume these passengers left Wuhan between the start of Chinese New Year travel rush on 10 January 2020, and the lockdown of Wuhan city on 23 January 2020. During the travel rush, 34% of the passengers traveled across 300 km [8]. Major cities outside Hubei province are generally over 300 km from Wuhan. This would imply, on average, the daily probability p of traveling from Wuhan to places outside Hubei province would be 5 × 0.34/14/14 = 0.009. Li et al. estimated that the mean incubation period of 425 patients with COVID-19 was 5.2 days (95% CI: 4.1–7.0) [9]. The mean time from symptom onset to detection calculated from our data is 5.54 days, so we choose d = d1 + d2 = 11 days. On 29 January 2020, there was the maximum count of imported cases. Since xt has a binomial (Nt − Nt − d, p) distribution with constant p, Nt − Nt − d also reaches its maximum at t= 29 January 2020. From the logistic function (2), tc is the midpoint of t and t − d, that is \( t-\frac{d}{2}= \) 24 January 2020, which is shortly after the lockdown of Wuhan city [5]. Wu et al. estimated the epidemic doubling time as 6.4 days (95% CI: 5.8–7.1) as of 25 January 2020 [3]. From this result, we estimate that \( \frac{r}{2}=\frac{d\log {N}_{t_c}}{dt}=\frac{\ln 2}{6.4}=0.1 \). Using these values for parameters p, d, tc, and r, we can derive the maximum likelihood estimate \( \hat{K}=51\ 273, \) with 95% CI: 49 844–52 734.
We estimate the number of cases that should be reported in Wuhan by 10 January 2020, as 3229 (95% CI: 3139–3321) and 51 273 (95% CI: 49 844–52 734) by 5 April 2020. Figure 1 shows how the estimated number of cases in Wuhan increases over time, together with the 95% confidence bands.
Estimated number of total cases in Wuhan
As shown in Fig. 2, the reporting rate has grown rapidly from 1.5% (95% CI: 1.5–1.6%) on 20 January 2020 to 39.1% (95% CI: 38.0–40.2%) on 11 February 2020. It becomes 71.4% (95% CI: 69.4–73.4%) on 13 February 2020, and reaches 97.5% (95% CI: 94.8–100.3%) on 5 April 2020.
The ratio of reported number of cases to the estimated number
Table 4 gives the number of confirmed cases reported by Wuhan Health Commission, the estimated number and the reporting rate, as well as the 95% confidence intervals. By solving for t in the equation Nt = 1 with the expression of Nt given in (2), we obtain an estimate of the date of first infection as 30 November 2019.
Table 4 Model estimated number of cases and reporting rates
Most studies estimating the epidemic size of COVID-19 in Wuhan use the reported number of cases to predict the future trend. These researches ignore the possibility of considerable number of unreported cases in the early stage of this outbreak in Wuhan. We estimate the actual size of epidemic in Wuhan and predict the future trend based on information about COVID-19 cases outside Hubei province. Several recent studies share similar ideas that utilize external data to infer the number of cases in Wuhan. You et al. [4] estimated a total of 3933 cases of COVID-19 in Wuhan (95% CI: 3454–4450) that had an onset of symptoms by 19 January 2020. Wu et al. [3] estimated that 75 815 individuals (95% CI: 37 304–130 330) have been infected in Wuhan as of 25 January 2020. This number far exceeds 50 008 cumulative cases reported in Wuhan, which seems not very reasonable. Nishiura et al. [11] estimated a total of 20 767 infected individuals as of 29 January 2020 based on a binomial model, which is simplified version of model (3), and eight confirmed cases on three chartered flights evacuating Japanese citizens from Wuhan. These results are estimates of the cumulative number of cases in Wuhan until a certain date and have wide confidence intervals due to limited data size. Using information of over 10 000 confirmed cases outside Hubei province, our statistical method can handle the problem of missing data and estimate the daily number of cases in Wuhan as shown in Fig. 1. Maugeri et al. [12] estimate a total of 8724 (95% CI: 8478–8921) infected cases and 92.9% (95% CI: 92.5–93.1%) unreported by 23 January 2020 with a proposed SEIRD model based on the reported number of deaths between 23 January and 9 February 2020. However, a total of 1290 cases were added to the death toll in Wuhan on 18 April 2020 by Wuhan government [13]. Thus, the number of deaths used in their research might not be accurate enough, leading to biases in their estimation. In the early stage of this epidemic, estimated numbers given by our method and existing researches are substantially larger than the reported number of confirmed cases. As of 5 April 2020, the reported cumulative number of cases in Wuhan is very close to the estimated number of our model, indicating the effectiveness of our method for long-term epidemic trend prediction. This method can effectively and accurately estimate the actual number of cases when the testing capability is insufficient. Similar statistical methods and ideas can be applied to other countries or regions that are still suffering from the outbreak of COVID-19 to support the prevention and control of this pandemic.
The major limitation of our methodology, as well as many other existing researches, is that time-varying parameters are not taken into consideration. Assumption 1 assumes that the daily probability of leaving Wuhan between 10 January and 23 January 2020, is approximately constant. Our estimate of traveling probability p might not be accurate due to the missing of exact daily number of traveling people from Wuhan to places outside Hubei province. We will try to improve the accuracy of p with more credible and precise transportation data in future research. Quarantine measures may have influences on some parameters in the epidemiological dynamic model (1), so that these parameters may change over time. It is a future research topic to allow time-varying parameters.
We provide a computationally efficient method of estimating the daily development of COVID-19 epidemic in Wuhan. The date of first infection is estimated as 30 November 2019. With the introduction of clinical diagnosis in the confirmation of COVID-19 in Wuhan, the reporting rate increases rapidly from about 40% to over 70% in only 2 days in February 2020. Clinical diagnosis could be a good complement to the method of confirmation in the early stage. The suspected cases in Wuhan declined to zero on 17 March 2020. Both the reported and estimated numbers show that there are very few cases since then. This might suggest the epidemic in Wuhan has been under control. The reporting rate is always increasing during this epidemic. As of 5 April 2020, the reporting rate is very close to 100%. Although the medical resources and testing capacity of Wuhan were insufficient at the beginning of this outbreak, Wuhan is now able to accommodate all patients with the assistance from the whole country and effective measures taken in the fight against COVID-19.
All data and materials used in this work were publicly available.
National Health Commission Update on April 06, 2020. China CDC Weekly. http://weekly.chinacdc.cn/news/TrackingtheEpidemic.htm#NHCApr06. Accessed 6 Apr 2020.
Wang XD. Hubei ordered to admit all patients in hospitals. China Daily https://www.chinadaily.com.cn/a/202002/09/WS5e3fba1ca3101282172760aa.html. Accessed 9 Feb 2020.
Wu JT, Leung K, Leung GM. Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. Lancet. 2020;395(10225):689–97.
You C, Lin Q, Zhou X. An estimation of the total number of cases of NCIP (2019-nCoV)—Wuhan, Hubei Province, 2019–2020. China CDC Weekly. 2020;2(6):87–91.
China declares lockdown in Wuhan on Thursday due to coronavirus outbreak. Tass. https://tass.com/world/1111981. Accessed 23 Jan 2020.
Xin W. Beijing to set up checkpoints in all residential communities. China Daily. https://www.chinadaily.com.cn/a/202002/10/WS5e415cb1a3101282172766c4.html. Accessed 1 Feb 2020.
5 million-plus leave Wuhan: Mayor. China Daily. https://www.chinadaily.com.cn/a/202001/27/WS5e2dcd01a310128217273551.html. Accessed 27 Jan 2020.
Big data perspective: Wuhan in the Chinese New Year travel rush. Daily Economic News. https://m.nbd.com.cn/articles/2020-01-22/1402239.html. Accessed 22 Jan 2020. (In Chinese).
Li Q, Guan X, Wu P, Wang X, Zhou L, et al. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N Engl J Med. 2020;382(13):1199–207.
Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Pro R Soc Lond A. 1927;115(772):700–21.
Nishiura H, Kobayashi T, Yang Y, Hayashi K, Miyama T, Kinoshita R, et al. The rate of underascertainment of novel coronavirus (2019-nCoV) infection: estimation using japanese passengers data on evacuation flights. J Clin Med. 2020;9(2):419.
Maugeri A, Barchitta M, Battiato S, Agodi A. Estimation of unreported novel coronavirus (SARS-CoV-2) infections from reported deaths: a susceptible-exposed-infectious-recovered-dead model. J Clin Med. 2020;9(5):E1350.
Cui J. Death toll in Wuhan revised in evaluation. China Daily. https://www.chinadaily.com.cn/a/202004/18/WS5e9a3742a3105d50a3d17158.html. Accessed 26 May, 2020.
This research is supported by National Natural Science Foundation of China (Grant No. 82041023) and Zhejiang University special scientific research fund for COVID-19 prevention and control (Grant No. 2020XGZX016).
Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, China
Qiu-Shi Lin & Xiao-Hua Zhou
School of Mathematical Sciences, Peking University, Beijing, 100871, China
Tao-Jun Hu
Department of Biostatistics, School of Public Health, Peking University, Beijing, 100191, China
Xiao-Hua Zhou
Center for Statistical Science, Peking University, Beijing, 100871, China
Qiu-Shi Lin
QL and XZ designed the study. QL and TH collected and analyzed the data. QL, TH, and XZ interpreted the results and wrote the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Xiao-Hua Zhou.
The ethics approval and individual consent was not applicable.
The first file's name is Data_source.xlsx. It is a table that lists the official COVID-19 websites of provincial and municipal health commissions for every city in China. It includes province, city and source websites in both Chinese and English. Information of COVID-19 patients are collected from these websites.
The second file's name is Appendix.docx. It contains two appendices with additional sensitivity analysis and detailed description of our method. They are only for reviewers' convenience, not for publication.
Lin, QS., Hu, TJ. & Zhou, XH. Estimating the daily trend in the size of the COVID-19 infected population in Wuhan. Infect Dis Poverty 9, 69 (2020). https://doi.org/10.1186/s40249-020-00693-4
Transmission patterns and control of COVID-19 epidemic | CommonCrawl |
__FULL__ Free To Air Adult Channels ⏳
Download >> DOWNLOAD
Free To Air Adult Channels
Adult Channel – Wikipedia
With over 200 channels and 50+ videos available to stream anytime, live TV is the best way to stay
"Free to air" or "FTA" means that the content is not distributed by a satellite or cable company, but the distribution is either done by a TV station itself (e.g., WABC, WNYW, or WNBC, all three of which in New York City), by a cooperative group of broadcasters (such as The Disney Channel and The WB, both of which in California) or some combination thereof.
As of March 2016, FTA programming was broadcast by 126 TV channels in 25 countries on four continents. The largest FTA broadcasters are the United States, Canada, the United Kingdom, Australia and New Zealand. Most FTA channels in North America are broadcast in high definition video, though a few such as The Comedy Channel and IFC use SD video.
Watch Free To Air Adult Channels.com websites for free! | Free Daily Videos Online
Free TV channels from every country of the world are collected together for you in one place. Free-x.club even has programs from distant lands like Choc TV.
Watch Free-to-Air TV Online FU-TV is the largest cable free-to-air TV streaming websites in the world, broadcasting more than 1,000
Greatest selection of Premium and Free-to-Air TV Channels in South Africa
This content is not available in your region. For content on free-to-air TV in Australia, New Zealand and South Africa, sign up for email alerts.
Watch TV channels from around the globe. Watch live TV on your pc; movies, TV series, sports,, music. Watch live and movies, TV series, sports,, music.
Free TV with the widest range of online channels and the best quality. Watch thousands of On Demand movies, sports, series and documentaries, live TV from all around the world.
Free-X TV is the world's largest free-to-air satellite TV platform. No-charge and no subscription required. Access all the classic TV channels live or on demand.
No commercials, no additional services, and no subscription. Watch hundreds of channels and top American dramas including Seinfeld, Friends, CSI, American Idol, UFC, and many more.
Watch live and on demand TV channels from USA, Canada and UK for free.
Pornhub is the ultimate xxx porn and sex site. Get your
Is the number one destination for online dating with. Hd world without cable, will show from the free over-the-air hd channels.
DTF international (Australia) Free view channel, Astra 33.68MHz free view channel. DVB-T 2MBPS Freeview channel.
Channel 1 Adult – Astra 2E, Channel 2 Adult – Astra 2F, Channel 3 Adult – Astra 3A, Channel 4
Dish Network offers a great selection of movies, sports, and a digital DVR that's easy to use.Q:
Let $f:G\to K$ be a function between set $G$ and $\mathbb{R}$ and let $a\in \mathbb{R}$. Is it true that the $f\mid_{f^{ -1}(a)}$ and $f^{ -1}(a)$ has same number of elements?
Let $f:G\to K$ be a function between set $G$ and $\mathbb{R}$ and let $a\in \mathbb{R}$. We know that $a$ is an element of $K$ and hence it is also an element of $f(G)$ and there is some $g\in G$ s.t. $f(g)=a$ and this $g$ also belong to $f^{ -1}(a)$ and $f\mid_{f^{ -1}(a)}$ is surjective(since $a$ is an element of $K$). Let $h$ be another element of $f^{ -1}(a)$ and let $g'=f(h)$. Then $f(g)=f(g')$ and hence $g$ and $g'$ belong to $f^{ -1}(f(g))$ and $f\mid_{f^{ -1}(f(g))}$ is also surjective. So now for the proof of the question.
Is it true that for the two elements $g$ and $g'$ i.e. $f(g)=f(g')=a$ that they have same
number of elements?
Any help would be appreciated. Thank you in advance.
It's not true. Take the function $f$ defined by $f:\mathbb
Free to Air Channels in Australia | Satellite TV
Satellite TV Australia is an Australian company of pay TV satellite service provider to the Australian market. It currently has two multiple system operators licenses..Pay TV operator Satellite television is a British company based in Guildford, Surrey.The company offers services through digital satellite broadcasting in the.
TV Adelaide | Adelaide TV, Free to Air, Adult
There's an old sat cruiser coolsat DSR 101 Plus+. Is it? I need to "download" something ONTO the satellites to get more channels, can someone explain this? I'm not .
Free To Air Porn Channels – Black african anal also naked eyes … Free to air porn channels … the large number of adult channels available .
There's a push to have adult programs on commercial television shown at any time of the day. Free TV Australia wants to remove a ban on .
Freeview carries two further adult channels, for which viewers have to pay an .
Free to air (free-to-air in Australia) channels are television signals broadcast directly from satellites or space stations to television receivers or digital video recorders without the use of cable television networks or terrestrial repeaters. They are often referred to as direct-broadcast satellite services or as direct-broadcast satellite television. Free to air television, or FTA television, is now available to viewers in Australia on all satellite television networks.
hi, I have a Samsung 32PHL3004B NCS and I'm following the instructions here on the LG HDTVs to get my channels in Australia. The problem is I have to pay an extra $40 to receive the FOC channels. Is there anywhere in
The following are the free channels that you can watch for free at any time in the day. You don't need anything else to get the free channels. The payment channel is only available at certain times of the day.
How can I watch Adult channels free online? – Quora
Watch Adult Channels From The UK – 12 Free To Air Channels To Watch Online – Yahoo Answers
Free to Air Channels on Sky
Free to air television are channels that are broadcast for free that you can pick up using a digital antennae.
http://dummydoodoo.com/2022/07/17/zwcad-portable-2013/
https://www.coursesuggest.com/wp-content/uploads/2022/07/Kunci_Jawaban_Auditing_Dan_Jasa_Assurance_Jilid_2_Arens_Rapidshare_Zip.pdf
https://www.allegrosingapore.com/wp-content/uploads/2022/07/AssassinsCreedIVBlackFlagFreedomCryCrackFixRELOADED.pdf
https://agedandchildren.org/arturia-minimoog-v-vsti-rtas-v2-5-1-incl-_best_-keygen-air-deepstatus/
http://weedcottage.online/?p=103705
https://maisonrangee.com/wp-content/uploads/2022/07/ragnarok_0_delay_grf_18.pdf
https://fantasyartcomics.com/2022/07/17/igo-primo-dem-files-download-link/
https://khaosod.us/classified/advert/vehiculos-pro-v-6-10-51/
http://sinteg.cat/?p=9316
http://shop.chatredanesh.ir/?p=67179
http://www.vidriositalia.cl/?p=56901
https://www.thirtythousandhomes.org/wp-content/uploads/2022/07/league_of_legends_secret_chat_commands.pdf
https://dragalacoaching1.com/wavesznoisebundlevstfreedownload-upd/
https://www.designonline-deco.com/wp-content/uploads/2022/07/Free_Download_Film_Passengers_English_Movie.pdf
https://mevoydecasa.es/zooskool-animais-metendo-em-mulher/
https://thecryptowars.com/yamaha-diagnostic-software-yds-1-33-software/
https://charlottekoiclub.com/advert/madurai-to-theni-tamil-movie-download-hot/
https://cdn.lyv.style/wp-content/uploads/2022/07/17090410/ellaly.pdf
https://commongroundva.com/2022/07/17/configurar-router-wifi-telefonica-comtrend-ct-5361/
https://weilerbrand.de/wp-content/uploads/2022/07/nfs_undercover_crack_rar_password.pdf
Freesat offers a subscription-free service and channels, so there is no need to pay extra. The guide provides an .
Broadcasters have been warned to ensure that adult programming and .
All that is some good news considering that would-be live streamers are going to need to invest in a computer with a special .
Should your computers not play to your entire favour e.g. small screen, low-resolution or lack of .
Rooster will have to make a rule shift to give digital TV's the same adult-free status that their standard is given at present.
Freesat adult channels. Free to air porn channels.. Tags: fta+adult+channels. Adult satellite channels SatsUK – The Digital TV & Smart Tech Support Forums.
This is a rerun. The first one was put online in August 2012, when the FCCÂ .
Your internet option is not the same thing as the Over-The-Air (OTA) option. OTA is the old-fashioned, analog way.
Although subscribers pay for paid-for channels, broadcasters are .
TBS, the network responsible for Adult Swim, announced its free-to-air network is expanding to 3Â .
All this contrasts with what many Americans learn in school that over-the-air digital signals (FTA) are free and .
AeroWhip AeroWhip is free, you can watch it with no subscription. Do you also want to stream to your mobile devices? Over The Air -.
The FCC is going to be making some changes to bring the TV Land network back to terrestrial TV. Unfortunately for those .
You'll want to have both Android and iPhone 4S, as well as a laptop or desktop running Windows and a working TV antenna. Reception will vary, especially in the oblique northeast .
Viewers that catch the action as it unfolds live on television see the same content as those that watch on .
Cabazon is | CommonCrawl |
Search Results: 1 - 10 of 201837 matches for " Christos P. Panteliadis "
Comment on Childhood Febrile Seizures: Overview and Implications by Tonia Jones and Steven J. Jacobsen
Efterpi Pavlidou, Christos P. Panteliadis
International Journal of Medical Sciences , 2007,
Surgical Treatment for Neonatal Hydrocephalus: Catheter-Based Cerebrospinal Fluid Diversion or Endoscopic Intervention? [PDF]
Matthias Krause, Christos P. Panteliadis, Christian Hagel, Franz W. Hirsch, Ulrich H. Thome, Jürgen Meixensberger, Ulf Nestler
Open Journal of Modern Neurosurgery (OJMN) , 2018, DOI: 10.4236/ojmn.2018.81002
Abstract: Neonatal hydrocephalus can arise from a multitude of disturbances, among them congenital aqueductal stenosis, myelomeningocele or posthemorrhagic complications in preterm infants. Diagnostic work-up comprises transfontanellar ultrasonography, T2 weighted MRI and clinical assessment for rare inherited syndromes. Classification of hydrocephalus and treatment guidelines is based on detailed consensus statements. The recent evidence favors catheter-based cerebrospinal fluid diversion in children below 6 months, but emerging techniques such as neuroendoscopic lavage carry the potential to lower shunt insertion rates. More long-term study results will be needed to allow for individualized, multidisciplinary decision making. This article gives an overview regarding contemporary pathophysiological concepts, the latest consensus statements and most recent technical developments.
Helicobacter pylori infection has no impact on manometric and pH-metric findings in adolescents and young adults with gastroesophageal reflux and antral gastritis: eradication results to no significant clinical improvement
Ioannis Xinias,Theophanis Maris,Antigoni Mavroudi,Christos Panteliadis
Pediatric Reports , 2013, DOI: 10.4081/pr.2013.e3
Abstract: The relationship between Helicobacter pylori (Hp) gastritis and gastroesophageal reflux disease (GERD) remains controversial. The aim was to investigate the association between Hp infection and gastroesophageal reflux (GER) and the impact of Hp eradication on esophageal acid exposure and motility in adolescents and young adults with Hp gastritis and GERD. Sixty-four patients with symptoms suggestive for GERD, of which 40 Hp-positive (group A) and 24 Hp-negative (group B), underwent endoscopy-biopsy, esophageal manometry and 24-hour pH-metry. All group A patients received eradication treatment and were re-evaluated six months later again with 24-hour pH-metry, esophageal manometry, endoscopy-biopsy and clinical assessment. At inclusion, there were no significant differences between the two groups regarding sex, age, grade of endoscopic esophagitis, manometric and pH-metry findings. All Hp-positive patients had an antral predominant gastritis. Eradication of Hp was successful in all patients, and gastritis and esophagitis were healed in all patients. The mean lower esophageal sphincter pressure (LESP) increased significantly from 11.25 mmHg before to 11.71 mmHg after eradication (P<0.05). A significant decrease in reflux index was observed (mean RI 6.02% before versus 4.96% after eradication (P<0.05). However clinical symptoms of GER improved not significantly after 6 months follow up. Conclusively, in children and young adults with GER symptoms and GERD, the presence or absence of Hp has no impact on manometric and pH-metric findings. Eradication of Hp infection results in increase in LESP with a consequent decrease in esophageal acid exposure but not significant clinical improvement.
Neurocognitive evaluation of oxcarbazepine monotherapy in children with benign chilhood epilepsy
Tzitiridou Maria,Panou Dora,Kambas Athanasios,Panteliadis Christos
Annals of General Psychiatry , 2006, DOI: 10.1186/1744-859x-5-s1-s158
Highights in the History of Epilepsy: The Last 200 Years
Emmanouil Magiorkinis,Aristidis Diamantis,Kalliopi Sidiropoulou,Christos Panteliadis
Epilepsy Research and Treatment , 2014, DOI: 10.1155/2014/582039
Abstract: The purpose of this study was to present the evolution of views on epilepsy as a disease and symptom during the 19th and the 20th century. A thorough study of texts, medical books, and reports along with a review of the available literature in PubMed was undertaken. The 19th century is marked by the works of the French medical school and of John Hughlings Jackson who set the research on epilepsy on a solid scientific basis. During the 20th century, the invention of EEG, the advance in neurosurgery, the discovery of antiepileptic drugs, and the delineation of underlying pathophysiological mechanisms, were the most significant advances in the field of research in epilepsy. Among the most prestigious physicians connected with epilepsy one can pinpoint the work of Henry Gastaut, Wilder Penfield, and Herbert Jasper. The most recent advances in the field of epilepsy include the development of advanced imaging techniques, the development of microsurgery, and the research on the connection between genetic factors and epileptic seizures. 1. Introduction The history of epilepsy is intermingled with the history of human existence; the first reports on epilepsy can be traced back to the Assyrian texts, almost 2,000 B.C. [1]. Multiple references to epilepsy can be found in the ancient texts of all civilizations, most importantly in the ancient Greek medical texts of the Hippocratic collection. For example, Hippocrates in his book On Sacred Disease described the first neurosurgery procedure referring that craniotomy should be performed at the opposite side of the brain of the seizures, in order to spare patients from "phlegma" that caused the disease [2]. However, it was not until the 18th and 19th century, when medicine made important advances and research on epilepsy was emancipated from religious superstitions such as the fact that epilepsy was a divine punishment or possession [3, 4]. At the beginning of the 18th century, the view that epilepsy was an idiopathic disease deriving from brain and other inner organs prevailed. One should mention the important work in this field by William Culen (1710–1790) and Samuel A. Tissot whose work set the base of modern epileptology describing accurately various types of epilepsies. 2. Anatomy and Physiology of Epilepsy 2.1. Evolution of Thoughts around the Pathophysiology and Causes of Epilepsy At the beginning of the 19th century, physicians from the French medical school started to publish their research in the field of epileptology; famous French physicians published their works on epilepsy such as Maisonneuve (1745–1826)
On Quantum Risk Modelling [PDF]
Christos E. Kountzakis, Maria P. Koutsouraki
Journal of Mathematical Finance (JMF) , 2016, DOI: 10.4236/jmf.2016.61005
This paper is devoted to the connection between the probability distributions which produce solutions of the one-dimensional, time-independent Schr?dinger Equation and the Risk Measures' Theory. We deduce that the Pareto, the Generalized Pareto Distributions and in general the distributions whose support is a pure subset of the positive real numbers, are adequate for the definition of the so-called Quantum Risk Measures. Thanks both to the finite values of them and the relation of these distributions to the Extreme Value Theory, these new Risk Measures may be useful in cases where a discrimination of types of insurance contracts and the volume of contracts has to be known. In the case of use of the Quantum Theory, the mass of the quantum particle represents either the volume of trading in a financial asset, or the number of insurance contracts of a certain type.
Dynamic Conditional Correlation between Electricity, Energy (Commodity) and Financial Markets during the Financial Crisis in Greece [PDF]
Panagiotis G. Papaioannou, George P. Papaioannou, Akylas Stratigakos, Christos Dikaiakos
Abstract: Liberalization of electricity markets has increasingly created the need for understanding the volatility and correlation structure between electricity, financial and energy commodity markets. This work reveals the existence of structural changes in correlation patterns among these markets and links the changes to both fundamentals and regulatory conditions prevailing in the markets, as well as the current European financial crisis. We apply a Dynamic Conditional Correlation (DCC) GARCH model to a set of market's fundamental variables, related commodity markets and Greece's financial market and microeconomic indexes to study their interaction. Emphasis is given on the period of severe financial crisis of the Country to understand "contagion" and volatility spillover between these markets. This approach enables us to capture the changing co-movement of assets within and between markets (financial, commodity, electricity) as market conditions change. The main results are that there is strong evidence of volatility spillover (or co-volatility) between financial and commodity market, while the Greek electricity market seems to be almost "isolated" from these two markets.
Optimal CUR Matrix Decompositions
Christos Boutsidis,David P. Woodruff
Computer Science , 2014,
Abstract: The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$ matrix $C$ with a subset of $c < n$ columns of $A,$ together with an $r \times n$ matrix $R$ with a subset of $r < m$ rows of $A,$ as well as a $c \times r$ low-rank matrix $U$ such that the matrix $C U R$ approximates the matrix $A,$ that is, $ || A - CUR ||_F^2 \le (1+\epsilon) || A - A_k||_F^2$, where $||.||_F$ denotes the Frobenius norm and $A_k$ is the best $m \times n$ matrix of rank $k$ constructed via the SVD. We present input-sparsity-time and deterministic algorithms for constructing such a CUR decomposition where $c=O(k/\epsilon)$ and $r=O(k/\epsilon)$ and rank$(U) = k$. Up to constant factors, our algorithms are simultaneously optimal in $c, r,$ and rank$(U)$.
Tobacco Networks in the Aegean Islands [PDF]
Christos Bakalis
Advances in Historical Studies (AHS) , 2016, DOI: 10.4236/ahs.2016.52007
Abstract: The Aegean Archipelagos could be seen as a networked cultural space covered by multilevel web frameworks. The lines (flows or trajectories) are routes of people, goods, ideas and the nodes are the islands and their communities. At the same time islands are also fields of production that makes them matrixes of social and cultural creation. This paper deals with the cultivation and marketing of tobacco in the Aegean islands from the second half of the 19th century to the end of the 20th century. The presence of tobacco was combined with the transition to modernity for island communities when the capitalistic economy was spread and became dominant in the eastern Mediterranean. Following the division of labor, the tobacco processing reinforced the dependency of island economies from regional and global financial systems; also the establishment of an industrial working class within island societies. The tobacco left its mark in social and spatial formation in the Aegean islands both in urban and rural environments.
New Information Measures for the Generalized Normal Distribution
Christos P. Kitsos,Thomas L. Toulias
Information , 2010, DOI: 10.3390/info1010013
Abstract: We introduce a three-parameter generalized normal distribution, which belongs to the Kotz type distribution family, to study the generalized entropy type measures of information. For this generalized normal, the Kullback-Leibler information is evaluated, which extends the well known result for the normal distribution, and plays an important role for the introduced generalized information measure. These generalized entropy type measures of information are also evaluated and presented. | CommonCrawl |
\begin{document}
\title[Ideal structure of $C^*$-algebras of SGDS] {Ideal structure of \boldmath{$C^*$}-algebras of singly generated dynamical systems} \author[Takeshi KATSURA]{Takeshi KATSURA} \address{Department of Mathematics, Keio University, Yokohama, 223-8522 JAPAN} \email{[email protected]}
\subjclass[2000]{Primary 46L05; Secondary 46L55, 37B99}
\keywords{}
\begin{abstract} In this paper, we show that the set of all ideals of the $C^*$-algebras of a singly generated dynamical system corresponds bijectively to the set of all subsets of the product of the space of the system and the circle
satisfying three conditions.
\end{abstract}
\maketitle
\setcounter{section}{-1}
\section{Introduction}
In the theory of $C^*$-al\-ge\-bra s, it is very important yet very difficult to list all ideals of a given $C^*$-al\-ge\-bra . In this paper, we list all ideals of a $C^*$-al\-ge\-bra of a singly generated dynamical system in terms of subsets of a certain space.
In \cite{R2}, Renault introduces the notion of a singly generated dynamical system (SGDS) and associates a $C^*$-al\-ge\-bra with it. In \cite{K1}, the author introduces the notion of a topological graph and associates a $C^*$-al\-ge\-bra with it. In \cite{K2}, the author shows that an SGDS can be considered as a topological graph and their $C^*$-al\-ge\-bra s coincide. Conversely in \cite{Ksh} every $C^*$-al\-ge\-bra of a topological graph is a $C^*$-al\-ge\-bra of some SGDS (see also \cite{Yee} and \cite{KL}). Thus the class of $C^*$-al\-ge\-bra s of SGDSs is fairly large. Although in \cite{R2} spaces in SGDSs are assumed to be second countable, there is no such restriction in \cite{K1}. In this paper, spaces are not assumed to be second countable, and hence the associated $C^*$-al\-ge\-bra s are not necessarily separable.
The purpose of this paper is to investigate the ideal structure of $C^*$-al\-ge\-bra s of SGDSs. Since we do not assume second countability, it is almost impossible to list primitive ideals of the associated $C^*$-al\-ge\-bra in terms of points of the space. Instead in this paper, we list all ideals directly in terms of subsets of the product of the space and the circle (Theorem~\ref{MainThm}). The investigation here may be useful to describe the topology of primitive ideal space when the space is second countable (cf.\ \cite{SW}).
This paper is organized as follows. In Section~\ref{Sec:C*S}, we introduce SGDSs and their $C^*$-al\-ge\-bra s. To define a $C^*$-al\-ge\-bra of an SGDS, we do not use groupoids as in \cite{R2}, but use a definition similar to \cite{K1}. We examine the structure of a $C^*$-al\-ge\-bra of an SGDS which we need later. In Section~\ref{Sec:irrep}, we construct lots of irreducible representations of a $C^*$-al\-ge\-bra of an SGDS. In Section~\ref{Sec:gii}, we associate a subset of a space to each ideal, and an ideal to each subset. Then, we list all gauge-invariant ideals in terms of invariant subsets (Proposition~\ref{Prop:bij}). In Section~\ref{Sec:prime}, we list all prime ideals, and show that the set of primitive ideals constructed in Section~\ref{Sec:irrep} is sufficiently large (Corollary~\ref{Cor:inter}). In Section~\ref{Sec:IYI}, we associate a subset of the product of a space and the circle to each ideal, and an ideal to each subset. We show that ideals of the $C^*$-al\-ge\-bra of an SGDS correspond injectively to a subset of the product of a space and the circle (Proposition~\ref{Prop:IYI}). In Section~\ref{Sec:YIad}, we show that this subset is an admissible set which is defined in Definition~\ref{Def:adm} (Proposition~\ref{Prop:YIinv}). In Section~\ref{Sec:YIY}, we show that the set of all ideals of the $C^*$-al\-ge\-bra of an SGDS corresponds bijectively to the set of all admissible sets (Theorem~\ref{MainThm}).
We try to make this paper self-contained as well as possible. However, in Section~\ref{Sec:prime} we need two kinds of uniqueness theorems for $C^*$-al\-ge\-bra s of topological graphs which are quoted in Appendix~\ref{Sec:UT}.
\noindent {\bfseries Acknowledgments.} This work was supported by JSPS KAKENHI Grant Number JP18K03345.
\section{Singly generated dynamical systems}\label{Sec:C*S}
\begin{definition} A pair $\Sigma=(X,\sigma)$ is a {\em singly generated dynamical system} (in short SGDS) if $X$ is a locally compact space and $\sigma$ is a local homeomorphism from an open subset $U$ of $X$ to $X$. \end{definition}
It should be emphasized that $X$ is not necessarily second countable. Throughout this paper, $\Sigma=(X,\sigma)$ means an SGDS. We denote by $\mathbb{N} := \{0,1,2,\ldots\}$ the set of natural numbers.
\begin{definition} We define a decreasing sequence $\{U_n\}_{n\in\mathbb{N}}$ of open subsets of $X$ by $U_0=X$, $U_1=U$ and $U_{n+1}=\{x\in U_n\mid \sigma^{n}(x)\in U\}$. \end{definition}
The open set $U_n$ is the domain of $\sigma^{n}$, and $\sigma^{n}\colon U_n\to X$ is a local homeomorphism.
\begin{definition} For $\xi,\eta\in C_c(U)$, we define $\ip{\xi}{\eta}\in C_c(X)$ by \[ \ip{\xi}{\eta}(x)=\sum_{\sigma(y)=x}\overline{\xi}(y)\eta(y) \] for $x\in X$, where $\overline{\xi}\in C_c(U)$ is defined by $\overline{\xi}(y)=\overline{\xi(y)}$ for $y\in U$. \end{definition}
In the expression above, we implicitly assume that $y$ is in $U$. We use this convention. Namely if we write $\sigma^n(x)=\cdots$, we assume $x\in U_n$. The fact that $\ip{\xi}{\eta}$ is in $C_c(X)$ can be shown by using the fact that $\sigma$ is a local homeomorphism (see \cite[Lemma~1.5]{K1}).
\begin{definition}\label{Def:C*S} The $C^*$-al\-ge\-bra $C^*(\Sigma)$ of an SGDS $\Sigma=(X,\sigma)$ is the universal $C^*$-al\-ge\-bra generated by the images of a $*$-ho\-mo\-mor\-phism $t^0\colon C_0(X)\to C^*(\Sigma)$ and a linear map $t^1\colon C_c(U)\to C^*(\Sigma)$ satisfying \begin{enumerate} \rom \item $t^1(\xi)^*t^1(\eta)=t^0(\ip{\xi}{\eta})$ for $\xi,\eta\in C_c(U)$, \item $t^1(\xi)t^1(\eta)^*=t^0(\xi\overline{\eta})$ for $\xi,\eta\in C_c(V)$ where $V\subset U$ is an open subset on which $\sigma$ is injective. \end{enumerate} \end{definition}
We call the pair $(t^0,t^1)$ in Definition~\ref{Def:C*S} the {\em universal} pair for $C^*(\Sigma)$. Hereafter, we investigate properties of the universal pair $(t^0,t^1)$ for $C^*(\Sigma)$. For $f \in C_0(X)$ and $\xi \in C_c(U)$, we have $t^1(\xi)t^0(f)=t^1(\xi (f\circ \sigma))$ because one can see $d^*d=0$ for $d= t^1(\xi)t^0(f)-t^1(\xi (f\circ \sigma))$ by the computation \begin{align*} t^1(\eta)^*\big(t^1(\xi)t^0(f)-t^1(\xi (f\circ \sigma))\big) &=t^0(\ip{\eta}{\xi})t^0(f)-t^0(\ip{\eta}{\xi (f\circ \sigma)})\\ &=t^0\big(\ip{\eta}{\xi}f-\ip{\eta}{\xi (f\circ \sigma)})=0 \end{align*} for arbitrary $\eta \in C_c(U)$. For $f \in C_0(X)$ and $\xi \in C_c(U)$, we have $t^0(f)t^1(\xi)=t^1(f \xi)$. To see this, first we may assume $\xi \in C_c(V)$ where $V\subset U$ is an open subset on which $\sigma$ is injective by the partition of unity because the fact that $\sigma$ is locally homeomorphic implies that the set of such $V$ covers $U$. Now for arbitrary $\eta \in C_c(V)$, we have \begin{align*} \big(t^0(f)t^1(\xi) - t^1(f \xi)\big)t^1(\eta)^* &= t^0(f)t^0(\xi\overline{\eta}) - t^0(f\xi\overline{\eta}) =0. \end{align*} Therefore we get $dd^*=0$ for $d= t^0(f)t^1(\xi) - t^1(f \xi)$. We have shown that $t^0(f)t^1(\xi)=t^1(f \xi)$ for all $f \in C_0(X)$ and $\xi \in C_c(U)$. Finally we see that for $f \in C_c(U)$ there exist $\xi_1,\ldots ,\xi_n, \eta_1,\ldots ,\eta_n \in C_c(U)$ such that \begin{align*} t^0(f)=\sum_{k=1}^n t^1(\xi_k)t^1(\eta_k)^*. \end{align*} By the partition of unity, there exist $f_k \in C_c(V_k)$ for $k=1,\ldots, n$ such that $f= \sum_{k=1}^n f_k$ where $V_k\subset U$ is an open subset on which $\sigma$ is injective for $k=1,\ldots, n$. Then for $k=1,\ldots, n$ one can find $\xi_k,\eta_k \in C_c(V_k)$ such that $\xi_k\overline{\eta_k}=f_k$
(for example one can take $\eta_k=|f_k|^{1/2}$). Then we have \begin{align*} t^0(f)=\sum_{k=1}^n t^0(f_k)=\sum_{k=1}^n t^1(\xi_k)t^1(\eta_k)^*. \end{align*} From these computation, one can see that the $C^*$-al\-ge\-bra $C^*(\Sigma)$ is the $C^*$-al\-ge\-bra of the topological graph $E=(X,U,\sigma,\iota)$, where $\iota \colon U \to X$ is the embedding, as defined in \cite[Definition~2.10]{K1}. Thus by \cite[Proposition~10.9]{K2}, the $C^*$-al\-ge\-bra $C^*(\Sigma)$ is isomorphic to the $C^*$-al\-ge\-bra defined in \cite{R2} when $X$ is second countable.
We try to make this paper self-contained as well as possible. We quote results in \cite{K1} which is used to prove the main theorem (Theorem~\ref{MainThm}) only in Appendix~\ref{Sec:UT}. We know that both $t^0$ and $t^1$ are injective by \cite[Proposition~3.7]{K1}. This fact is reproved in Lemma~\ref{Lem:inj}.
\begin{definition} Let $n\in \mathbb{N}$. For $\xi,\eta \in C_c(U_n)$, we define $\ip{\xi}{\eta}_n \in C_c(X)$ by \[ \ip{\xi}{\eta}_n(x)=\sum_{\sigma^n(y)=x}\overline{\xi}(y)\eta(y) \] for $x\in X$. \end{definition}
Note that $\ip{\xi}{\eta}_0=\overline{\xi}\eta$ for $\xi,\eta \in C_c(X)$, and $\ip{\xi}{\eta}_1=\ip{\xi}{\eta}$ for $\xi,\eta \in C_c(U)$.
\begin{lemma}\label{Lem:ip1} Let $n,m\in \mathbb{N}$. For $\xi_1,\eta_1 \in C_c(U_n)$ and $\xi_2,\eta_2 \in C_c(U_m)$, define $\xi,\eta \in C_c(U_{n+m})$ by $\xi(x)=\xi_1(x)\xi_2(\sigma^n(x))$ and $\eta(x)=\eta_1(x)\eta_2(\sigma^n(x))$ for $x \in U_{n+m}$. Then we have \[ \ip{\xi_2}{\ip{\xi_1}{\eta_1}_n\eta_2}_m =\ip{\xi}{\eta}_{n+m} \] \end{lemma}
\begin{proof} For $x \in X$, we have \begin{align*} \ip{\xi_2}{\ip{\xi_1}{\eta_1}_n\eta_2}_m(x) &=\sum_{\sigma^m(y)=x}\overline{\xi_2}(y)\ip{\xi_1}{\eta_1}_n(y)\eta_2(y)\\ &=\sum_{\sigma^m(y)=x}\overline{\xi_2}(y)\Big(\sum_{\sigma^n(z)=y}\overline{\xi_1}(z)\eta_1(z)\Big)\eta_2(y)\\ &=\sum_{\sigma^m(y)=x}\sum_{\sigma^n(z)=y} \overline{\xi_2}(\sigma^n(z))\overline{\xi_1}(z)\eta_1(z)\eta_2(\sigma^n(z))\\ &=\sum_{\sigma^{n+m}(z)=x}\overline{\xi}(z)\eta(z)\\ &=\ip{\xi}{\eta}_{n+m}(x). \qedhere \end{align*} \end{proof}
\begin{lemma}\label{Lem:ip2} Let $n\in \mathbb{N}$. For $\xi_1,\ldots,\xi_n,\eta_1,\ldots,\eta_n \in C_c(U)$, define $\xi,\eta \in C_c(U_{n})$ by \begin{align*} \xi(x)&=\xi_1(x)\xi_2(\sigma(x))\cdots \xi_n(\sigma^{n-1}(x))\\ \eta(x)&=\eta_1(x)\eta_2(\sigma(x))\cdots \eta_n(\sigma^{n-1}(x)) \end{align*} for $x \in U_{n}$. Then we have \[ \ip{\xi_n}{\ip{\xi_{n-1}}{\cdots ,\ip{\xi_2}{\ip{\xi_1}{\eta_1}\eta_2} \cdots \eta_{n-1}}\eta_n} =\ip{\xi}{\eta}_{n} \] \end{lemma}
\begin{proof} Apply Lemma~\ref{Lem:ip1} $(n-1)$ times. \end{proof}
\begin{lemma}\label{Lem:ttt} Let $n\in \mathbb{N}$. For $\xi_1,\ldots,\xi_n,\eta_1,\ldots,\eta_n \in C_c(U)$, we have \begin{align*} \big(t^1(\xi_1)t^1(\xi_2)\cdots t^1(\xi_n)\big)^*& \big(t^1(\eta_1)t^1(\eta_2)\cdots t^1(\eta_n)\big)\\ &= t^0\big(\ip{\xi_n}{\ip{\xi_{n-1}}{\cdots ,\ip{\xi_2}{\ip{\xi_1}{\eta_1}\eta_2} \cdots \eta_{n-1}}\eta_n}\big) \end{align*} \end{lemma}
\begin{proof} This follows from direct computation. \end{proof}
\begin{lemma}\label{Lem:fctn} Let $n,m\in \mathbb{N}$. For $\xi \in C_c(U_{n+m})$, there exist $\xi_1 \in C_c(U_n)$ and $\xi_2 \in C_c(U_m)$ such that $\xi(x)=\xi_1(x)\xi_2(\sigma^n(x))$ for $x \in U_{n+m}$. \end{lemma}
\begin{proof} Take $\xi_1=\xi$, and choose $\xi_2 \in C_c(U_m)$ which is 1 on the compact set $\sigma^n(C) \subset U_m$ where $C \subset U_{n+m}$ is the compact support of $\xi$. \end{proof}
\begin{lemma}\label{Lem:fctn2} Let $n\in \mathbb{N}$. For $\xi \in C_c(U_{n})$, there exist $\xi_1,\xi_2, \ldots,\xi_n \in C_c(U)$ such that $\xi(x)=\xi_1(x)\xi_2(\sigma(x))\cdots \xi_n(\sigma^{n-1}(x))$ for $x \in U_{n}$. \end{lemma}
\begin{proof} Apply Lemma~\ref{Lem:fctn} $(n-1)$ times. \end{proof}
For each integer $n\geq 2$, we define a linear map $t^n\colon C_c(U_n)\to C^*(\Sigma)$. Take $\xi\in C_c(U_n)$. Take $\xi_1,\ldots,\xi_n\in C_c(U)$ such that \[ \xi(x)=\xi_1(x)\xi_2(\sigma(x))\cdots \xi_n(\sigma^{n-1}(x)) \] for all $x \in U_n$. Such functions exist by Lemma~\ref{Lem:fctn2}. We set $t^n(\xi)=t^1(\xi_1)t^1(\xi_2)\cdots t^1(\xi_n)$. This definition is well-defined because Lemma~\ref{Lem:ip2} and Lemma~\ref{Lem:ttt} imply $d^*d=0$ for the difference $d$ of two definitions. By the same reason, we see that $t^n$ is linear. We also see the following.
\begin{lemma}\label{Lem:t*t} For $\xi,\eta \in C_c(U_n)$, we have $t^n(\xi)^*t^n(\eta)=t^0(\ip{\xi}{\eta}_n)$. \end{lemma}
\begin{proof} This follows from Lemma~\ref{Lem:ip2} and Lemma~\ref{Lem:ttt}. \end{proof}
\begin{lemma}\label{Lem:tt=t} Let $n,m\in \mathbb{N}$. For $\xi_1 \in C_c(U_n)$ and $\xi_2 \in C_c(U_m)$, define $\xi \in C_c(U_{n+m})$ by $\xi(x)=\xi_1(x)\xi_2(\sigma^n(x))$ for $x \in U_{n+m}$. Then we have \[ t^n(\xi_1)t^m(\xi_2)=t^{n+m}(\xi) \] \end{lemma}
\begin{proof} When $n \geq 1$ and $m \geq 1$, this follows from the definition of $t^n$. When $n =0$ or $m=0$, this follows from a similar computation as the one after Definition~\ref{Def:C*S}. \end{proof}
\begin{lemma}\label{Lem:t^*t=t} Let $n,m\in \mathbb{N}$. For $\xi \in C_c(U_n)$ and $\eta \in C_c(U_{n+m}) \subset C_c(U_n)$, we have \[ t^n(\xi)^*t^{n+m}(\eta) =t^m(\ip{\xi}{\eta}_n). \] \end{lemma}
\begin{proof} By Lemma~\ref{Lem:fctn}, choose $\eta_1 \in C_c(U_n)$ and $\eta_2 \in C_c(U_m)$ such that $\eta(x)=\eta_1(x)\eta_2(\sigma^n(x))$ for $x \in U_{n+m}$. By Lemma~\ref{Lem:tt=t} and Lemma~\ref{Lem:t*t}, we have \begin{align*} t^n(\xi)^*t^{n+m}(\eta) &=t^n(\xi)^*t^{n}(\eta_1)t^{m}(\eta_2) =t^0(\ip{\xi}{\eta_1}_n)t^{m}(\eta_2)\\ &=t^m(\ip{\xi}{\eta_1}_n\eta_2) =t^m(\ip{\xi}{\eta_1(\eta_2\circ \sigma^n)}_n)=t^m(\ip{\xi}{\eta}_n). \qedhere \end{align*} \end{proof}
\begin{lemma}\label{Lem:cspa} The $C^*$-al\-ge\-bra $C^*(\Sigma)$ is the closure of the linear span of the set \[ \big\{t^n(\xi)t^m(\eta)^*\mid n,m\in\mathbb{N}, \xi\in C_c(U_n), \eta\in C_c(U_m)\}. \] \end{lemma}
\begin{proof} By Lemma~\ref{Lem:tt=t} and Lemma~\ref{Lem:t^*t=t}, the set above is closed under multiplication. This set contains the images of $t^0$ and $t^1$. Hence the closure of the linear span of the set is $C^*(\Sigma)$. \end{proof}
\section{Irreducible representations of $C^*(\Sigma)$}\label{Sec:irrep}
Take an SGDS $\Sigma=(X,\sigma)$. In this section, we construct irreducible representations $\pi_{x_0,\gamma}$ of the $C^*$-al\-ge\-bra $C^*(\Sigma)$.
\begin{definition} For $x_0\in X$, we define the {\em orbit} of $x_0$ by \[ \Orb(x_0)=\{x\in X\mid \text{$\sigma^n(x)=\sigma^m(x_0)$ for some $n,m\in\mathbb{N}$}\}. \] \end{definition}
Note that for $x \in \Orb(x_0)$ we have $\Orb(x)=\Orb(x_0)$. Hence two orbits $\Orb(x)$ and $\Orb(y)$ are either same or disjoint.
\begin{definition} For $x_0\in X$, let $H_{x_0}$ be the Hilbert space whose complete orthonormal system is given by $\{\delta_x\}_{x\in \Orb(x_0)}$. The inner product of $H_{x_0}$ is denoted by $\ip{\cdot}{\cdot}_{x_0}$ which is linear in the second variable. \end{definition}
\begin{definition} For $(x_0,\gamma_0)\in X\times\mathbb{T}$, we define two maps $t^0_{(x_0,\gamma_0)}\colon C_0(X)\to B(H_{x_0})$ and $t^1_{(x_0,\gamma_0)}\colon C_c(U)\to B(H_{x_0})$ by \begin{align*} t^0_{(x_0,\gamma_0)}(f)\delta_x&=f(x)\delta_x,& t^1_{(x_0,\gamma_0)}(\xi)\delta_x&=\gamma_0\sum_{\sigma(y)=x}\xi(y)\delta_y, \end{align*} for $x\in \Orb(x_0)$. \end{definition}
Take $(x_0,\gamma_0)\in X\times\mathbb{T}$ and fix it for a while. It is routine to check that $t^0_{(x_0,\gamma_0)}$ is a well-defined $*$-ho\-mo\-mor\-phism and $t^1_{(x_0,\gamma_0)}$ is a well-defined linear map.
\begin{lemma}\label{Lem:adjoint} For $\xi\in C_c(U)$ and $y\in \Orb(x_0)$, we have \[ t^1_{(x_0,\gamma_0)}(\xi)^*\delta_y =\begin{cases} \gamma_0^{-1}\overline{\xi}(y)\delta_{\sigma(y)}&\text{if $y\in U$}\\ 0&\text{otherwise.} \end{cases} \] \end{lemma}
\begin{proof} For $x,y\in \Orb(x_0)$, we have \begin{align*} \ip{\delta_x}{t^1_{(x_0,\gamma_0)}(\xi)^*\delta_y}_{x_0} &=\ip{t^1_{(x_0,\gamma_0)}(\xi)\delta_x}{\delta_y}_{x_0}\\ &=\bigg\langle \gamma_0\sum_{\sigma(y')=x}\xi(y')\delta_{y'},\delta_y\bigg\rangle_{x_0}\\ &=\sum_{\sigma(y')=x}\gamma_0^{-1}\overline{\xi(y')} \ip{\delta_{y'}}{\delta_y}_{x_0}\\ &=\begin{cases} \gamma_0^{-1}\overline{\xi}(y)& \text{if $x=\sigma(y)$,}\\ 0 & \text{otherwise}. \end{cases} \end{align*} This completes the proof. \end{proof}
\begin{proposition} The pair $(t^0_{(x_0,\gamma_0)}, t^1_{(x_0,\gamma_0)})$ satisfies (i) and (ii) in Definition~\ref{Def:C*S}. \end{proposition}
\begin{proof} Take $\xi,\eta \in C_c(U)$. For $x \in \Orb(x_0)$, we have \begin{align*} t^1_{(x_0,\gamma_0)}(\xi)^*t^1_{(x_0,\gamma_0)}(\eta)\delta_x &=t^1_{(x_0,\gamma_0)}(\xi)^* \Big(\gamma_0\sum_{\sigma(y)=x}\eta(y)\delta_y\Big)\\ &=\gamma_0\sum_{\sigma(y)=x}\eta(y)t^1_{(x_0,\gamma_0)}(\xi)^*\delta_y\\ &=\gamma_0\sum_{\sigma(y)=x}\eta(y) \gamma_0^{-1}\overline{\xi}(y)\delta_{\sigma(y)}\\ &=\sum_{\sigma(y)=x}\overline{\xi}(y)\eta(y)\delta_{x}\\ &=\ip{\xi}{\eta}(x)\delta_{x} \end{align*} by Lemma \ref{Lem:adjoint}. This shows $t^1_{(x_0,\gamma_0)}(\xi)^*t^1_{(x_0,\gamma_0)}(\eta) = t^0_{(x_0,\gamma_0)}(\ip{\xi}{\eta})$. Now take an open subset $V\subset U$ on which $\sigma$ is injective, and take $\xi,\eta\in C_c(V)$. For $y \in \Orb(x_0) \cap V$, we have \begin{align*} t^1_{(x_0,\gamma_0)}(\xi)t^1_{(x_0,\gamma_0)}(\eta)^*\delta_y &=t^1_{(x_0,\gamma_0)}(\xi)\big( \gamma_0^{-1}\overline{\eta}(y)\delta_{\sigma(y)}\big)\\ &=\gamma_0^{-1}\overline{\eta}(y) \gamma_0\sum_{\sigma(y')=\sigma(y)}\xi(y')\delta_{y'}\\ &=\overline{\eta}(y)\xi(y)\delta_{y} \end{align*} by Lemma \ref{Lem:adjoint}. For $y \in \Orb(x_0)\setminus V$, the same equation holds because the both sides become $0$. Hence we get $t^1_{(x_0,\gamma_0)}(\xi)t^1_{(x_0,\gamma_0)}(\eta)^* =t^0_{(x_0,\gamma_0)}(\xi\overline{\eta})$. \end{proof}
\begin{definition} We denote by $\pi_{(x_0,\gamma_0)}\colon C^*(\Sigma)\to B(H_{x_0})$ the $*$-ho\-mo\-mor\-phism induced by the pair $(t^0_{(x_0,\gamma_0)}, t^1_{(x_0,\gamma_0)})$. \end{definition}
\begin{definition} For $(x_0,\gamma_0)\in X\times\mathbb{T}$, we set $P_{(x_0,\gamma_0)}=\ker\pi_{(x_0,\gamma_0)}$. \end{definition}
\begin{lemma}\label{Lem:inj} Let $(t^0,t^1)$ be the universal pair for $C^*(\Sigma)$. Then both $t^0$ and $t^1$ are injective. \end{lemma}
\begin{proof} For each $x \in X$, we have $\ker t^0_{(x,1)} = C_0(X\setminus \overline{\Orb(x)})$. Since $t^0_{(x,1)}=\pi_{(x,1)} \circ t^0$, we have $\ker t^0 \subset \ker t^0_{(x,1)}$. Hence we have \begin{align*} \ker t^0 \subset \bigcap_{x \in X}\ker t^0_{(x,1)} =\bigcap_{x \in X} C_0(X\setminus \overline{\Orb(x)})=0. \end{align*} This shows that $t^0$ is injective. Take $\xi \in C_c(U)$ with $t^1(\xi)=0$. Then we have $t^0(\ip{\xi}{\xi})=t^1(\xi)^*t^1(\xi)=0$. Since $t^0$ is injective, we have $\ip{\xi}{\xi}=0$. This shows $\xi = 0$. Therefore $t^1$ is injective. \end{proof}
We are going to see that the representation $\pi_{(x_0,\gamma_0)}$ is irreducible, and hence $P_{(x_0,\gamma_0)}$ is a primitive ideal of $C^*(\Sigma)$.
\begin{lemma}\label{compute} For $n\in\mathbb{N}$, $\xi\in C_0(U_n)$ and $x\in \Orb(x_0)$, we have \begin{align*} \pi_{(x_0,\gamma_0)}(t^n(\xi))\delta_x &=\gamma_0^n\sum_{\sigma^n(y)=x}\xi(y)\delta_{y}\\ \pi_{(x_0,\gamma_0)}(t^n(\xi))^*\delta_x &=\begin{cases} \gamma_0^{-n}\overline{\xi}(x)\delta_{\sigma^n(x)}& \text{if $x\in U_n$,}\\ 0 & \text{otherwise.} \end{cases} \end{align*} \end{lemma}
\begin{proof} By Lemma~\ref{Lem:fctn2}, we can choose $\xi_1,\ldots,\xi_n\in C_c(U)$ such that \[ \xi(x)=\xi_1(x)\xi_2(\sigma(x))\cdots \xi_n(\sigma^{n-1}(x)) \] for all $x \in U_n$. Then we have \begin{align*} \pi_{(x_0,\gamma_0)}(t^n(\xi))\delta_x &=\pi_{(x_0,\gamma_0)}\big(t^1(\xi_1)t^1(\xi_2)\ldots t^1(\xi_{n-1})t^1(\xi_n)\big)\delta_x\\ &=\pi_{(x_0,\gamma_0)}\big(t^1(\xi_1)t^1(\xi_2)\ldots t^1(\xi_{n-1})\big) \gamma_0\sum_{\sigma(y_1)=x}\xi_n(y_1)\delta_{y_1}\\ &=\pi_{(x_0,\gamma_0)}\big(t^1(\xi_1)\ldots t^1(\xi_{n-2})\big) \gamma_0^2\sum_{\sigma(y_1)=x}\xi_n(y_1) \sum_{\sigma(y_2)=y_1}\xi_{n-1}(y_2)\delta_{y_2}\\ &=\pi_{(x_0,\gamma_0)}\big(t^1(\xi_1)\ldots t^1(\xi_{n-2})\big) \gamma_0^2\sum_{\sigma^2(y_2)=x}\xi_{n-1}(y_2)\xi_n(\sigma(y_2))\delta_{y_2}\\ &\ \,\vdots\\ &=\pi_{(x_0,\gamma_0)}\big(t^1(\xi_1)\big)\gamma_0^{n-1} \!\!\!\sum_{\sigma^{n-1}(y_{n-1})=x}\!\!\! \xi_2(y_{n-1})\xi_3(\sigma(y_{n-1}))\cdots\\ &\phantom{\pi_{(x_0,\gamma_0)}\big(t^1(\xi_1)\big)\gamma_0^{n-1} \!\!\!\sum_{\sigma^{n-1}(y_{n-1})=x}} \cdots\xi_{n-1}(\sigma^{n-3}(y_{n-1}))\xi_n(\sigma^{n-2}(y_{n-1}))\delta_{y_{n-1}}\\ &=\gamma_0^n\sum_{\sigma^n(y)=x}\xi(y)\delta_{y}. \end{align*} From this equation, we can compute $\pi_{(x_0,\gamma_0)}(t^n(\xi))^*\delta_x$ in a similar way to the proof of Lemma \ref{Lem:adjoint}. \end{proof}
\begin{proposition} For $(x_0,\gamma_0)\in X\times\mathbb{T}$, the representation $\pi_{(x_0,\gamma_0)}\colon C^*(\Sigma)\to B(H_{x_0})$ is irreducible. \end{proposition}
\begin{proof} Let $\{e_{x,y}\}_{x,y \in \Orb(x_0)}$ be the matrix units of $B(H_{x_0})$. Namely $e_{x,y} \in B(H_{x_0})$ satisfies \[ e_{x,y}\delta_z = \begin{cases} \delta_x& \text{if $z=y$,}\\ 0 & \text{otherwise} \end{cases} \] for $z \in \Orb(x_0)$. For all $x\in \Orb(x_0)$, it is standard to see that $e_{x,x}$ is in the weak closure of $\pi_{(x_0,\gamma_0)}(t^0(C_0(X)))=t^0_{(x_0,\gamma_0)}(C_0(X)) \subset B(H_{x_0})$. Take $x \in \Orb(x_0)$ with $x \in U$. Take $\xi \in C_0(U)$ with $\xi(x)=\gamma_0^{-1}$. Then we have $e_{x,x}t_{(x_0,\gamma_0)}^1(\xi)=e_{x,\sigma(x)}$. Hence $e_{x,\sigma(x)}$ is in the weak closure of $\pi_{(x_0,\gamma_0)}(C^*(\Sigma))$ for all $x\in \Orb(x_0)$ with $x \in U$. Take $x,y \in \Orb(x_0)$. Then there exist $n,m \in \mathbb{N}$ with $\sigma^n(x)=\sigma^m(y)$. We have that \begin{align*} e_{x,y} &= e_{x,\sigma^n(x)}(e_{y,\sigma^m(y)})^* \\ &= e_{x,x}e_{x,\sigma(x)}e_{\sigma(x),\sigma^2(x)}\ldots e_{\sigma^{n-1}(x),\sigma^n(x)} \big(e_{y,y}e_{y,\sigma(y)}e_{\sigma(y),\sigma^2(y)}\ldots e_{\sigma^{m-1}(y),\sigma^m(y)}\big)^* \end{align*} is in the weak closure of $\pi_{(x_0,\gamma_0)}(C^*(\Sigma))$. Hence the weak closure of $\pi_{(x_0,\gamma_0)}(C^*(\Sigma))$. is whole $B(H_{x_0})$. \end{proof}
\begin{corollary} For $(x_0,\gamma_0)\in X\times\mathbb{T}$, the ideal $P_{(x_0,\gamma_0)}$ is primitive. \end{corollary}
\begin{lemma}\label{inv0} For $x\in U$ and $\gamma\in\mathbb{T}$, we have $P_{(x,\gamma)}=P_{(\sigma(x),\gamma)}$. \end{lemma}
\begin{proof} Since $\Orb(x)=\Orb(\sigma(x))$, we have $\pi_{(x,\gamma)}=\pi_{(\sigma(x),\gamma)}$. Hence $P_{(x,\gamma)}=P_{(\sigma(x),\gamma)}$. \end{proof}
\begin{definition} Let $x\in X$. If there exist $k,n\in\mathbb{N}$ with $n \geq 1$ such that $\sigma^{k+n}(x)=\sigma^{k}(x)$, then we say that $x$ is {\em periodic}, and define its {\em period} $p(x)$ to be the smallest positive integer $n$ satisfying $\sigma^{k+n}(x)=\sigma^{k}(x)$ for some $k$. We also denote $l(x)$ be the smallest natural number $k$ satisfying $\sigma^{k+p(x)}(x)=\sigma^{k}(x)$. A point $x$ which is not periodic is said to be {\em aperiodic}. We set $p(x)=l(x)=\infty$ for an aperiodic point $x$. \end{definition}
It is fairly easy to see, but worth remarking, that we have $\sigma^k(x)=\sigma^l(x)$ for $k,l\in \mathbb{N}$ with $k>l$ if and only if $l\geq l(x)$ and $k-l\in p(x)\mathbb{N}$.
\begin{lemma}\label{aper} For an aperiodic point $x_0\in X$, we have $P_{(x_0,\gamma)}=P_{(x_0,1)}$ for all $\gamma\in\mathbb{T}$. \end{lemma}
\begin{proof} If $x_0\in X$ is aperiodic, we can define a map $c\colon \Orb(x_0)\to \mathbb{Z}$ such that $c(x_0)=0$ and $c(\sigma(x))=c(x)-1$ for $x\in U$. We define a unitary $u_\gamma\in B(H_{x_0})$ by $u_\gamma\delta_x=\gamma^{c(x)}\delta_x$ for $x\in \Orb(x_0)$. It is not difficult to check that \begin{align*} u_\gamma t^0_{(x_0,1)}(f)u_\gamma^* &=t^0_{(x_0,1)}(f)=t^0_{(x_0,\gamma)}(f),\\ u_\gamma t^1_{(x_0,1)}(\xi)u_\gamma^* &=\gamma t^1_{(x_0,1)}(\xi)=t^1_{(x_0,\gamma)}(\xi) \end{align*} for $f\in C_0(X)$ and $\xi\in C_c(U)$. Hence two representation $\pi_{(x_0,\gamma)}$ and $\pi_{(x_0,1)}$ are unitarily equivalent. This shows $P_{(x_0,\gamma)}=P_{(x_0,1)}$. \end{proof}
We denote the elements of $\mathbb{Z}/n\mathbb{Z}$ by $\{0,1,\ldots,n-1\}$, and sometimes consider them as elements in $\mathbb{Z}$.
\begin{lemma}\label{per1} For a periodic point $x_0\in X$ with period $n$, we have $P_{(x_0,\gamma)}=P_{(x_0,\mu)}$ if $\gamma^n=\mu^n$. \end{lemma}
\begin{proof} Similarly as in the proof of Lemma \ref{aper}, we can define a map $c\colon \Orb(x_0)\to \mathbb{Z}/n\mathbb{Z}$ such that $c(x_0)=0$ and $c(\sigma(x))=c(x)-1$ for $x\in U$. Set $\lambda :=\gamma\overline{\mu}\in\mathbb{T}$. We have $\lambda^n=1$. Hence we can define $u_\lambda\in B(H_{x_0})$ by $u_\lambda\delta_x=\lambda^{c(x)}\delta_x$ for $x\in \Orb(x_0)$. Similarly as in the proof of Lemma \ref{aper}, two representations $\pi_{(x_0,\gamma)}$ and $\pi_{(x_0,\mu)}$ are unitarily equivalent by the unitary $u_\lambda$. Hence we obtain $P_{(x_0,\gamma)}=P_{(x_0,\mu)}$. \end{proof}
\begin{definition} For an integer $n\geq 2$, we denote by $\zeta_n=e^{2\pi i/n}$ the $n$-th root of unity. \end{definition}
\begin{lemma}\label{Lem:gnmn} Let $x_0\in X$ be a periodic point with period $n$. If $x_0$ is isolated in $\Orb(x_0)$, then for $\gamma,\mu\in\mathbb{T}$, $P_{(x_0,\gamma)}=P_{(x_0,\mu)}$ if and only if $\gamma^n=\mu^n$. \end{lemma}
\begin{proof} Let $\Lambda=\{\sigma^{l(x_0)}(x_0),\ldots,\sigma^{l(x_0)+n-1}(x_0)\}$. Since $\sigma$ is a local homeomorphism, if $x_0$ is isolated in $\Orb(x_0)$ we can find an open subset $V$ of $X$ such that $V\cap \Orb(x_0)=\Lambda$. Hence there exists $\xi\in C_c(U)$ such that $\xi(x)=1$ for $x\in\Lambda$ and $\xi(x)=0$ for $x\in\Orb(x_0)\setminus \Lambda$. For each $\gamma\in\mathbb{T}$, non-zero elements of the spectrum of $t^1_{(x_0,\gamma)}(\xi)\in B(H_{x_0})$ are $\gamma,\gamma\zeta_n,\ldots,\gamma\zeta_n^{n-1}$. Hence those of the image of $t^1(\xi)$ via the natural surjection $C^*(\Sigma)\to C^*(\Sigma)/P_{(x_0,\gamma)}$ are also $\gamma,\gamma\zeta_n,\ldots,\gamma\zeta_n^{n-1}$. This shows that if $P_{(x_0,\gamma)}=P_{(x_0,\mu)}$ then $\gamma^n=\mu^n$. The converse follows from Lemma \ref{per1}. \end{proof}
\begin{remark} We will see in Proposition~\ref{Prop:notiso} that for a periodic point $x_0\in X$ such that $x_0$ is not isolated in $\Orb(x_0)$, we have $P_{(x_0,\gamma)}=P_{(x_0,1)}$ for all $\gamma\in\mathbb{T}$. \end{remark}
\begin{remark} We will see in Remark~\ref{Rem:2ndctbl} that if $X$ is second countable $P_{(x,\gamma)}$'s are whole primitive ideal. (See \cite{SW} for the case $U=X$.) In general, there is a primitive ideal which is not in the form $P_{(x,\gamma)}$ (see \cite[Example~13.2]{K3}). \end{remark}
It is probably impossible to list all primitive ideals in terms of elements of $X$ and $\mathbb{T}$. However it is possible to list all prime ideals in terms of subsets of $X$ and $\mathbb{T}$. From next section, we will do this. This will show that the set $\{P_{(x,\gamma)}\}$ is sufficiently large (Corollary~\ref{Cor:inter}).
\section{gauge invariant ideals and $\sigma$-invariant sets}\label{Sec:gii}
\begin{definition} A subset $X'$ of $X$ is said to be {\em $\sigma$-invariant} when $x\in X'$ if and only if $\sigma(x)\in X'$ for all $x\in U$. \end{definition}
An ideal $I$ of $C^*(\Sigma)$ is said to be {\em gauge-invariant} if $\beta_z(I)=I$ for all $z \in \mathbb{T}$ where $\beta$ is the gauge action defined in Appendix~\ref{Sec:UT}. We will see in Proposition~\ref{Prop:bij} that the set of gauge-invariant ideals corresponds bijectively to the set of closed $\sigma$-invariant subsets of $X$.
\begin{definition} For an ideal $I$ of $C^*(\Sigma)$, we define a closed set $X_I$ of $X$ by $t^0(C_0(X\setminus X_I))=t^0(C_0(X))\cap I$. \end{definition}
The following lemma is easy to see from the definition.
\begin{lemma}\label{Lem:Xeasy} For two ideals $I_1,I_2$ of $C^*(\Sigma)$, we have $X_{I_1\cap I_2}=X_{I_1} \cup X_{I_2}$. If $I_1 \subset I_2$, then $X_{I_1} \supset X_{I_2}$. \end{lemma}
\begin{proposition} For an ideal $I$ of $C^*(\Sigma)$, the closed set $X_I$ is $\sigma$-invariant. \end{proposition}
\begin{proof} Take $x\in U$. Suppose $x \notin X_I$. Choose an open subset $V$ such that $x \in V \subset U$, $V \cap X_I=\emptyset$ and $\sigma$ is injective on $V$. Take $\xi \in C_c(V)$ with $\xi(x)=1$. Then $\xi\overline{\xi} \in C_0(X \setminus X_I)$. Hence $t^1(\xi)t^1(\xi)^*=t^0(\xi\overline{\xi}) \in I$. This implies $t^0(\ip{\xi}{\xi})=t^1(\xi)^*t^1(\xi) \in I$. Hence $\ip{\xi}{\xi} \in C_0(X \setminus X_I)$.
Since $\ip{\xi}{\xi}(\sigma(x))=|\xi(x)|^2=1$, we have $\sigma(x) \notin X_I$. Now suppose $\sigma(x) \notin X_I$. Choose an open subset $V$ such that $x \in V \subset U$, $V \cap \sigma^{-1}(X_I)=\emptyset$ and $\sigma$ is injective on $V$. Take $\xi \in C_c(V)$ with $\xi(x)=1$. Then $\ip{\xi}{\xi} \in C_0(X \setminus X_I)$. Hence $t^1(\xi)^*t^1(\xi) = t^0(\ip{\xi}{\xi}) \in I$. This implies $t^0(\xi\overline{\xi})=t^1(\xi)t^1(\xi)^* \in I$. Hence $\xi\overline{\xi} \in C_0(X \setminus X_I)$.
Since $\xi\overline{\xi}(x)=|\xi(x)|^2=1$, we have $x \notin X_I$. Thus we have shown that $x\in X_I$ if and only if $\sigma(x)\in X_I$. \end{proof}
Take a closed $\sigma$-invariant subset $X'$ of $X$. We define an SGDS $\Sigma'=(X',\sigma')$ where $\sigma'$ is a restriction of $\sigma$ on $U':=X' \cap U$. Let $(t'^0,t'^1)$ be the universal pair for $C^*(\Sigma')$. We have the following.
\begin{lemma}\label{Lem:StoS'} There exists a surjection $\varPhi\colon C^*(\Sigma) \to C^*(\Sigma')$
such that $\varPhi(t^0(f))=t'^0(f|_{X'})$
and $\varPhi(t^1(\xi))=t'^1(\xi|_{U'})$. \end{lemma}
\begin{proof} It suffices to see that the pair of maps
$C_0(X)\ni f \mapsto t'^0(f|_{X'}) \in C^*(\Sigma')$
and $C_c(U)\ni \xi \mapsto t'^1(\xi|_{U'}) \in C^*(\Sigma')$ satisfy (i) and (ii) in Definition~\ref{Def:C*S} for $\Sigma$. This follows from the fact that $t'^0$ and $t'^1$ satisfy (i) and (ii) in Definition~\ref{Def:C*S} for $\Sigma'$. \end{proof}
\begin{definition} For a closed $\sigma$-invariant subset $X'$ of $X$, let $I_{X'}$ be the kernel of the surjection $\varPhi\colon C^*(\Sigma) \to C^*(\Sigma')$ in Lemma~\ref{Lem:StoS'}. \end{definition}
\begin{proposition}\label{Prop:XIX} For a closed $\sigma$-invariant subset $X'$ of $X$, we have $X_{I_{X'}}=X'$. \end{proposition}
\begin{proof} This follows from the fact that the kernel of $\varPhi \circ t^0$ in Lemma~\ref{Lem:StoS'} is $C_0(X\setminus X')$. \end{proof}
\begin{lemma}\label{Lem:IXinv} For a closed $\sigma$-invariant subset $X'$ of $X$, the ideal $I_{X'}$ is gauge-invariant. \end{lemma}
\begin{proof} This follows from the fact that the surjection $\varPhi\colon C^*(\Sigma) \to C^*(\Sigma')$ in Lemma~\ref{Lem:StoS'} commutes with the gauge actions. \end{proof}
\begin{lemma}\label{Lem:IXICI} For an ideal $I$ of $C^*(\Sigma)$, we have $I_{X_I}\subset I$. \end{lemma}
\begin{proof} Takes an deal $I$ of $C^*(\Sigma)$, and denote the natural surjection by $\pi\colon C^*(\Sigma) \to C^*(\Sigma)/I$. We set $\Sigma_I:=(X_I,\sigma_I)$ where $\sigma_I$ is the restriction of $\sigma$ to $U_I:=U\cap X_I$. By the definition of $X_I$, we can define a $*$-ho\-mo\-mor\-phism $t'^0\colon C_0(X_I) \to C^*(\Sigma)/I$
such that $t'^0(f|_{X_I})=\pi(t^0(f))$ for all $f \in C_0(X)$. We can also define a linear map $t'^1\colon C_c(U_I) \to C^*(\Sigma)/I$
such that $t'^1(\xi|_{U_I})=\pi(t^1(\xi))$ for all $\xi \in C_c(U)$
because for $\eta \in C_c(U)$ with $\eta|_{U_I}=0$ we have $\ip{\eta}{\eta} \in C_0(X \setminus X_I)$ and hence $t^1(\eta)\in I$. The pair $(t'^0,t'^1)$ satisfies (i) and (ii) in Definition~\ref{Def:C*S} for $\Sigma_I$. Hence we get a $*$-ho\-mo\-mor\-phism $\varPsi\colon C^*(\Sigma_I) \to C^*(\Sigma)/I$ with $\pi=\varPsi\circ \varPhi_I$ where $\varPhi_I\colon C^*(\Sigma) \to C^*(\Sigma_I)$ is the surjection whose kernel is $I_{X_I}$. Hence we get $I_{X_I} \subset I$. \end{proof}
\begin{proposition}\label{Prop:IXI} For an ideal $I$ of $C^*(\Sigma)$, we have $I_{X_I} = I$ if and only if $I$ is gauge-invariant. \end{proposition}
\begin{proof} The ``only if'' part follows from Lemma~\ref{Lem:IXinv}. Let $I$ be a gauge-invariant ideal $I$ of $C^*(\Sigma)$. Then we can define an action of $\mathbb{T}$ on $C^*(\Sigma)/I$ so that the natural surjection $\pi\colon C^*(\Sigma) \to C^*(\Sigma)/I$ becomes equivariant. Then the map $\varPsi\colon C^*(\Sigma_I) \to C^*(\Sigma)/I$ in the proof of Lemma~\ref{Lem:IXICI} becomes also equivariant. The map $\varPsi$ is injective on $t^0(C_0(X_I))$ by the definition of $X_I$. Hence by Proposition~\ref{Prop:GIUT} $\varPsi$ is injective. Therefore we have $I_{X_I} = I$. \end{proof}
\begin{proposition}\label{Prop:bij} Through the maps $I\mapsto X_I$ and $X'\mapsto I_{X'}$, the set of gauge-invariant ideals corresponds bijectively to the set of closed $\sigma$-invariant subsets of $X$. \end{proposition}
\begin{proof} This follows from Proposition~\ref{Prop:XIX} and Proposition~\ref{Prop:IXI}. \end{proof}
Now we have the following.
\begin{lemma}\label{Lem:Ieasy} For closed $\sigma$-invariant subsets $X_1$ and $X_2$ of $X$, we have $I_{X_1\cup X_2} = I_{X_1}\cap I_{X_2}$. If $X_1 \subset X_2$, then $I_{X_1}\supset I_{X_2}$. \end{lemma}
\begin{proof} By Lemma~\ref{Lem:Xeasy} and Proposition~\ref{Prop:XIX}, we have \begin{align*} X_{I_{X_1}\cap I_{X_2}}=X_{I_{X_1}}\cup X_{I_{X_2}}=X_1\cup X_2. \end{align*} Since $I_{X_1}\cap I_{X_2}$ is gauge-invariant, we have \[ I_{X_1}\cap I_{X_2}=I_{X_{I_{X_1}\cap I_{X_2}}}=I_{X_1\cup X_2} \] by Proposition~\ref{Prop:IXI}. If $X_1 \subset X_2$, then $X_2 = X_1\cup X_2$. Hence we have $I_{X_2}=I_{X_1\cup X_2}=I_{X_1}\cap I_{X_2}$. This shows $I_{X_1}\supset I_{X_2}$. \end{proof}
\begin{definition} For a closed $\sigma$-invariant set $X'$, we say $X'$ is essentially free
if the SGDS $\Sigma'=(X',\sigma|_{U\cap X'})$ is essentially free as defined in Definition~\ref{Def:esfree}. \end{definition}
\begin{proposition}\label{Prop:IXI2} For an ideal $I$ of $C^*(\Sigma)$, if $X_I$ is essentially free then we have $I_{X_I} = I$. \end{proposition}
\begin{proof} The proof goes similarly as in Proposition~\ref{Prop:IXI} using Proposition~\ref{Prop:CKUT} instead of Proposition~\ref{Prop:GIUT}. \end{proof}
\section{Prime ideals}\label{Sec:prime}
An ideal $I$ of a $C^*$-al\-ge\-bra $A$ is said to be {\em prime} if $I \neq A$ and for two ideals $I_1,I_2$ of a $C^*$-al\-ge\-bra $A$ with $I_1 \cap I_2 \subset I$ we have either $I_1 \subset I$ or $I_2 \subset I$. A primitive ideal is prime \cite[II.6.1.11]{B}. The converse is true when $X$ is second countable (cf.\ \cite[II.6.5.15]{B}), but not true in general (see \cite[Example~13.3]{K3} and \cite[Theorem~5.4]{Ksh}).
\begin{definition} A closed $\sigma$-invariant set $X$ is said to be {\em irreducible} if it is non-empty and we have either $X=X_1$ or $X=X_2$ for two closed $\sigma$-invariant sets $X_1,X_2$ satisfying $X=X_1 \cup X_2$. \end{definition}
\begin{proposition}\label{Prop:prir} If an ideal $I$ of $C^*(\Sigma)$ is prime, then $X_I$ is irreducible. \end{proposition}
\begin{proof} If $X_I = \emptyset$, then $I=A$ which contradicts that $I$ is prime. Hence $X_I \neq \emptyset$. Take two closed $\sigma$-invariant sets $X_1,X_2$ satisfying $X_I=X_1 \cup X_2$. Then \[ I_{X_1}\cap I_{X_2} = I_{X_1\cup X_2}=I_{X_I}\subset I \] by Lemma~\ref{Lem:Ieasy} and Lemma~\ref{Lem:IXICI}. Since $I$ is prime, we have either $I_{X_1} \subset I$ or $I_{X_2} \subset I$. When $I_{X_1} \subset I$, we have $X_I \subset X_{I_{X_1}} = X_1 \subset X_I$ by Lemma~\ref{Lem:Xeasy} and Proposition~\ref{Prop:XIX}, and hence $X_1=X_I$. Similarly we have $X_2=X_I$ when $I_{X_2} \subset I$. Thus $X_I$ is irreducible. \end{proof}
\begin{proposition}\label{Prop:XP} For $(x,\gamma)\in X\times\mathbb{T}$, we have $X_{P_{(x,\gamma)}}=\overline{\Orb(x)}$. \end{proposition}
\begin{proof} This follows from $\ker t^0_{(x,\gamma)} = C_0(X\setminus \overline{\Orb(x)})$. \end{proof}
\begin{proposition} For $x \in X$, $\overline{\Orb(x)}$ is an irreducible closed $\sigma$-invariant set. \end{proposition}
\begin{proof} This follows from Proposition~\ref{Prop:prir} and Proposition~\ref{Prop:XP}. One can also show this directly as follows. Since $\Orb(x)$ is $\sigma$-invariant and since $\sigma$ is locally homeomorphic, $\overline{\Orb(x)}$ is closed and $\sigma$-invariant. Take two closed $\sigma$-invariant sets $X_1,X_2$ satisfying $\overline{\Orb(x)}=X_1 \cup X_2$. Then either $x \in X_1$ or $x \in X_2$. When $x \in X_1$ we have $X_1=\overline{\Orb(x)}$, whereas when $x \in X_2$ we have $X_2=\overline{\Orb(x)}$. Thus $\overline{\Orb(x)}$ is irreducible. \end{proof}
\begin{proposition} Let $X'$ be an irreducible closed $\sigma$-invariant set. If $X'$ is essentially free, then $I_{X'}$ is prime. \end{proposition}
\begin{proof} Since $X' \neq \emptyset$, we have $I_{X'} \neq A$. Take ideals $I_1,I_2$ of $C^*(\Sigma)$ such that $I_1\cap I_2 \subset I_{X'}$. Then we have \[ X_{I_1+I_{X'}} \cup X_{I_2+I_{X'}} = X_{(I_1+I_{X'})\cap (I_2+I_{X'})} = X_{I_{X'}}=X' \] by Lemma~\ref{Lem:Xeasy} and Proposition~\ref{Prop:XIX}. Since $X'$ is irreducible, either we have $X_{I_1+I_{X'}}=X'$ or $X_{I_2+I_{X'}}=X'$. When $X_{I_1+I_{X'}}=X'$ we have $I_1+I_{X'}=I_{X'}$ by Proposition~\ref{Prop:IXI2} because $X'$ is essentially free. This shows $I_1 \subset I_{X'}$. Similarly, when $X_{I_2+I_{X'}}=X'$ we have $I_2 \subset I_{X'}$. Thus $I_{X'}$ is prime. \end{proof}
From now on, we investigate which irreducible closed $\sigma$-invariant set $X'$ becomes essentially free.
\begin{definition} We set \[ \Per(\Sigma) := \{x \in X\mid l(x)=0, \ \text{$x$ is isolated in $\Orb(x)$}\}. \] \end{definition}
\begin{definition} We denote by ${\mathcal A}(\Sigma)$ the set of all irreducible closed $\sigma$-invariant set which is not in the form $\overline{\Orb(x)}$ for $x \in \Per(\Sigma)$. \end{definition}
\begin{lemma}\label{Lem:notef} For $x \in \Per(\Sigma)$, an irreducible closed $\sigma$-invariant $\overline{\Orb(x)}$ is not essentially free. \end{lemma}
\begin{proof}
In the SGDS $\Sigma'=(X',\sigma|_{U\cap X'})$ where $X'=\overline{\Orb(x)}$, $\{x\}$ is an open subset and $l(x)=0$. Hence $\Sigma'$ is not essentially free. \end{proof}
\begin{proposition}[{cf.\ \cite[Proposition~11.3]{K3}}]\label{Prop:efA} For an irreducible closed $\sigma$-invariant set $X'$, $X'$ is essentially free if and only if $X' \in {\mathcal A}(\Sigma)$ \end{proposition}
\begin{proof} Take an irreducible closed $\sigma$-invariant set $X'$. By Lemma~\ref{Lem:notef}, if $X'$ is essentially free then $X' \in {\mathcal A}(\Sigma)$. Suppose $X'$ is not essentially free. Then there exists a non-empty open subset $V'$ of $X'$ such that $l(x)=0$ for all $x \in V'$. By Baire's category theorem (see, for example, \cite[Proposition 2.2]{T1}), there exist a non-empty open subset $V \subset V'$ of $X'$ and $n \in \mathbb{N}$ such that $p(x)=n$ for all $x \in V$. Take $x \in V$ arbitrarily. We will show that $V=\{x,\sigma(x),\ldots,\sigma^{n-1}(x)\}$. To the contrary, suppose there exists $y \in V\setminus \{x,\sigma(x),\ldots,\sigma^{n-1}(x)\}$. Choose open subsets $W_1,W_2$ of $V$ such that $x \in W_1$, $y \in W_2$ and $W_2\cap \bigcup_{k=0}^{n-1}\sigma^k(W_1)=\emptyset$. Then $\bigcup_{k=0}^{\infty}(\sigma^{k})^{-1}(W_1)$ is an open $\sigma$-invariant subset of $X'$ because for $l=1,2,\ldots$ we have $\sigma^l(W_1)\subset (\sigma^{k})^{-1}(W_1)$ for $k \in \mathbb{N}$ with $k+l \in n\mathbb{N}$. Hence $X_1:=X'\setminus \bigcup_{k=0}^{\infty}(\sigma^{k})^{-1}(W_1)$ is a closed $\sigma$-invariant subset of $X'$ with $X_1 \neq X'$. Similarly, $X_2:=X'\setminus \bigcup_{k=0}^{\infty}(\sigma^{k})^{-1}(W_2)$ is a closed $\sigma$-invariant subset of $X'$ with $X_2 \neq X'$. Since $X'$ is irreducible, we have $X_1 \cup X_2 \neq X'$. Hence we have $(\sigma^{k_0})^{-1}(W_1) \cap (\sigma^{l_0})^{-1}(W_2) \neq \emptyset$ for some $k_0,l_0 \in \mathbb{N}$. Thus we get $z \in X'$ such that $\sigma^{k_0}(z)\in W_1$ and $\sigma^{l_0}(z) \in W_2$. Then for $m \in \mathbb{N}$ with $l_0+mn\geq k_0$, $\sigma^{l_0+mn}(z)$ is in $W_2\cap \bigcup_{k=0}^{n-1}\sigma^k(W_1)$ which is empty. This is a contradiction. Hence we have shown $V=\{x,\sigma(x),\ldots,\sigma^{n-1}(x)\}$. Since $V$ is open in $X'$, $\{x\}$ is open in $X'$. Hence $x$ is isolated in $\Orb(x) \subset X'$. This shows $x \in \Per(\Sigma)$. Finally we show that $X' = \overline{\Orb(x)}$. To do so, take $y \in X'$. Take a neighborhood $W$ of $y$ arbitrarily. Then \[ X_1 := X'\setminus \bigcup_{k=0}^{\infty}(\sigma^{k})^{-1} \Big(\bigcup_{l=0}^{\infty}\sigma^{l}(W)\Big) \] is a closed $\sigma$-invariant subset of $X'$ with $X_1 \neq X'$. Since $\{x\}$ is open in $X'$, $X_2=X'\setminus \Orb(x)$ is also a closed $\sigma$-invariant subset of $X'$ with $X_2 \neq X'$. Since $X'$ is irreducible, we have $X_1 \cup X_2 \neq X'$. Hence we have \[ \Orb(x) \cap \bigcup_{k=0}^{\infty}(\sigma^{k})^{-1} \Big(\bigcup_{l=0}^{\infty}\sigma^{l}(W)\Big) \neq \emptyset. \] Therefore there exists $x' \in \Orb(x)$ and $k_0,l_0\in \mathbb{N}$ such that $\sigma^{k_0}(x') \in \sigma^{l_0}(W)$. Hence $W \cap \Orb(x) \neq \emptyset$. Since $W$ is arbitrary, we have $y \in \overline{\Orb(x)}$. Hence $X'=\overline{\Orb(x)}$. This shows $X' \notin {\mathcal A}(\Sigma)$. \end{proof}
\begin{proposition}\label{Prop:notiso} For a periodic point $x_0\in X$ such that $x_0$ is not isolated in $\Orb(x_0)$, we have $P_{(x_0,\gamma)}=P_{(x_0,1)}$ for all $\gamma\in\mathbb{T}$. \end{proposition}
\begin{proof} If $x_0$ is not isolated in $\Orb(x_0)$, then $x$ is not isolated in $\Orb(x_0)$ for all $x \in \Orb(x_0)$ because $\sigma$ is locally homeomorphic. Hence $X':=\overline{\Orb(x_0)}$ has no isolated point. Thus we have $X' \in {\mathcal A}(\Sigma)$. By Proposition~\ref{Prop:efA}, $X'$ is essentially free. By Proposition~\ref{Prop:IXI2} and Proposition~\ref{Prop:XP}, we have $P_{(x_0,\gamma)}=I_{X'}=P_{(x_0,1)}$ for all $\gamma\in\mathbb{T}$. \end{proof}
Take $x_0 \in \Per(\Sigma)$. We set $n_0 := p(x_0)$. We are going to show that prime ideals $I$ with $X_I=\overline{\Orb(x_0)}$ is parametrized as $P_{(x_0,e^{2\pi it})}$ for $0\leq t <1/n_0$. Let us set $X':=\overline{\Orb(x_0)}$ and $X'' := \overline{\Orb(x_0)}\setminus \Orb(x_0)$ which is a closed $\sigma$-invariant subset of $X'$.
\begin{lemma}[{cf.\ \cite[Lemma~11.10]{K3}}]\label{Lem:XI=O} For an ideal $I$ of $C^*(\Sigma)$, we have $X_I=X'$ if and only if $I_{X'} \subset I$ and $I_{X''} \not\subset I$. \end{lemma}
\begin{proof} Take an ideal $I$ of $C^*(\Sigma)$ with $X_I=X'$. Then $I_{X'} =I_{X_I}\subset I$ by Lemma~\ref{Lem:IXICI}. If $I_{X''} \subset I$ then we have $X_I \subset X_{I_{X''}} =X''$ by Proposition~\ref{Prop:XIX}. This contradicts $X_I=X'$. Thus we get $I_{X''} \not\subset I$.
Conversely, take an ideal $I$ of $C^*(\Sigma)$ with $I_{X'} \subset I$ and $I_{X''} \not\subset I$. By Lemma~\ref{Lem:Xeasy} and Proposition~\ref{Prop:XIX}, we have $X_I \subset X_{I_{X'}}=X'$. Suppose $x_0 \notin X_I$. Then $X_I \subset X''$ since $X_I$ is $\sigma$-invariant. This shows $I_{X''}\subset I_{X_I} \subset I$ by Lemma~\ref{Lem:Ieasy} and Lemma~\ref{Lem:IXICI}. This contradicts $I_{X''} \not\subset I$. Hence we have $x_0 \in X_I$. This shows $X' \subset X_I$. Therefore we get $X_I=X'$. \end{proof}
Let us set $\Sigma' = (X',\sigma|_{U\cap X'})$. Then we have $C^*(\Sigma')=C^*(\Sigma)/I_{X'}$ by the definition of $I_{X'}$. Let $(t^0,t^1)$ be the universal pair for $C^*(\Sigma')$, and define $t^n$ as in Section~\ref{Sec:C*S}. Let us define $f_0 \in C_c(U_{n_0})$ by \[ f_0(x)=\begin{cases}1 & x=x_0\\ 0 & x \neq x_0 \end{cases}. \] We set $p_0=t^0(f_0)\in C^*(\Sigma')$ and $u_0=t^{n_0}(f_0)\in C^*(\Sigma')$. Note that we have $u_0^*u_0=u_0u_0^*=p_0$.
\begin{lemma}[{cf.\ \cite[Lemma~11.12]{K3}}]\label{FullHered} The corner $p_0 C^*(\Sigma') p_0$ is the $C^*$-subalgebra $C^*(u_0)$ generated by $u_0$. \end{lemma}
\begin{proof} A corner $p_0 C^*(\Sigma') p_0$ is a closure of a linear span of the set \[ \big\{p_0t^n(\xi)t^m(\eta)^*p_0\mid n,m\in\mathbb{N}, \xi\in C_c(U_n), \eta\in C_c(U_m)\}. \] by Lemma~\ref{Lem:cspa}. For $n,m\in\mathbb{N}, \xi\in C_c(U_n), \eta\in C_c(U_m)$, the element $p_0t^n(\xi)t^m(\eta)^*p_0$ is $\alpha t^n(f_0)t^m(f_0)^*$ for some $\alpha \in \mathbb{C}$. We also have \[ t^n(f_0)t^m(f_0)^* =\begin{cases} p_0 & \text{if $n=m$}\\ u_0^k & \text{if $n-m = n_0k$ for some $k=1,2,\ldots$}\\ (u_0^k)^* & \text{if $m-n = n_0k$ for some $k=1,2,\ldots$}\\ 0 & \text{otherwise}. \end{cases} \] Hence the corner $p_0 C^*(\Sigma') p_0$ is $C^*(u_0)$. \end{proof}
\begin{lemma} The corner $p_0 C^*(\Sigma') p_0=C^*(u_0)$ is full in the ideal $I_{X''}/I_{X'} \subset C^*(\Sigma)/I_{X'} = C^*(\Sigma')$. \end{lemma}
\begin{proof} Let $I$ be the ideal of $C^*(\Sigma')$ generated by $p_0$. Then $I$ is gauge-invariant. Consider $X_I \subset X'$. Since $X_I$ does not contain $x_0$. we have $X_I \subset X''$. Therefore $I_{X''} \subset I_{X_I}=I$ by Lemma~\ref{Lem:Ieasy} and Proposition~\ref{Prop:IXI}. Since $p_0 \in I_{X''}$, we have $I \subset I_{X''}$. Hence we get $I =I_{X''}$.
The ideal $I_{X''}$ of $C^*(\Sigma')$ is equal to $I_{X''}/I_{X'} \subset C^*(\Sigma)/I_{X'}$ via the identification $C^*(\Sigma)/I_{X'} = C^*(\Sigma')$. This completes the proof. \end{proof}
The following lemma should be known, but the author could not find it in the literature. A similar result on primitive ideals can be found, for example, in \cite[II.6.5.4]{B}
\begin{lemma}\label{Lem:primeO} Let $A$ be a $C^*$-al\-ge\-bra , and $I',I''$ be ideals of $A$ with $I' \subset I''$. Then $I\mapsto (I\cap I'')/I'$ is a bijection from the set of prime ideals $I$ of $A$ with $I' \subset I$ and $I'' \not\subset I$ to the set of prime ideals of $I''/I'$. \end{lemma}
\begin{proof} We first show that $I \mapsto I\cap I''$ is a bijection from the set of prime ideals $I$ of $A$ with $I'' \not\subset I$ to the set of prime ideals of $I''$. Take a prime ideal $I$ of $A$ with $I'' \not\subset I$. We show that $I\cap I''$ is a prime ideal of $I''$ Since $I'' \not\subset I$, we have $I\cap I'' \neq I''$. Take ideals $I_1,I_2$ of $I''$ with $I_1 \cap I_2 \subset I\cap I''$. Then $I_1,I_2$ are ideals of $A$ with $I_1 \cap I_2 \subset I$. Since $I$ is prime, either $I_1 \subset I$ or $I_2 \subset I$ holds. Thus either $I_1 \subset I\cap I''$ or $I_2 \subset I\cap I''$ holds. This shows $I\cap I''$ is a prime ideal of $I''$. Next take two prime ideals $I_1,I_2$ of $A$ with $I'' \not\subset I_1$ and $I'' \not\subset I_2$. Suppose $I_1\cap I'' = I_2\cap I''$ and we will show $I_1=I_2$. We have $I_1 \supset I_1\cap I'' = I_2\cap I''$. Since $I_1$ is prime and $I'' \not\subset I_1$, we get $I_2 \subset I_1$. Similarly we get $I_1 \subset I_2$. Thus $I_1=I_2$. This shows that the map $I \mapsto I\cap I''$ is injective. Now take a prime ideal $P$ of $I''$. Define $I \subset A$ by \[ I := \{a \in A\mid \text{$ab \in P$ for all $b \in I''$}\}. \] Then $I$ is an ideal of $A$ satisfying $I \cap I''=P$. We will show $I$ is prime. Take two ideals $I_1,I_2$ of $A$ with $I_1\cap I_2 \subset I$. Then two ideals $I_1 \cap I'', I_2 \cap I''$ of $I''$ satisfy $(I_1 \cap I'')\cap (I_2 \cap I'') \subset I \cap I''=P$. Since $P$ is a prime ideal of $I''$, we have either $I_1 \cap I'' \subset P$ or $I_2 \cap I'' \subset P$. When $I_1 \cap I'' \subset P$, we have $I_1 \subset I$ by the definition of $I$. Similarly we have $I_2 \subset I$ when $I_2 \cap I'' \subset P$. Thus we have shown that $I$ is prime. This shows that the map $I \mapsto I\cap I''$ is surjective. Therefore the map $I \mapsto I\cap I''$ is bijective. Hence the map $I \mapsto I\cap I''$ is a bijection from the set of prime ideals $I$ of $A$ with $I' \subset I$ and $I'' \not\subset I$ to the set of prime ideals $P$ of $I''$ with $I' \subset P$.
It is well-known and clear that $P \mapsto P/I'$ is a bijection from the set of prime ideals $P$ of $I''$ with $I' \subset P$ to the set of prime ideals of $I''/I'$. This completes the proof. \end{proof}
\begin{proposition} The prime ideals $I$ with $X_I=\overline{\Orb(x_0)}$ is parametrized as $P_{(x_0,e^{2\pi it})}$ for $0\leq t <1/n_0$. \end{proposition}
\begin{proof} Since $C^*(u_0)$ is hereditary and full in the ideal $I_{X''}/I_{X'}$, the map $I \mapsto I \cap C^*(u_0)$ induces the bijection from the set of all ideals of $I_{X''}/I_{X'}$ to the one of $C^*(u_0)$ preserving the intersections and inclusion. Hence it induces the bijection between the set of prime ideals. Combine this with Lemma~\ref{Lem:XI=O} and Lemma~\ref{Lem:primeO}, we see that $I \mapsto I \cap C^*(u_0)$ induces the bijection from the set of prime ideals $I$ with $X_I=\overline{\Orb(x_0)}$ to the set of all prime ideals of $C^*(u_0)$. The prime ideals of $C^*(u_0)$ is parametrized as $Q_z$ for $z \in \mathbb{T}$, where $Q_z$ is the ideal generated by $p_0-zu_0$. Since we have $\pi_{(x_0,\gamma)}(u_0)=\gamma^{n_0}\pi_{(x_0,\gamma)}(p_0)$, we get $P_{(x_0,\gamma)} \cap C^*(u_0)=Q_{\gamma^{-n_0}}$. Hence we see that $P_{(x_0,e^{2\pi it})}$ for $0\leq t <1/n_0$ is the parametrization of the prime ideals $I$ with $X_I=\overline{\Orb(x_0)}$. \end{proof}
Note that for $x_0\in \Per(\Sigma)$ with $n_0=p(x_0)$, we have $P_{(x,\gamma)}=P_{(x,\mu)}$ if and only if $\gamma^{n_0}=\mu^{n_0}$ by Lemma~\ref{Lem:gnmn}.
Combine the discussion above, we get the following.
\begin{theorem} The set of all prime ideals of $C^*(\Sigma)$ is \[ \{I_{X'}\mid X'\in {\mathcal A}(\Sigma)\}\cup \{P_{(x,e^{2\pi it})}\mid x\in \Per(\Sigma), 0\leq t <1/p(x)\}. \] \end{theorem}
\begin{remark}\label{Rem:2ndctbl} One can show that when $X$ is second countable for $X'\in {\mathcal A}(\Sigma)$ there exists $x \in X$ such that $X'=\overline{\Orb(x)}$ (cf.\ \cite[Proposition~4.14]{K3}). Therefore when $X$ is second countable, $\{P_{(x,\gamma)}\}$ are all prime (primitive) ideals of $C^*(\Sigma)$. \end{remark}
\begin{proposition} Every prime ideals of $C^*(\Sigma)$ is an intersection of primitive ideals in the form $P_{(x,\gamma)}$. \end{proposition}
\begin{proof} Take $X'\in {\mathcal A}(\Sigma)$. We set $I = \bigcap_{x \in X', \gamma \in \mathbb{T}}P_{(x,\gamma)}$. Then $I$ is gauge-invariant. For $x \in X'$ and $\gamma \in \mathbb{T}$, we have $X_{P_{(x,\gamma)}}=\overline{\Orb(x)}\subset X'$. Hence $I_{X'}\subset I_{X_{P_{(x,\gamma)}}} \subset P_{(x,\gamma)}$ by Lemma~\ref{Lem:Ieasy} and Lemma~\ref{Lem:IXICI}. Therefore we get $I_{X'}\subset I$. For $x \in X'$ and $\gamma \in \mathbb{T}$, $I \subset P_{(x,\gamma)}$ implies $X_I \supset X_{P_{(x,\gamma)}}=\overline{\Orb(x)}$. Hence we get $X_I \supset X'$. By Proposition~\ref{Prop:IXI} and Lemma~\ref{Lem:Ieasy}, we get $I=I_{X_I} \subset I_{X'}$. Thus we have $I_{X'} = I$. \end{proof}
\begin{corollary}\label{Cor:inter} Every primitive ideals of $C^*(\Sigma)$ is an intersection of primitive ideals in the form $P_{(x,\gamma)}$. \end{corollary}
\section{$Y_I$, $I_Y$ and $I_{Y_I}=I$}\label{Sec:IYI}
\begin{definition} For an ideal $I$ of $C^*(\Sigma)$, we define $Y_I\subset X\times\mathbb{T}$ by \[ Y_I=\big\{(x,\gamma)\in X\times\mathbb{T} \mid I\subset P_{(x,\gamma)}\big\}. \] \end{definition}
\begin{lemma}\label{cap} For two ideals $I_1,I_2$ of $C^*(\Sigma)$, we have $Y_{I_1\cap I_2}=Y_{I_1}\cup Y_{I_2}$. If $I_1\subset I_2$, we have $Y_{I_1}\supset Y_{I_2}$. \end{lemma}
\begin{proof} To show $Y_{I_1\cap I_2}=Y_{I_1}\cup Y_{I_2}$, it suffices to show for $(x,\gamma)\in X\times\mathbb{T}$, $I_1\cap I_2 \subset P_{(x,\gamma)}$ if and only if $I_1\subset P_{(x,\gamma)}$ or $I_2\subset P_{(x,\gamma)}$. This follows from the fact that $P_{(x,\gamma)}$ is prime. The latter assertion is clear by definition. \end{proof}
\begin{definition} For a subset $Y$ of $X\times\mathbb{T}$, we define an ideal $I_Y$ of $C^*(\Sigma)$ by \[ I_Y=\bigcap_{(x,\gamma)\in Y}P_{(x,\gamma)}. \] \end{definition}
\begin{lemma} For two subsets $Y_1,Y_2$ of $X\times\mathbb{T}$, we have $I_{Y_1\cup Y_2}=I_{Y_1}\cap I_{Y_2}$. If $Y_1\subset Y_2$, we have $I_{Y_1}\supset I_{Y_2}$. \end{lemma}
\begin{proof} Clear by definition. \end{proof}
\begin{proposition}\label{Prop:IYI} For an ideal $I$ of $C^*(\Sigma)$, we have $I_{Y_I}=I$. \end{proposition}
\begin{proof} It is known that $I$ is the intersection of all primitive ideals of $C^*(\Sigma)$ containing $I$ (\cite[II.6.5.3]{B}). By Corollary~\ref{Cor:inter}, all primitive ideals of $C^*(\Sigma)$ are the intersection of ideals in the form $P_{(x,\gamma)}$. We are done. \end{proof}
Thus $I\mapsto Y_I$ is an injective map from the set of ideals of $C^*(\Sigma)$ to the set of subsets of $X\times\mathbb{T}$. We will determine the image of this map in order to get the complete description of ideals of $C^*(\Sigma)$.
\begin{definition} Let $Y$ be a subset of $X\times\mathbb{T}$. For each $x\in X$, we set \[ Y_x=\{\gamma\in\mathbb{T} \mid (x,\gamma)\in Y\}\subset\mathbb{T}. \] \end{definition}
\begin{definition}\label{Def:adm} We say a subset $Y$ of $X\times\mathbb{T}$ is said to be {\em admissible} if \begin{enumerate} \rom \item $Y$ is a closed subset of $X\times\mathbb{T}$ with respect to the product topology, \item $Y_x=Y_{\sigma(x)}$ for all $x\in U$, \item $Y_{x_0}\neq \emptyset,\mathbb{T}$ implies that $x_0$ is periodic, $\zeta_{p(x_0)}Y_{x_0}=Y_{x_0}$, and there exists a neighborhood $V$ of $x_0$ such that all $x\in V$ with $l(x)\neq l(x_0)$ satisfies $Y_x=\emptyset$. \end{enumerate} \end{definition}
\section{$Y_I$ is admissible}\label{Sec:YIad}
We will show that $Y_I$ is admissible for all ideal $I$ of $C^*(\Sigma)$ (Proposition~\ref{Prop:YIinv}).
For a periodic point $x_0\in X$ with $p:=p(x_0)$ and a positive integer $n$, we define a map $\sigma_n\colon \Orb(x_0)\times\mathbb{Z}/n\mathbb{Z} \to \Orb(x_0)\times\mathbb{Z}/n\mathbb{Z}$ as follows.
We set $y_0=\sigma^{l(x_0)+p-1}(x_0)$. We define $d\colon \Orb(x_0)\to \{0,1\}$ by $d(x)=0$ for $x\in \Orb(x_0)\setminus \{y_0\}$ and $d(y_0)=1$, and a map $\sigma_n\colon \Orb(x_0)\times\mathbb{Z}/n\mathbb{Z} \to \Orb(x_0)\times\mathbb{Z}/n\mathbb{Z}$ by $\sigma_n(x,j)=(\sigma(x),j+d(x))$. Let $H_{x_0}^{(n)}$ be the Hilbert space whose complete orthonormal system is given by $\{\delta_{(x,j)}\}_{(x,j)\in \Orb(x_0)\times\mathbb{Z}/n\mathbb{Z}}$. In a similar way to define representations $\pi_{(x_0,\gamma_0)}$, we can define a representation $\pi_{(x_0,\gamma_0)}^{(n)}\colon C^*(\Sigma)\to B(H_{x_0}^{(n)})$ such that \begin{align*} \pi_{(x_0,\gamma_0)}^{(n)}(t^0(f))\delta_{(x,j)} &=f(x)\delta_{(x,j)}\\ \pi_{(x_0,\gamma_0)}^{(n)}(t^1(\xi))\delta_{(x,j)} &=\gamma_0\sum_{\sigma_n((y,j'))=(x,j)}\xi(y)\delta_{(y,j')} =\gamma_0\sum_{\sigma(y)=x}\xi(y)\delta_{(y,j-d(y))}. \end{align*} By construction, we have $\pi_{(x_0,\gamma_0)}^{(1)}=\pi_{(x_0,\gamma_0)}$. We will see that $\pi_{(x_0,\gamma_0)}^{(n)}$ is not irreducible for $n\geq 2$ in Lemma~\ref{kerpi1}. For $x\in \Orb(x_0)$, let $c(x)\in\mathbb{N}$ be the smallest natural number satisfying $\sigma^{c(x)}(x)=y_0$. We have $c(\sigma(x))=c(x)-1$ for $x\in \Orb(x_0)\setminus \{y_0\}$, and $c(\sigma(y_0))=p-1=c(y_0)+p-1$. Hence we get $d(y)p=c(\sigma(y))-c(y)+1$ for all $y\in \Orb(x_0)$.
\begin{lemma}\label{iotak} For $k=0,1,\ldots,n-1$, the map $\iota_k\colon H_{x_0}\to H_{x_0}^{(n)}$ defined by \[ \iota_k(\delta_x) =\frac{1}{\sqrt{n}}\sum_{j=0}^{n-1} \zeta_{np}^{(jp-c(x))k}\delta_{(x,j)}, \] is an isometry satisfying $\iota_k\circ \pi_{(x_0,\zeta_{np}^{k}\gamma_0)}(a) =\pi_{(x_0,\gamma_0)}^{(n)}(a)\circ \iota_k$ for $a\in C^*(\Sigma)$. \end{lemma}
\begin{proof} Clearly $\iota_k$ is isometric. For every $f\in C_0(X)$ and $x\in\Orb(x_0)$, we have \begin{align*} \iota_k\big(\pi_{(x_0,\zeta_{np}^{k}\gamma_0)}(t^0(f))\delta_x\big) &=\iota_k (f(x)\delta_x) =f(x)\frac{1}{\sqrt{n}} \sum_{j=0}^{n-1}\zeta_{np}^{(jp-c(x))k}\delta_{(x,j)}, \end{align*} and \begin{align*} \pi_{(x_0,\gamma_0)}^{(n)}(t^0(f))\big(\iota_k(\delta_x)\big) &=\pi_{(x_0,\gamma_0)}^{(n)}(t^0(f))\bigg( \frac{1}{\sqrt{n}}\sum_{j=0}^{n-1} \zeta_{np}^{(jp-c(x))k}\delta_{(x,j)}\bigg)\\ &=\frac{1}{\sqrt{n}}\sum_{j=0}^{n-1} \zeta_{np}^{(jp-c(x))k}f(x)\delta_{(x,j)}. \end{align*} Hence $\iota_k\circ \pi_{(x_0,\zeta_{np}^{k}\gamma_0)}(a) =\pi_{(x_0,\gamma_0)}^{(n)}(a)\circ \iota_k$ for $a\in t^0(C_0(X))$. For every $\xi\in C_c(U)$ and $x\in\Orb(x_0)$, we have \begin{align*} \iota_k\big(\pi_{(x_0,\zeta_{np}^{k}\gamma_0)}(t^1(\xi))\delta_x\big) &=\iota_k \bigg(\zeta_{np}^{k}\gamma_0\sum_{\sigma(y)=x}\xi(y)\delta_y\bigg)\\ &=\zeta_{np}^{k}\gamma_0\sum_{\sigma(y)=x}\xi(y) \bigg(\frac{1}{\sqrt{n}} \sum_{j=0}^{n-1}\zeta_{np}^{(jp-c(y))k}\delta_{(y,j)}\bigg)\\ &=\frac{1}{\sqrt{n}}\gamma_0\sum_{\sigma(y)=x}\xi(y) \sum_{j=0}^{n-1} \zeta_{np}^{(jp-c(y)+1)k} \delta_{(y,j)}, \end{align*} and \begin{align*} \pi_{(x_0,\gamma_0)}^{(n)}(t^1(\xi))\big(\iota_k(\delta_x)\big) &=\pi_{(x_0,\gamma_0)}^{(n)}(t^1(\xi))\bigg(\frac{1}{\sqrt{n}} \sum_{j=0}^{n-1}\zeta_{np}^{(jp-c(x))k}\delta_{(x,j)}\bigg)\\ &=\frac{1}{\sqrt{n}}\sum_{j=0}^{n-1}\zeta_{np}^{(jp-c(x))k} \bigg(\gamma_0\sum_{\sigma(y)=x}\xi(y) \delta_{(y,j-d(y))}\bigg)\\ &=\frac{1}{\sqrt{n}}\gamma_0\sum_{\sigma(y)=x}\xi(y) \sum_{j=0}^{n-1}\zeta_{np}^{(jp-c(x))k} \delta_{(y,j-d(y))}\\ &=\frac{1}{\sqrt{n}}\gamma_0\sum_{\sigma(y)=x}\xi(y) \sum_{j=0}^{n-1} \zeta_{np}^{((j+d(y))p-c(x))k} \delta_{(y,j)}\\ &=\frac{1}{\sqrt{n}}\gamma_0\sum_{\sigma(y)=x}\xi(y) \sum_{j=0}^{n-1} \zeta_{np}^{(jp-c(y)+1)k} \delta_{(y,j)} \end{align*} where in the last equality we use the fact that $d(y)p=c(x)-c(y)+1$ for $y\in\Orb(x_0)$ with $\sigma(y)=x$. Hence $\iota_k\circ \pi_{(x_0,\zeta_{np}^{k}\gamma_0)}(a) =\pi_{(x_0,\gamma_0)}^{(n)}(a)\circ \iota_k$ for $a\in t^1(C_c(U))$. Since $C^*(\Sigma)$ is generated by $t^0(C_0(X))\cup t^1(C_c(U))$, we have $\iota_k\circ \pi_{(x_0,\zeta_{np}^{k}\gamma_0)}(a) =\pi_{(x_0,\gamma_0)}^{(n)}(a)\circ \iota_k$ for every $a\in C^*(\Sigma)$. \end{proof}
\begin{lemma}\label{kerpi1} The representation $\pi_{(x_0,\gamma_0)}^{(n)}$ is unitarily equivalent to $\bigoplus_{k=0}^{n-1}\pi_{(x_0,\zeta_{np}^{k}\gamma_0)}$. Hence we have $\ker \pi_{(x_0,\gamma_0)}^{(n)} =\bigcap_{k=0}^{n-1}P_{(x_0,\zeta_{np}^{k}\gamma_0)}$. \end{lemma}
\begin{proof} For $k=0,1,\ldots,n-1$, let $\iota_k$ be the isometies in Lemma \ref{iotak}. Then for each $x\in \Orb(x_0)$, $\{\iota_k(\delta_x)\}_{k=0}^{n-1}$ are complete orthonormal system of the linear space spanned by $\{\delta_{(x,j)}\mid j\in \mathbb{Z}/n\mathbb{Z}\}$. Hence $\bigoplus_{k=0}^{n-1}\iota_k$ is a unitary from $\bigoplus_{k=0}^{n-1}H_{x_0}$ to $H_{x_0}^{(n)}$ which intertwines $\bigoplus_{k=0}^{n-1}\pi_{(x_0,\zeta_{np}^{k}\gamma_0)}$ and $\pi_{(x_0,\gamma_0)}^{(n)}$. This completes the proof. \end{proof}
Note that Lemma \ref{per1} implies $P_{(x_0,\zeta_{np}^{k}\gamma_0)}=P_{(x_0,\zeta_{np}^{k+n}\gamma_0)}$. Hence we get $\ker \pi_{(x_0,\gamma_0)}^{(n)} =\bigcap_{k\in\mathbb{Z}}P_{(x_0,\zeta_{np}^{k}\gamma_0)}$.
\begin{lemma}\label{weakconv1} Let $l,m\in\mathbb{N}$ with $m \geq 1$. Let $\{(x_\lambda,\gamma_\lambda)\}_{\lambda\in\Lambda}$ be a net in $X\times\mathbb{T}$ converges to $(x_0,\gamma_0)$ such that $l(x_\lambda)=l$ and $p(x_\lambda)=m$ for all $\lambda\in \Lambda$. Then we have the following. \begin{enumerate} \renewcommand{{\rm (\roman{enumi})}}{{\rm (\roman{enumi})}} \item $l(x_0)=l$ and $p(x_0)=m/n$ for some $n\in \mathbb{N}$. \item for each $(x,j)\in \Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$, there exist $\lambda_{(x,j)}\in \Lambda$ and $\varphi_\lambda(x,j)\in \Orb(x_\lambda)$ for $\lambda\succeq\lambda_{(x,j)}$ satisfying \begin{itemize} \item $\lim_{\lambda}\varphi_\lambda(x,j)=x$ for all $(x,j)$, \item $\sigma(\varphi_\lambda(x,j))=\varphi_\lambda(\sigma_n(x,j))$ for all $(x,j)$ and all $\lambda$ with $\lambda\succeq\lambda_{(x,j)}$ and $\lambda\succeq\lambda_{\sigma_n(x,j)}$, \item for two distinct elements $(x,j),(x',j')\in \Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$, we have $\varphi_\lambda(x,j)\neq \varphi_\lambda(x',j')$ for sufficiently large $\lambda$. \end{itemize} \item $\bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)}\subset \ker\pi_{(x_0,\gamma_0)}^{(n)}$. \end{enumerate} \end{lemma}
\begin{proof} (i) For each $\lambda$, we have $\sigma^{l+m}(x_\lambda)=\sigma^{l}(x_\lambda)$. Hence we get $\sigma^{l+m}(x_0)=\sigma^{l}(x_0)$. This implies that $x_0$ is a periodic point whose period $p(x_0)$ divides $m$. Thus we can find $n\in\mathbb{N}$ with $p(x_0)=m/n$. We clearly have $l(x_0)\leq l$. To drive a contradiction, assume $l(x_0)=k<l$. Then we have $\sigma^{k+m}(x_{0})=\sigma^{k}(x_{0})$. Since $\sigma^{l-k}$ is locally homeomorphic, there exists a neighborhood $V$ of $\sigma^{k+m}(x_{0})=\sigma^{k}(x_{0})$ such that $\sigma^{l-k}$ is injective on $V$. We can find $\lambda$ such that $\sigma^{k+m}(x_{\lambda}),\sigma^{k}(x_{\lambda})\in V$. Since $k<l$, we have $\sigma^{k+m}(x_{\lambda})\neq \sigma^{k}(x_{\lambda})$. However, we have \[ \sigma^{l-k}\big(\sigma^{k+m}(x_{\lambda})\big) =\sigma^{l+m}(x_{\lambda})=\sigma^{l}(x_\lambda) =\sigma^{l-k}\big(\sigma^{k}(x_{\lambda})\big). \] This is a contradiction. Hence $l(x_0)=l$.
(ii) Let us set $p=p(x_0)$, and $x^{(i)}=\sigma^{l+i}(x_0)$ for $i=0,\ldots,p-1$. Take $(x,j)\in \Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$. Let $k:=l(x)$ which is the smallest natural number with $\sigma^{k+p}(x)=\sigma^{k}(x)$. Choose $i\in\{0,1,\ldots,p-1\}$ such that $\sigma^{k}(x)=x^{(i)}$. Choose a neighborhood $V$ of $x$ such that $\sigma^{k}$ is injective on $V$. Since $\sigma^{l+i+jp}(x_0)=x^{(i)}$, there exists $\lambda_{(x,j)}\in\Lambda$ such that we have $\sigma^{l+i+jp}(x_\lambda)\in \sigma^{k}(V)$ for all $\lambda\succeq \lambda_{(x,j)}$. For $\lambda\succeq \lambda_{(x,j)}$, define $\varphi_\lambda(x,j)$ to be the element in $V$ with $\sigma^{k}(\varphi_\lambda(x,j))=\sigma^{l+i+jp}(x_\lambda)$. Since the restriction of $\sigma^{k}$ to $V$ is a homeomorphism and $\lim_{\lambda}\sigma^{l+i+jp}(x_\lambda)=\sigma^{l+i+jp}(x_0)=x^{(i)}$, we get $\lim_{\lambda}\varphi_\lambda(x,j)=x$. By construction, it is easy to see that $\sigma(\varphi_\lambda(x,j))=\varphi_\lambda(\sigma_n(x,j))$ for $(x,j)\in (\Orb(x_0)\setminus\{x^{(p-1)}\})\times \mathbb{Z}/n\mathbb{Z}$ and $\lambda\in\Lambda$ with $\lambda\succeq \lambda_{(x,j)}$ and $\lambda\succeq \lambda_{\sigma_n(x,j)}$. For $(x^{(p-1)},j)\in \Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$, we have $\sigma_n(x^{(p-1)},j)=(x^{(0)},j+1)$. We get $\varphi_\lambda(x^{(p-1)},j)=\sigma^{l+p-1+jp}(x_\lambda)$ and $\varphi_\lambda(x^{(0)},j+1)=\sigma^{l+0+(j+1)p}(x_\lambda)$. Hence the equation $\sigma(\varphi_\lambda(x^{(p-1)},j)) =\varphi_\lambda(\sigma_n(x^{(p-1)},j))$ is easy to see when $j\neq n-1$, and follows from the fact that $\sigma^{l+np}(x_\lambda)=\sigma^{l}(x_\lambda)$ when $j=n-1$. Take two distinct elements $(x,j),(x',j')\in \Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$. If $x\neq x'$, then $\varphi_\lambda(x,j)\neq \varphi_\lambda(x',j')$ for sufficiently large $\lambda$ because $\lim_{\lambda}\varphi_\lambda(x,j)=x$ and $\lim_{\lambda}\varphi_\lambda(x',j')=x'$. If $x=x'$, then $j\neq j'$. Take $k:=l(x)$ and $i\in\{0,1,\ldots,p-1\}$ with $\sigma^{k}(x)=x^{(i)}$. For every $\lambda$ we have $\sigma^{l+i+jp}(x_\lambda)\neq \sigma^{l+i+j'p}(x_\lambda)$
because $0<|jp-j'p|<m$. Hence $\varphi_\lambda(x,j)\neq \varphi_\lambda(x',j')$ when they are defined because $\sigma^{k}(\varphi_\lambda(x,j))=\sigma^{l+i+jp}(x_\lambda)$ and $\sigma^{k}(\varphi_\lambda(x',j'))=\sigma^{l+i+j'p}(x_\lambda)$.
(iii) Take $(x,j),(x',j')\in \Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$. We will show that for all $a\in C^*(\Sigma)$ we have \[ \lim_{\lambda} \ip{\delta_{\varphi_\lambda(x',j')}}{\pi_{(x_\lambda,\gamma_\lambda)}(a) \delta_{\varphi_\lambda(x,j)}}_{x_\lambda} = \ip{\delta_{(x',j')}}{\pi_{(x_0,\gamma_0)}^{(n)}(a)\delta_{(x,j)}}_{x_0}^{(n)}, \] where $\ip{\cdot}{\cdot}_{x_0}^{(n)}$ is the inner product of $H_{x_0}^{(n)}$ which is linear in the second variable. To do so, it suffices to assume that $a=\sum_{k=1}^K t^{n_k}(\xi_k)t^{m_k}(\eta_k)^*\in C^*(\Sigma)$ for some $K, n_k,m_k\in\mathbb{N}$, $\xi_k\in C_c(U_{n_k})$ and $\eta_k\in C_c(U_{m_k})$ for $k=1,\ldots,K$ by Lemma~\ref{Lem:cspa}.
We can find $\lambda_0\in\Lambda$ such that \begin{itemize} \item $\lambda_0\succeq\lambda_{(x,j)}$ and $\lambda_0\succeq\lambda_{(x',j')}$, \item $\lambda_0\succeq\lambda_{\sigma_n^{m_k}(x,j)}$ and $\lambda_0\succeq\lambda_{\sigma_n^{n_k}(x',j')}$ for $k=1,\ldots,K$, \item $\varphi_\lambda$ is injective on $\{\sigma_n^{m_k}(x,j),\sigma_n^{n_k}(x',j')\mid k=1,\ldots,K\}$ for all $\lambda\succeq \lambda_0$. \end{itemize} For $\lambda\succeq \lambda_0$, we have \begin{align*} \pi_{(x_\lambda,\gamma_\lambda)}(a)\delta_{\varphi_{\lambda(x,j)}} &=\sum_{k=1}^K \pi_{(x_\lambda,\gamma_\lambda)} \big(t^{n_k}(\xi_k)t^{m_k}(\eta_k)^*\big)\delta_{\varphi_\lambda(x,j)}\\ &=\sum_{k=1}^K \pi_{(x_\lambda,\gamma_\lambda)}(t^{n_k}(\xi_k)) \big(\gamma_\lambda^{-m_k}\overline{\eta_k}(\varphi_\lambda(x,j)) \delta_{\sigma^{m_k}(\varphi_\lambda(x,j))}\big)\\ &=\sum_{k=1}^K \sum_{\sigma^{n_k}(y)=\sigma^{m_k}(\varphi_\lambda(x,j))}\!\! \gamma_\lambda^{n_k}\xi_k(y) \gamma_\lambda^{-m_k}\overline{\eta_k}(\varphi_\lambda(x,j)) \delta_{y}. \end{align*} Hence we get \[ \ip{\delta_{\varphi_\lambda(x',j')}}{\pi_{(x_\lambda,\gamma_\lambda)}(a) \delta_{\varphi_\lambda(x,j)}}_{x_\lambda} =\sum_{k\in N_\lambda} \gamma_\lambda^{n_k-m_k}\xi_k(\varphi_\lambda(x',j')) \overline{\eta_k}(\varphi_\lambda(x,j)), \] where \[
N_\lambda=\big\{k\in\{1,\ldots,K\}\ \big|\ \sigma^{n_k}(\varphi_\lambda(x',j'))=\sigma^{m_k}(\varphi_\lambda(x,j))\big\}. \] Similarly, we get \[ \ip{\delta_{(x',j')}}{\pi_{(x_0,\gamma_0)}^{(n)}(a)\delta_{(x,j)}}_{x_0}^{(n)} =\sum_{k\in N_0} \gamma_0^{n_k-m_k}\xi_k(x')\overline{\eta_k}(x), \] where \[
N_0=\big\{k\in\{1,\ldots,K\}\ \big|\ \sigma_n^{n_k}(x',j')=\sigma_n^{m_k}(x,j)\big\}. \] Since \[ \sigma^{n_k}(\varphi_\lambda(x',j')) =\varphi_\lambda(\sigma_n^{n_k}(x',j')),\quad \sigma^{m_k}(\varphi_\lambda(x,j)) =\varphi_\lambda(\sigma_n^{m_k}(x,j)) \] and $\varphi_\lambda$ is injective on $\{\sigma_n^{m_k}(x,j),\sigma_n^{n_k}(x',j')\mid k=1,\ldots,K\}$, we have $N_\lambda=N_0$ for $\lambda\succeq \lambda_0$. Since $\lim_{\lambda}\varphi_\lambda(x,j)=x$ and $\lim_{\lambda}\varphi_\lambda(x',j')=x'$, we have \[ \lim_{\lambda} \ip{\delta_{\varphi_\lambda(x',j')}}{\pi_{(x_\lambda,\gamma_\lambda)}(a) \delta_{\varphi_\lambda(x,j)}}_{x_\lambda} =\ip{\delta_{(x',j')}}{\pi_{(x_0,\gamma_0)}^{(n)}(a)\delta_{(x,j)}}_{x_0}^{(n)}. \] Thus we get this equality for all $(x,j),(x',j')\in\Orb(x_0)\times \mathbb{Z}/n\mathbb{Z}$ and all $a\in C^*(\Sigma)$. Hence for $a\in \bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)}$ we have $\pi_{(x_0,\gamma_0)}^{(n)}(a)=0$. We are done. \end{proof}
\begin{corollary}\label{closed} Under the condition in Lemma \ref{weakconv1}, we have \[ \bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)} \subset P_{(x_0,\gamma_0)}. \] \end{corollary}
\begin{proof} This follows from Lemma \ref{weakconv1} (iii) and Lemma \ref{kerpi1}. \end{proof}
For a periodic point $x_0\in X$ with $p := p(x_0)$, we define a representation $\pi_{(x_0,\gamma_0)}^{(\infty)}\colon C^*(\Sigma)\to B(H_{x_0}^{(\infty)})$ in a similar way to the definition of $\pi_{(x_0,\gamma_0)}^{(n)}$ where $H_{x_0}^{(\infty)}$ is the Hilbert space whose complete orthonormal system is given by $\{\delta_{(x,j)}\}_{(x,j)\in \Orb(x_0)\times\mathbb{Z}}$ using the map $\sigma_\infty\colon \Orb(x_0)\times\mathbb{Z}\to \Orb(x_0)\times\mathbb{Z}$ defined by $\sigma_\infty(x,j)=(\sigma(x),j+d(x))$. Let us define a probability space \[ [1,\zeta_p)=\{e^{2\pi i\theta/p}\mid 0\leq \theta <1\} \] on which we consider the measure $d\gamma$ defined from the Lebesgue measure via the bijection $[0,1)\ni\theta\mapsto e^{2\pi i\theta/p}\in [1,\zeta_p)$. The Hilbert space $L^2([1,\zeta_p),H_{x_0})$ of the all $H_{x_0}$-valued second power integrable functions on $[1,\zeta_p)$ has spanned by the elements in the form $\eta\delta_x$ with $\eta\in L^2([1,\zeta_p))$ and $x\in\Orb(x_0)$. We define a representation $\pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}\colon C^*(\Sigma)\to B(L^2([1,\zeta_p),H_{x_0}))$ by \[ \big(\pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}(a) \varPsi \big)(\gamma) =\pi_{(x_0,\gamma\gamma_0)}(a)\varPsi(\gamma)\in H_{x_0} \] for $\varPsi\in L^2([1,\zeta_p),H_{x_0})$ and $\gamma\in [1,\zeta_p)$.
\begin{lemma}\label{iota} The map $\iota\colon L^2([1,\zeta_p),H_{x_0})\to H_{x_0}^{(\infty)}$ defined by \[ \iota(\eta\delta_x) =\sum_{j\in\mathbb{Z}}\bigg( \int_{[1,\zeta_p)} \gamma^{jp-c(x)}\eta(\gamma)d\gamma \bigg)\delta_{(x,j)}, \] for $\eta\in L^2([1,\zeta_p))$ and $x\in\Orb(x_0)$ is a unitary satisfying $\iota\circ \pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}(a) =\pi_{(x_0,\gamma_0)}^{(\infty)}(a)\circ \iota$ for $a\in C^*(\Sigma)$. \end{lemma}
\begin{proof} For each $x\in\Orb(x_0)$, the map $L^2([1,\zeta_p))\ni\eta(\cdot) \mapsto (\cdot)^{-c(x)}\eta(\cdot)\in L^2([1,\zeta_p))$ is a unitary. Since $[1,\zeta_p)\ni \gamma \to \gamma^p\in\mathbb{T}$ is an isomorphism between probability spaces, the natural isomorphism $L^2(\mathbb{T})\cong\ell^2(\mathbb{Z})$ of the Fourier transform shows that the map \[ L^2([1,\zeta_p))\ni\eta\mapsto \sum_{j\in\mathbb{Z}}\bigg( \int_{[1,\zeta_p)} \gamma^{jp-c(x)}\eta(\gamma)d\gamma \bigg)\delta_{(x,j)} \] is an isomorphism onto the Hilbert subspace spanned by $\{\delta_{(x,j)}\}_{j\in\mathbb{Z}}$. Thus $\iota$ is a unitary.
Take $\xi\in C_c(U)$, $\eta\in L^2([1,\zeta_p))$ and $x\in\Orb(x_0)$. For each $\gamma\in [1,\zeta_p)$, we obtain \begin{align*} \Big(\pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}(t^1(\xi)) \eta\delta_x\Big)(\gamma) &=\pi_{(x_0,\gamma\gamma_0)}(t^1(\xi))(\eta(\gamma)\delta_x)\\ &=\gamma\gamma_0\sum_{\sigma(y)=x}\xi(y)\eta(\gamma)\delta_y. \end{align*} If we denote by $z\in L^2([1,\zeta_p))$ the identity function $\gamma\mapsto\gamma$, then we have \[ \pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}(t^1(\xi)) \eta\delta_x =\gamma_0\sum_{\sigma(y)=x}\xi(y) \eta z \delta_y. \] Hence \begin{align*} \iota\big(\pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}(t^1(\xi))\eta\delta_x\big) &=\iota \bigg(\gamma_0\sum_{\sigma(y)=x}\xi(y) \eta z \delta_y\bigg)\\ &=\gamma_0\sum_{\sigma(y)=x}\xi(y) \sum_{j\in\mathbb{Z}}\bigg( \int_{[1,\zeta_p)} \gamma^{jp-c(y)}\eta(\gamma)\gamma d\gamma \bigg)\delta_{(y,j)}\\ &=\gamma_0\sum_{\sigma(y)=x}\sum_{j\in\mathbb{Z}} \bigg( \int_{[1,\zeta_p)} \gamma^{jp-c(y)+1}\eta(\gamma)d\gamma \bigg)\xi(y)\delta_{(y,j)}. \end{align*} On the other hand, we get \begin{align*} \pi_{(x_0,\gamma_0)}^{(\infty)}(t^1(\xi))\big(\iota(\eta\delta_x)\big) &=\pi_{(x_0,\gamma_0)}^{(\infty)}(t^1(\xi)) \bigg(\sum_{j\in\mathbb{Z}} \bigg(\int_{[1,\zeta_p)} \gamma^{jp-c(x)}\eta(\gamma)d\gamma \bigg)\delta_{(x,j)}\bigg)\\ &=\sum_{j\in\mathbb{Z}} \bigg(\int_{[1,\zeta_p)} \gamma^{jp-c(x)}\eta(\gamma)d\gamma \bigg) \gamma_0\sum_{\sigma(y)=x}\xi(y)\delta_{(y,j-d(y))}\\ &=\gamma_0\sum_{\sigma(y)=x}\sum_{j\in\mathbb{Z}} \bigg(\int_{[1,\zeta_p)} \gamma^{jp-c(x)}\eta(\gamma)d\gamma \bigg) \xi(y)\delta_{(y,j-d(y))}\\ &=\gamma_0\sum_{\sigma(y)=x}\sum_{j\in\mathbb{Z}} \bigg(\int_{[1,\zeta_p)} \gamma^{(j+d(y))p-c(x)}\eta(\gamma)d\gamma \bigg) \xi(y)\delta_{(y,j)}\\ &=\gamma_0\sum_{\sigma(y)=x}\sum_{j\in\mathbb{Z}} \bigg(\int_{[1,\zeta_p)} \gamma^{jp-c(y)+1}\eta(\gamma)d\gamma \bigg) \xi(y)\delta_{(y,j)}. \end{align*} Hence we have $\iota\circ \pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}(a) =\pi_{(x_0,\gamma_0)}^{(\infty)}(a)\circ \iota$ for all $a\in t^1(C_c(U))$. The proof of this equality for $a\in t^0(C_0(X))$ is much easier. Therefore we get the equality for all $a\in C^*(\Sigma)$. \end{proof}
\begin{corollary}\label{kerinf} We have $\ker \pi_{(x_0,\gamma_0)}^{(\infty)} =\bigcap_{\gamma\in\mathbb{T}}P_{(x_0,\gamma)}$. \end{corollary}
\begin{proof} By Lemma \ref{iota}, we have \[ \ker \pi_{(x_0,\gamma_0)}^{(\infty)} =\ker \pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))}. \] Since $\gamma\mapsto \pi_{(x_0,\gamma)}$ is pointwise norm continuous, we have \[ \ker \pi_{(x_0,[\gamma_0,\zeta_p\gamma_0))} =\bigcap_{\gamma\in [\gamma_0,\zeta_p\gamma_0)}P_{(x_0,\gamma)}. \] By Lemma \ref{per1}, we get \[ \bigcap_{\gamma\in [\gamma_0,\zeta_p\gamma_0)}P_{(x_0,\gamma)} =\bigcap_{\gamma\in\mathbb{T}}P_{(x_0,\gamma)}. \] This completes the proof. \end{proof}
As usual, we consider $\mathbb{N}\cup\{\infty\}$ as the one-point compactification of the discrete set $\mathbb{N}$.
\begin{lemma}\label{weakconv2} Let $\{(x_\lambda,\gamma_\lambda)\}_{\lambda\in\Lambda}$ be a net in $X\times\mathbb{T}$ converges to $(x_0,\gamma_0)$ such that $l(x_\lambda)+p(x_\lambda)$ converges to $\infty$. Then we have $\bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)}\subset P_{(x_0,\gamma)}$ for all $\gamma\in\mathbb{T}$. \end{lemma}
\begin{proof} We divide the proof into two cases; the case that $x_0$ is periodic and the case that $x_0$ is aperiodic. In the both cases, the proofs go in a similar way to the proof of Lemma \ref{weakconv1}, and so we just sketch the proofs.
First consider the case that $x_0$ is periodic with period $p(x_0)=p$. We first show that for each $(x,j)\in \Orb(x_0)\times \mathbb{Z}$, there exist $\lambda_{(x,j)}\in \Lambda$ and $\varphi_\lambda(x,j)\in \Orb(x_\lambda)$ for $\lambda\succeq\lambda_{(x,j)}$ satisfying \begin{itemize} \item $\lim_{\lambda}\varphi_\lambda(x,j)=x$, \item $\sigma(\varphi_\lambda(x,j))=\varphi_\lambda(\sigma_\infty(x,j))$. \item for two distinct elements $(x,j),(x',j')\in \Orb(x_0)\times \mathbb{Z}$, we have $\varphi_\lambda(x,j)\neq \varphi_\lambda(x',j')$ for sufficiently large $\lambda$. \end{itemize} We set $x'_0 := \sigma^{l(x_0)}(x_0)$. For $(x'_0,0)\in \Orb(x_0)\times \mathbb{Z}$, we set $\varphi_\lambda(x'_0,0)=\sigma^{l(x_0)}(x_\lambda)$ for all $\lambda$. There exists $\lambda_{(x'_0,1)} \in \Lambda$ such that $\varphi_\lambda(x'_0,0) \in U$ for all $\lambda\succeq \lambda_{(x'_0,0)}$. Set $\varphi_\lambda(\sigma(x'_0),0)=\sigma(\varphi_\lambda(x'_0,0))$ for all $\lambda\succeq \lambda_{(x'_0,0)}$. Similarly for $k=2,3,\ldots,p-1$ $\varphi_\lambda(\sigma^k(x'_0),0)= \sigma(\varphi_\lambda(\sigma^{k-1}(x'_0),0))$ for sufficiently large $\lambda$. We also set for sufficiently large $\lambda$, $\varphi_\lambda(x'_0,1)= \sigma(\varphi_\lambda(\sigma^{p-1}(x'_0),0))$ and $\varphi_\lambda(\sigma^k(x'_0),1)= \sigma(\varphi_\lambda(\sigma^{k-1}(x'_0),1))$ for $k=1,2,\ldots,p-1$. For $j=2,3,\ldots$, we set for sufficiently large $\lambda$, $\varphi_\lambda(x'_0,j)= \sigma(\varphi_\lambda(\sigma^{p-1}(x'_0),j-1))$ and $\varphi_\lambda(\sigma^k(x'_0),j)= \sigma(\varphi_\lambda(\sigma^{k-1}(x'_0),j))$ for $k=1,2,\ldots,p-1$. Next we choose a neighborhood $V$ of $x'_0$ such that $\sigma^{p}$ is injective on $V$. We set $\lambda_{(x'_0,-1)} \in \Lambda$ such that $\varphi_\lambda(x'_0,0) \in \sigma^p(V)$ for all $\lambda\succeq \lambda_{(x'_0,-1)}$. We set $\varphi_\lambda(x'_0,-1) \in V$ such that $\sigma^p(\varphi_\lambda(x'_0,-1))=\varphi_\lambda(x'_0,0)$ and set $\varphi_\lambda(\sigma^k(x'_0),-1)= \sigma(\varphi_\lambda(\sigma^{k-1}(x'_0),-1))$ for $k=1,2,\ldots,p-1$. We set $\lambda_{(x'_0,-2)} \in \Lambda$ such that $\varphi_\lambda(x'_0,-1) \in \sigma^p(V)$ for all $\lambda\succeq \lambda_{(x'_0,-2)}$. We set $\varphi_\lambda(x'_0,-2) \in V$ such that $\sigma^p(\varphi_\lambda(x'_0,-2))=\varphi_\lambda(x'_0,-1)$ and set $\varphi_\lambda(\sigma^k(x'_0),-2)= \sigma(\varphi_\lambda(\sigma^{k-1}(x'_0),-2))$ for $k=1,2,\ldots,p-1$. For $j=3,4,\ldots$, we set $\lambda_{(x'_0,-j)} \in \Lambda$ such that $\varphi_\lambda(x'_0,-j+1) \in \sigma^p(V)$ for all $\lambda\succeq \lambda_{(x'_0,-j)}$. We set $\varphi_\lambda(x'_0,-j) \in V$ such that $\sigma^p(\varphi_\lambda(x'_0,-j))=\varphi_\lambda(x'_0,-j+1)$ and set $\varphi_\lambda(\sigma^k(x'_0),-j)= \sigma(\varphi_\lambda(\sigma^{k-1}(x'_0),-j))$ for $k=1,2,\ldots,p-1$. We have defined $\varphi_\lambda(x,j)$ for all $(x,j)\in \Orb(x_0)\times \mathbb{Z}$ with $l(x)=0$. For $(x,j)\in \Orb(x_0)\times \mathbb{Z}$ with $l(x)\geq 1$, we choose a neighborhood $V$ of $x$ such that $\sigma^{k}$ is injective on $V$ where $k:=l(x)$. We set $\lambda_{(x,j)} \in \Lambda$ such that $\varphi_\lambda(\sigma^k(x),j) \in \sigma^k(V)$ for all $\lambda\succeq \lambda_{(x,j)}$. We set $\varphi_\lambda(x,j) \in V$ such that $\sigma^k(\varphi_\lambda(x,j))=\varphi_\lambda(\sigma^k(x),j)$. It is very similar as in the proof of Lemma \ref{weakconv1} (ii) to check that $\varphi_\lambda(x,j)$'s satisfy the desired conditions except the proof of the statement that for $x\in\Orb(x_0)$ and $j,j'\in\mathbb{Z}$ with $j < j'$, we have $\varphi_\lambda(x,j)\neq \varphi_\lambda(x,j')$ for sufficiently large $\lambda$. This follows from the fact that, we have \begin{align*} \sigma^{l(x)+N}(\varphi_\lambda(x,j))&=\sigma^{N'}(x_\lambda),& \sigma^{l(x)+N}(\varphi_\lambda(x,j'))&=\sigma^{N'+p(j'-j)}(x_\lambda) \end{align*} for sufficiently large $N$ and some $N'$. Since $l(x_\lambda)+p(x_\lambda)$ converges to $\infty$ we have $\sigma^{N'}(x_\lambda) \neq \sigma^{N'+p(j'-j)}(x_\lambda)$ for sufficiently large $\lambda$. This shows $\varphi_\lambda(x,j)\neq \varphi_\lambda(x,j')$ for sufficiently large $\lambda$.
Now, in a similar to the proof of Lemma \ref{weakconv1} (iii), we can show that for all $(x,j),(x',j')\in \Orb(x_0)\times \mathbb{Z}$ and all $a\in C^*(\Sigma)$, we get \[ \lim_{\lambda} \ip{\delta_{\varphi_\lambda(x',j')}}{\pi_{(x_\lambda,\gamma_\lambda)}(a) \delta_{\varphi_\lambda(x,j)}}_{x_\lambda} =\ip{\delta_{(x',j')}}{\pi_{(x_0,\gamma_0)}^{(\infty)}(a)\delta_{(x,j)}}_{x_0}^{(\infty)} \] where $\ip{\cdot}{\cdot}_{x_0}^{(\infty)}$ is the inner product of $H_{x_0}^{(\infty)}$. Hence we have $\bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)} \subset \ker\pi_{(x_0,\gamma_0)}^{(\infty)}$. We finish the proof for the case that $x_0$ is periodic by Corollary \ref{kerinf}.
For the case that $x_0$ is aperiodic, we can similarly construct $\varphi_\lambda(x)\in \Orb(x_\lambda)$ for $x\in \Orb(x_0)$ and sufficiently large $\lambda\in\Lambda$, so that we get \[ \lim_{\lambda} \ip{\delta_{\varphi_\lambda(x')}}{\pi_{(x_\lambda,\gamma_\lambda)}(a) \delta_{\varphi_\lambda(x)}}_{x_\lambda} =\ip{\delta_{x'}}{\pi_{(x_0,\gamma_0)}(a)\delta_{x}}_{x_0}, \] for all $x,x'\in \Orb(x_0)$ and all $a\in C^*(\Sigma)$. Hence we have $\bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)}\subset P_{(x_0,\gamma_0)}$. Since $P_{(x_0,\gamma)}=P_{(x_0,\gamma_0)}$ for all $\gamma\in\mathbb{T}$ by Lemma~\ref{aper}, we finish the case that $x_0$ is aperiodic. \end{proof}
\begin{proposition}\label{Prop:YIinv} For an ideal $I$, the set $Y_I$ is admissible. \end{proposition}
\begin{proof} Take a net $\{(x_\lambda,\gamma_\lambda)\}_{\lambda\in\Lambda}$ in $Y_I$ converges to $(x_0,\gamma_0)\in X\times\mathbb{T}$. By replacing the net to a subnet if necessary, we may assume that either $l(x_\lambda)$ and $p(x_\lambda)$ are finite and constant or $l(x_\lambda)+p(x_\lambda)$ converges $\infty$. When $l(x_\lambda)$ and $p(x_\lambda)$ are finite and constant, we have \[ I\subset \bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)} \subset P_{(x_0,\gamma_0)} \] by Corollary \ref{closed}. When $l(x_\lambda)+p(x_\lambda)$ converges $\infty$, we have \[ I\subset \bigcap_{\lambda}P_{(x_\lambda,\gamma_\lambda)} \subset P_{(x_0,\gamma_0)} \] by Lemma \ref{weakconv2}. Hence we get $(x_0,\gamma_0)\in Y_I$. This shows that $Y_I$ is closed. By Lemma \ref{inv0}, $Y_I$ satisfies the condition (ii) in Definition \ref{Def:adm}.
Take $x_0\in X$ with $(Y_I)_{x_0}\neq \emptyset,\mathbb{T}$. By Lemma \ref{aper}, we see that $x_0$ is periodic, and by Lemma \ref{per1} we see that $\zeta_{p(x_0)}(Y_I)_{x_0}=(Y_I)_{x_0}$. To the contrary, suppose that for any neighborhood $V$ of $x_0$ we can find $x\in V$ with $l(x)\neq l(x_0)$ satisfying $(Y_I)_x\neq\emptyset$. Then we can find a net $\{(x_\lambda,\gamma_\lambda)\}_{\lambda\in\Lambda}$ in $Y_I$ with $l(x_\lambda) \neq l(x_0)$ which converges $(x_0,\gamma_0)$ for some $\gamma_0\in\mathbb{T}$. By taking a subnet if necessary, we may assume that either $l(x_\lambda)$ and $p(x_\lambda)$ are finite and constant or $l(x_\lambda)+p(x_\lambda)$ converges to $\infty$ . If $l(x_\lambda)$ and $p(x_\lambda)$ are finite and constant, then $l(x_0)=l(x_\lambda)$ by Lemma~\ref{weakconv1} (i). This is a contradiction. If $l(x_\lambda)+p(x_\lambda)$ converges to $\infty$ then we have $(Y_I)_{x_0}=\mathbb{T}$ by Lemma~\ref{weakconv2}. This is also a contradiction. Therefore we can find a neighborhood $V$ of $x_0$ such that all $x\in V$ with $l(x)\neq l(x_0)$ satisfies $(Y_I)_x=\emptyset$. We have proved that $Y_I$ is admissible. \end{proof}
\section{A proof of $Y_{I_Y}=Y$}\label{Sec:YIY}
Take an admissible subset $Y$ of $X\times\mathbb{T}$. In this section we will prove that $Y_{I_Y}=Y$. By definition, we have $Y_{I_Y}\supset Y$. Thus all we have to do is to prove the other inclusion $Y_{I_Y}\subset Y$. We set $X' :=\{x\in X\mid Y_x\neq\emptyset\}$.
\begin{lemma} The set $X'$ is closed and $\sigma$-invariant. \end{lemma}
\begin{proof} The set $X'$ is closed because $Y$ is closed and $\mathbb{T}$ is compact. By the condition (ii) in Definition~\ref{Def:adm}, $X'$ is $\sigma$-invariant. \end{proof}
Take $x_0\notin X'$ and $\gamma_0\in\mathbb{T}$. We can find $f\in C_0(X)$ such that $f(x_0)=1$ and $f(x)=0$ for all $x\in X'$. Then we have $t^0(f)\in I_Y$ because $\pi_{(x,\gamma)}(t^0(f))=0$ for all $x\in X'$ and all $\gamma\in\mathbb{T}$. However $t^0(f)\notin P_{(x_0,\gamma_0)}$ because $\pi_{(x_0,\gamma_0)}(t^0(f))\delta_{x_0}=\delta_{x_0}$. This implies $(x_0,\gamma_0)\notin Y_{I_Y}$. Therefore we have $Y_{I_Y}\subset X'\times \mathbb{T}$.
Take $(x_0,\gamma_0)\in (X'\times \mathbb{T})\setminus Y$, and we will show that $(x_0,\gamma_0)\notin Y_{I_Y}$ which is equivalent to $I_Y \not\subset P_{(x_0,\gamma_0)}$. To do so, it suffices to construct $a \in C^*(\Sigma)$ such that $\pi_{(x_0,\gamma_0)}(a)\neq 0$ and $\pi_{(x,\gamma)}(a) = 0$ for all $(x,\gamma) \in Y$. Note that the representation $\pi_{(x_0,\gamma_0)}$ factors through the natural surjection $C^*(\Sigma)\to C^*(\Sigma')$
where $\Sigma'=(X',\sigma|_{U\cap X'})$. The induced representation of $C^*(\Sigma')$ can be denoted by the same notation $\pi_{(x_0,\gamma_0)}$ considered as $x_0 \in X'$. Similarly for each $(x,\gamma) \in Y$, the representation $\pi_{(x,\gamma)}\colon C^*(\Sigma) \to B(H_{x_0})$ factors through the representation $\pi_{(x,\gamma)}\colon C^*(\Sigma') \to B(H_{x_0})$ considered as $x \in X'$. Hence to finish the proof it suffices to construct $a \in C^*(\Sigma')$ such that $\pi_{(x_0,\gamma_0)}(a)\neq 0$ and $\pi_{(x,\gamma)}(a) = 0$ for all $(x,\gamma) \in Y$.
We have $Y_{x_0}\neq\emptyset,\mathbb{T}$. Hence by the condition (iii) in Definition~\ref{Def:adm}, there exists an open neighborhood $V$ of $x_0$ such that $l(x)=l(x_0)$ for all $x\in X'\cap V$. We set $x_0'=\sigma^{l(x_0)}(x_0)$ which satisfies $l(x_0')=0$. Set $V'=\sigma^{l(x_0)}(V)\cap X'$ which is an open neighborhood of $x_0'\in X'$ such that all $x'\in V'$ satisfies $l(x')=0$. We have $(x_0',\gamma_0)\notin Y$ because $(x_0,\gamma_0)\notin Y$. Since $Y$ is closed, we can find an open neighborhood $W'$ of $x_0'\in X'$ and an open neighborhood $Z$ of $\gamma_0\in\mathbb{T}$ such that $(W'\times Z)\cap Y=\emptyset$. We may assume that $W' \subset V'$. For each $x\in W'$, we have $Y_x\cap Z=\emptyset$ and $\zeta_{p(x)}Y_x=Y_x$. Hence $\sup_{x\in W'}p(x)<\infty$. Thus we can find $N\in\mathbb{N}$ such that $\sigma^N(x)=x$ for all $x\in W'$ (recall that $l(x)=0$ for $x\in W'$). Let us set $W=\bigcup_{k=0}^{N-1}\sigma^k(W')$ which is an open subset of $X'$ satisfying $\sigma(W)=W$, and we denote by $\sigma_W\colon W\to W$ the restriction of $\sigma$ to $W$, which is a homeomorphism satisfying $\sigma_W^N=\id_{W}$. By the condition (ii) in Definition~\ref{Def:adm}, we have $(W\times Z)\cap Y=\emptyset$.
For $f\in C_0(W)$ and a negative integer $n$, we define $t^{n}(f)\in C^*(\Sigma)$ by $t^{n}(f)=t^{-n}(\overline{f}\circ\sigma_W^{-n})^*$.
\begin{lemma}\label{Lem:tnf1} Let $f\in C_0(W)$. For $(x,\gamma)\in X'\times\mathbb{T}$, $y\in \Orb(x)$ and $n\in\mathbb{Z}$, we have \begin{equation*} \pi_{(x,\gamma)}\big(t^{n}(f)\big)\delta_y =\begin{cases} \gamma_0^n f(\sigma_W^{-n}(y))\delta_{\sigma_W^{-n}(y)} &\text{if $y\in W$}\\ 0&\text{otherwise}. \end{cases} \end{equation*} \end{lemma}
\begin{proof} We first consider the case $n\geq 0$. By Lemma \ref{compute}, we have \[ \pi_{(x,\gamma)}(t^n(f))\delta_y =\gamma_0^n\sum_{\sigma^n(z)=y}f(z)\delta_{z}. \] We have $f(z)\neq 0$ only when $z\in W$. There exists $z\in W$ with $\sigma^n(z)=y$ only when $y\in W$, and in this case $z=\sigma_W^{-n}(y)$ is the only element in $W$ satisfying $\sigma^n(z)=y$. This proves the case $n\geq 0$. Next, we consider the case $n<0$. By Lemma \ref{compute}, we have \[ \pi_{(x,\gamma)}\big(t^{n}(f)\big)\delta_y =\pi_{(x,\gamma)}\big(t^{-n}(\overline{f}\circ\sigma_W^{-n})^*\big)\delta_y =\begin{cases} \gamma_0^{n}(f\circ\sigma_W^{-n})(y) \delta_{\sigma^{-n}(y)}& \text{if $y\in U_n$,}\\ 0 & \text{otherwise.} \end{cases} \] When $y \notin W$, we have $(f\circ\sigma_W^{-n})(y)=0$. This proves the case $n<0$. \end{proof}
Choose $f\in C_0(W)$ with $f\geq 0$ and $f(x_0')\neq 0$ and set \[ f_0=\frac{1}{n}\sum_{k=0}^{n-1}f\circ \sigma_W^{k}. \] Then we have $f_0 \geq 0$, $f_0(x_0')\neq 0$ and $f_0\circ \sigma_W=f_0$.
Take $(x,\gamma)\in X'\times\mathbb{T}$ and $y\in \Orb(x)$ with $y \in W$. For $k=0,1,\ldots,p(y)-1$, we set $\xi_{y,k} \in H_{x}$ by \[ \xi_{y,k} := \frac{1}{\sqrt{p(y)}}\sum_{j=0}^{p(y)-1} \zeta_{p(y)}^{jk}\delta_{\sigma_W^j(y)}. \] Then $\{\xi_{y,k}\}_{k=0}^{p(y)-1}$ is a basis of the span of $\{\delta_{\sigma_W^k(y)}\}_{k=0}^{p(y)-1}$.
In these situation, we have the following.
\begin{lemma}\label{Lem:tnf2} For $k=0,1,\ldots,p(y)-1$ and $n\in\mathbb{Z}$, we have \begin{equation*} \pi_{(x,\gamma)}\big(t^{n}(f_0)\big)\xi_{y,k} =(\gamma\zeta_{p(y)}^k)^{n}f_0(y)\xi_{y,k}. \end{equation*} \end{lemma}
\begin{proof} By Lemma~\ref{Lem:tnf1}, we have \begin{align*} \pi_{(x,\gamma)}\big(t^{n}(f_0)\big)\xi_{y,k} &=\frac{1}{\sqrt{p(y)}}\sum_{j=0}^{p(y)-1} \zeta_{p(y)}^{jk}\pi_{(x,\gamma)}\big(t^{n}(f_0)\big)\delta_{\sigma_W^j(y)}\\ &=\frac{1}{\sqrt{p(y)}}\sum_{j=0}^{p(y)-1} \zeta_{p(y)}^{jk}\gamma^{n}(f_0\circ\sigma_W^{-n})(\sigma_W^j(y)) \delta_{\sigma_W^{-n}(\sigma_W^j(y))}\\ &=\gamma^{n}\frac{1}{\sqrt{p(y)}}\sum_{j=0}^{p(y)-1} \zeta_{p(y)}^{jk}f_0(y)\delta_{\sigma_W^{j-n}(y)}\\ &=\gamma^{n}f_0(y)\frac{1}{\sqrt{p(y)}}\sum_{j=0}^{p(y)-1} \zeta_{p(y)}^{(j+n)k}\delta_{\sigma_W^{j}(y)}\\ &=(\gamma\zeta_{p(y)}^k)^{n}f_0(y)\xi_{y,k}. \qedhere \end{align*} \end{proof}
\begin{lemma}\label{Lem:pf} For a trigonometric polynomial $q(z)=\sum_{n=-N}^N\alpha_nz^n$, we set $q(f_0) =\sum_{n=-N}^N\alpha_nt^n(f_0)$. Then for $k=0,1,\ldots,p(y)-1$, we have \begin{equation*} \pi_{(x,\gamma)}\big(q(f_0)\big)\xi_{y,k} =q(\gamma\zeta_{p(y)}^k)f_0(y)\xi_{y,k}. \end{equation*} \end{lemma}
\begin{proof} This follows from Lemma~\ref{Lem:tnf2}. \end{proof}
Take $g \in C_c(Z) \subset \mathbb{C}(\mathbb{T})$ with $g(\gamma_0)\neq 0$. Choose a sequence $q_1,q_2,\ldots$ of trigonometric polynomials converging to $g$ uniformly on $\mathbb{T}$. We have the following.
\begin{lemma}\label{Lem:gf} The sequence $q_1(f_0),q_2(f_0),\ldots$ converges to an element $a \in C^*(\Sigma')$. We also have \begin{equation*} \pi_{(x,\gamma)}\big(a\big)\xi_{y,k} =g(\gamma_0\zeta_{p(y)}^k)f_0(y)\xi_{y,k}. \end{equation*} for $(x,\gamma)\in X'\times\mathbb{T}$, $y\in \Orb(x) \cap W$ and $k=0,1,\ldots,p(y)-1$. \end{lemma}
\begin{proof} Let $B=C^*(W,\sigma_W)=C_0(W)\rtimes_{\sigma_W}\mathbb{Z}$ be the universal $C^*$-al\-ge\-bra generated by the products of the elements in $C_0(W)\subset B$ and a unitary $u\in M(B)$ which satisfy $u fu^*=f\circ \sigma_W$ for $f\in C_0(W)$ where $M(B)$ is the multiplier algebra of $B$. By the universality, there exists a $*$-ho\-mo\-mor\-phism $\iota\colon B\to C^*(\Sigma')$ such that $\iota(f)=t^0(f)$ and $\iota(fu)=t^1(f)$. (We know that $\iota$ is injective by Proposition~\ref{Prop:GIUT}.) For $n \in \mathbb{N}$, we have $\iota(f_0u^n)=t^n(f_0)$. To see this for $n \geq 0$, choose $\xi_1,\ldots,\xi_n\in C_c(U)$ such that \[ f_0(x)=\xi_1(x)\xi_2(\sigma(x))\cdots \xi_n(\sigma^{n-1}(x)) \] for all $x \in U_n$ by Lemma~\ref{Lem:fctn2}, and compute \begin{align*} \iota(f_0u^n) &=\iota(\xi_1 (\xi_2\circ\sigma) \cdots (\xi_n\circ \sigma^{n-1}) u^n)\\ &=\iota(\xi_1 u \xi_2 u \cdots \xi_{n-1} u \xi_n u)\\ &=\iota(\xi_1 u) \iota(\xi_2 u) \cdots \iota(\xi_{n-1} u)\iota( \xi_n u)\\ &=t_1(\xi_1)t_1(\xi_2)\cdots t_1(\xi_{n-1})t_1(\xi_n)\\ &=t^n(f_0). \end{align*} For $n \leq -1$, we also have $\iota(f_0u^n)=t^n(f_0)$ because \begin{align*} \iota(f_0u^n) =\iota(u^{-n}\overline{f_0})^* &=\iota((\overline{f_0}\circ \sigma_W^{-n}) u^{-n})^*\\ &=t^{-n}(\overline{f_0}\circ \sigma_W^{-n})^* =t^n(f_0) \end{align*} Thus for a trigonometric polynomial $q$, we have $\iota(f_0q(u))=q(f_0)$. Since $(f_0q_k(u))_k$ converges $f_0g(u)$ in $B$, $(q_k(f_0))_k$ converges $\iota(f_0g(u))$ in $C^*(\Sigma')$. Thus we get $a = \iota(f_0g(u))$. The last equation follows easily from Lemma~\ref{Lem:pf}. \end{proof}
\begin{proposition} Let $a \in C^*(\Sigma')$ be as in Lemma~\ref{Lem:gf}. We have $\pi_{(x_0,\gamma_0)}(a)\neq 0$ and $\pi_{(x,\gamma)}(a)= 0$ for all $(x,\gamma) \in Y$. \end{proposition}
\begin{proof} We have $\pi_{(x_0,\gamma_0)}(a)\neq 0$ because $\pi_{(x_0,\gamma_0)}(a)\xi_{x_0,0} =g(\gamma_0)f_0(x_0)\xi_{x_0,0}\neq 0$ by Lemma~\ref{Lem:gf}.
Take $(x,\gamma) \in Y$. Take $y \in \Orb(x)$. If $y \notin W$, then we can see $\pi_{(x,\gamma)}(a)\delta_y=0$ by Lemma~\ref{Lem:tnf1}. Suppose $y \in W$. By the condition (ii) in Definition~\ref{Def:adm}, we have $(y,\gamma) \in Y$. By the condition (iii) in Definition~\ref{Def:adm}, we have $(y,\zeta_{p(y)}^k\gamma) \in Y$ for $k=0,1,\ldots, p(y)-1$. Hence $\zeta_{p(y)}^k\gamma \notin Z$ for $k=0,1,\ldots, p(y)-1$. By Lemma~\ref{Lem:gf} we have $\pi_{(x,\gamma)}(a)\xi_{y,k}=0$ for $k=0,1,\ldots,p(y)-1$. Hence we get $\pi_{(x,\gamma)}(a)\delta_y=0$. These show that $\pi_{(x,\gamma)}(a)=0$. \end{proof}
We have shown the following.
\begin{proposition}\label{Prop:YIY} For an admissible subset $Y$ of $X\times\mathbb{T}$, we have $Y_{I_Y}=Y$. \end{proposition}
\begin{theorem}\label{MainThm} The set of ideals of $C^*(\Sigma)$ corresponds bijectively to the set of all admissible subsets of $X\times\mathbb{T}$ via the maps $I\mapsto Y_I$ and $Y\mapsto I_Y$. \end{theorem}
\begin{proof} This follows from Proposition~\ref{Prop:IYI}, Proposition~\ref{Prop:YIinv} and Proposition~\ref{Prop:YIY}. \end{proof}
\appendix
\section{Uniqueness theorems}\label{Sec:UT}
Let $\Sigma=(X,\sigma)$ be an SGDS. By the universality of $C^*(\Sigma)$, we have an action $\beta$ of the group
$\mathbb{T} := \{z \in \mathbb{C}\mid |z|=1\}$ on $C^*(\Sigma)$ so that for $z \in \mathbb{T}$ we have $\beta_z(t^0(f))=t^0(f)$ for $f \in C_0(X)$ and $\beta_z(t^1(\xi))=zt^1(\xi)$ for $\xi \in C_c(U)$. This action $\beta$ is called the {\em gauge} action (see \cite[Section~4]{K1}).
\begin{proposition}\label{Prop:GIUT} Let $A$ be a $C^*$-al\-ge\-bra . A $*$-ho\-mo\-mor\-phism $\varPhi\colon C^*(\Sigma) \to A$ is injective if $\varPhi \circ t^0$ is injective and there exists an action of $\mathbb{T}$ on $A$ with which and the gauge action on $C^*(\Sigma)$, $\varPhi$ is equivariant. \end{proposition}
\begin{proof} This follows from \cite[Theorem~4.5]{K1}. \end{proof}
\begin{definition}[{cf.\ \cite[Definition~2.5]{R2}}]\label{Def:esfree} An SGDS $\Sigma$ is said to be {\em essentially free} if the interior of the set $\{x \in X\mid l(x)=0\}$ is empty. \end{definition}
By Baire's category theorem, $\Sigma$ is essentially free if and only if the interior of the set $\{x \in X\mid \sigma^n(x)=x\}$ is empty for every positive integer $n$. Thus this definition coincides with \cite[Definition~2.5]{R2}.
\begin{proposition}\label{Prop:CKUT} Suppose an SGDS $\Sigma$ is essentially free. Let $A$ be a $C^*$-al\-ge\-bra . A $*$-ho\-mo\-mor\-phism $\varPhi\colon C^*(\Sigma) \to A$ is injective if $\varPhi \circ t^0$ is injective \end{proposition}
\begin{proof} An SGDS $\Sigma$ is essentially free if and only if the associate topological graph $E=(X,U,\sigma,\iota)$ is topologically free in the sense of \cite[Definition~5.4]{K1}. Thus this proposition follows from \cite[Theorem~5.12]{K1}. \end{proof}
\end{document} | arXiv |
A system of expressing the number of rows and columns of a matrix in mathematical form is called the order of a matrix.
The order of a matrix denotes the arrangement of elements as number of rows and columns in a matrix. So, it is known as dimension of a matrix. It is usually expressed as the number of rows is multiplied by the number of columns in mathematics but it is read as number of rows by number of columns.
One important factor is, the dimension of the matrix tells the number of elements of the matrix. It can be obtained by multiplying the rows by columns.
In general form matrix, the elements are arranged as $m$ rows and $n$ columns. So, the order of the matrix is $m \times n$ and it is read as $m$ by $n$.
There is only one row and column in matrix $A$. The matrix $A$ is called a matrix of order $1 \times 1$ and read as one by one matrix. Simply, it is called as a matrix of order $1$.
There is one row and three columns in matrix $B$. So, it is called a matrix of order $1 \times 3$ and read as one by three matrix.
The elements are arranged in matrix $C$ in $2$ rows and $2$ columns. It is called a matrix of order $2 \times 2$ and read as two by two matrix but simply a matrix of order $2$.
Matrix $D$ is a matrix and formed by $3$ rows and $4$ columns. Therefore, the order of the matrix is $3 \times 4$. It is read as three by four matrix.
The total number of elements in matrix $D = 3 \times 4 = 12$. | CommonCrawl |
Quadrature of the Parabola
Quadrature of the Parabola (Greek: Τετραγωνισμὸς παραβολῆς) is a treatise on geometry, written by Archimedes in the 3rd century BC and addressed to his Alexandrian acquaintance Dositheus. It contains 24 propositions regarding parabolas, culminating in two proofs showing that the area of a parabolic segment (the region enclosed by a parabola and a line) is ${\tfrac {4}{3}}$ that of a certain inscribed triangle.
It is one of the best-known works of Archimedes, in particular for its ingenious use of the method of exhaustion and in the second part of a geometric series. Archimedes dissects the area into infinitely many triangles whose areas form a geometric progression.[1] He then computes the sum of the resulting geometric series, and proves that this is the area of the parabolic segment. This represents the most sophisticated use of a reductio ad absurdum argument in ancient Greek mathematics, and Archimedes' solution remained unsurpassed until the development of integral calculus in the 17th century, being succeeded by Cavalieri's quadrature formula.[2]
Main theorem
A parabolic segment is the region bounded by a parabola and line. To find the area of a parabolic segment, Archimedes considers a certain inscribed triangle. The base of this triangle is the given chord of the parabola, and the third vertex is the point on the parabola such that the tangent to the parabola at that point is parallel to the chord. Proposition 1 of the work states that a line from the third vertex drawn parallel to the axis divides the chord into equal segments. The main theorem claims that the area of the parabolic segment is ${\tfrac {4}{3}}$ that of the inscribed triangle.
Structure of the text
Conic sections such as the parabola were already well known in Archimedes' time thanks to Menaechmus a century earlier. However, before the advent of the differential and integral calculus, there were no easy means to find the area of a conic section. Archimedes provides the first attested solution to this problem by focusing specifically on the area bounded by a parabola and a chord.[3]
Archimedes gives two proofs of the main theorem: one using abstract mechanics and the other one by pure geometry. In the first proof, Archimedes considers a lever in equilibrium under the action of gravity, with weighted segments of a parabola and a triangle suspended along the arms of a lever at specific distances from the fulcrum.[4] When the center of gravity of the triangle is known, the equilibrium of the lever yields the area of the parabola in terms of the area of the triangle which has the same base and equal height.[5] Archimedes here deviates from the procedure found in On the Equilibrium of Planes in that he has the centers of gravity at a level below that of the balance.[6] The second and more famous proof uses pure geometry, particularly the sum of a geometric series.
Of the twenty-four propositions, the first three are quoted without proof from Euclid's Elements of Conics (a lost work by Euclid on conic sections). Propositions 4 and 5 establish elementary properties of the parabola. Propositions 6–17 give the mechanical proof of the main theorem; propositions 18–24 present the geometric proof.
Geometric proof
Dissection of the parabolic segment
The main idea of the proof is the dissection of the parabolic segment into infinitely many triangles, as shown in the figure to the right. Each of these triangles is inscribed in its own parabolic segment in the same way that the blue triangle is inscribed in the large segment.
Areas of the triangles
In propositions eighteen through twenty-one, Archimedes proves that the area of each green triangle is ${\tfrac {1}{8}}$ the area of the blue triangle, so that both green triangles together sum to ${\tfrac {1}{4}}$ the area of the blue triangle. From a modern point of view, this is because the green triangle has ${\tfrac {1}{2}}$ the width and ${\tfrac {1}{4}}$ the height of the blue triangle:[7]
Following the same argument, each of the $4$ yellow triangles has ${\tfrac {1}{8}}$ the area of a green triangle or ${\tfrac {1}{64}}$ the area of the blue triangle, summing to ${\tfrac {4}{64}}={\tfrac {1}{16}}$ the area of the blue triangle; each of the $2^{3}=8$ red triangles has ${\tfrac {1}{8}}$ the area of a yellow triangle, summing to ${\tfrac {2^{3}}{8^{3}}}={\tfrac {1}{64}}$ the area of the blue triangle; etc. Using the method of exhaustion, it follows that the total area of the parabolic segment is given by
${\text{Area}}\;=\;T\,+\,{\frac {1}{4}}T\,+\,{\frac {1}{4^{2}}}T\,+\,{\frac {1}{4^{3}}}T\,+\,\cdots .$
Here T represents the area of the large blue triangle, the second term represents the total area of the two green triangles, the third term represents the total area of the four yellow triangles, and so forth. This simplifies to give
${\text{Area}}\;=\;\left(1\,+\,{\frac {1}{4}}\,+\,{\frac {1}{16}}\,+\,{\frac {1}{64}}\,+\,\cdots \right)T.$
Sum of the series
To complete the proof, Archimedes shows that
$1\,+\,{\frac {1}{4}}\,+\,{\frac {1}{16}}\,+\,{\frac {1}{64}}\,+\,\cdots \;=\;{\frac {4}{3}}.$
The formula above is a geometric series—each successive term is one fourth of the previous term. In modern mathematics, that formula is a special case of the sum formula for a geometric series.
Archimedes evaluates the sum using an entirely geometric method,[8] illustrated in the adjacent picture. This picture shows a unit square which has been dissected into an infinity of smaller squares. Each successive purple square has one fourth the area of the previous square, with the total purple area being the sum
${\frac {1}{4}}\,+\,{\frac {1}{16}}\,+\,{\frac {1}{64}}\,+\,\cdots .$
However, the purple squares are congruent to either set of yellow squares, and so cover ${\tfrac {1}{3}}$ of the area of the unit square. It follows that the series above sums to ${\tfrac {4}{3}}$ (since $1+{\tfrac {1}{3}}={\tfrac {4}{3}}$).
See also
Wikimedia Commons has media related to Quadrature of the Parabola.
• History of calculus
Notes
1. Swain, Gordon; Dence, Thomas (1998). "Archimedes' Quadrature of the Parabola Revisited". Mathematics Magazine. 71 (2): 123–130. doi:10.2307/2691014. ISSN 0025-570X. JSTOR 2691014.
2. Cusick, Larry W. (2008). "Archimedean Quadrature Redux". Mathematics Magazine. 81 (2): 83–95. doi:10.1080/0025570X.2008.11953535. ISSN 0025-570X. JSTOR 27643090. S2CID 126360876.
3. Towne, R. (2018). "Archimedes in the Classroom". Master's Thesis. John Carroll University.
4. "Quadrature of the parabola, Introduction". web.calstatela.edu. Retrieved 2021-07-03.
5. "The Illustrated Method of Archimedes". Scribd. Retrieved 2021-07-03.
6. Dijksterhuis, E. J. (1987). "Quadrature of the Parabola". Archimedes. pp. 336–345.{{cite web}}: CS1 maint: url-status (link)
7. The green triangle has ${\tfrac {1}{2}}$ the width of blue triangle by construction. The statement about the height follows from the geometric properties of a parabola, and is easy to prove using modern analytic geometry.
8. Strictly speaking, Archimedes evaluates the partial sums of this series, and uses the Archimedean property to argue that the partial sums become arbitrarily close to ${\tfrac {4}{3}}$. This is logically equivalent to the modern idea of summing an infinite series.
Further reading
• Ajose, Sunday and Roger Nelsen (June 1994). "Proof without Words: Geometric Series". Mathematics Magazine. 67 (3): 230. doi:10.2307/2690617. JSTOR 2690617.
• Ancora, Luciano (2014). "Quadrature of the parabola with the square pyramidal number". Archimede. 66 (3).
• Bressoud, David M. (2006). A Radical Approach to Real Analysis (2nd ed.). Mathematical Association of America. ISBN 0-88385-747-2..
• Dijksterhuis, E.J. (1987) "Archimedes", Princeton U. Press ISBN 0-691-08421-1
• Edwards Jr., C. H. (1994). The Historical Development of the Calculus (3rd ed.). Springer. ISBN 0-387-94313-7..
• Heath, Thomas L. (2011). The Works of Archimedes (2nd ed.). CreateSpace. ISBN 978-1-4637-4473-1.
• Simmons, George F. (2007). Calculus Gems. Mathematical Association of America. ISBN 978-0-88385-561-4..
• Stein, Sherman K. (1999). Archimedes: What Did He Do Besides Cry Eureka?. Mathematical Association of America. ISBN 0-88385-718-9.
• Stillwell, John (2004). Mathematics and its History (2nd ed.). Springer. ISBN 0-387-95336-1..
• Swain, Gordon and Thomas Dence (April 1998). "Archimedes' Quadrature of the Parabola Revisited". Mathematics Magazine. 71 (2): 123–30. doi:10.2307/2691014. JSTOR 2691014.
• Wilson, Alistair Macintosh (1995). The Infinite in the Finite. Oxford University Press. ISBN 0-19-853950-9..
External links
Look up quadrature in Wiktionary, the free dictionary.
• Casselman, Bill. "Archimedes' quadrature of the parabola". Archived from the original on 2012-02-04. Full text, as translated by T.L. Heath.
• Xavier University Department of Mathematics and Computer Science. "Archimedes of Syracuse". Archived from the original on 2016-01-13.. Text of propositions 1–3 and 20–24, with commentary.
• http://planetmath.org/ArchimedesCalculus
Archimedes
Written works
• Measurement of a Circle
• The Sand Reckoner
• On the Equilibrium of Planes
• Quadrature of the Parabola
• On the Sphere and Cylinder
• On Spirals
• On Conoids and Spheroids
• On Floating Bodies
• Ostomachion
• The Method of Mechanical Theorems
• Book of Lemmas (apocryphal)
Discoveries and inventions
• Archimedean solid
• Archimedes's cattle problem
• Archimedes' principle
• Archimedes's screw
• Claw of Archimedes
Miscellaneous
• Archimedes' heat ray
• Archimedes Palimpsest
• List of things named after Archimedes
• Pseudo-Archimedes
Related people
• Euclid
• Eudoxus of Cnidus
• Apollonius of Perga
• Hero of Alexandria
• Eutocius of Ascalon
• Category
Ancient Greek mathematics
Mathematicians
(timeline)
• Anaxagoras
• Anthemius
• Archytas
• Aristaeus the Elder
• Aristarchus
• Aristotle
• Apollonius
• Archimedes
• Autolycus
• Bion
• Bryson
• Callippus
• Carpus
• Chrysippus
• Cleomedes
• Conon
• Ctesibius
• Democritus
• Dicaearchus
• Diocles
• Diophantus
• Dinostratus
• Dionysodorus
• Domninus
• Eratosthenes
• Eudemus
• Euclid
• Eudoxus
• Eutocius
• Geminus
• Heliodorus
• Heron
• Hipparchus
• Hippasus
• Hippias
• Hippocrates
• Hypatia
• Hypsicles
• Isidore of Miletus
• Leon
• Marinus
• Menaechmus
• Menelaus
• Metrodorus
• Nicomachus
• Nicomedes
• Nicoteles
• Oenopides
• Pappus
• Perseus
• Philolaus
• Philon
• Philonides
• Plato
• Porphyry
• Posidonius
• Proclus
• Ptolemy
• Pythagoras
• Serenus
• Simplicius
• Sosigenes
• Sporus
• Thales
• Theaetetus
• Theano
• Theodorus
• Theodosius
• Theon of Alexandria
• Theon of Smyrna
• Thymaridas
• Xenocrates
• Zeno of Elea
• Zeno of Sidon
• Zenodorus
Treatises
• Almagest
• Archimedes Palimpsest
• Arithmetica
• Conics (Apollonius)
• Catoptrics
• Data (Euclid)
• Elements (Euclid)
• Measurement of a Circle
• On Conoids and Spheroids
• On the Sizes and Distances (Aristarchus)
• On Sizes and Distances (Hipparchus)
• On the Moving Sphere (Autolycus)
• Optics (Euclid)
• On Spirals
• On the Sphere and Cylinder
• Ostomachion
• Planisphaerium
• Sphaerics
• The Quadrature of the Parabola
• The Sand Reckoner
Problems
• Constructible numbers
• Angle trisection
• Doubling the cube
• Squaring the circle
• Problem of Apollonius
Concepts
and definitions
• Angle
• Central
• Inscribed
• Axiomatic system
• Axiom
• Chord
• Circles of Apollonius
• Apollonian circles
• Apollonian gasket
• Circumscribed circle
• Commensurability
• Diophantine equation
• Doctrine of proportionality
• Euclidean geometry
• Golden ratio
• Greek numerals
• Incircle and excircles of a triangle
• Method of exhaustion
• Parallel postulate
• Platonic solid
• Lune of Hippocrates
• Quadratrix of Hippias
• Regular polygon
• Straightedge and compass construction
• Triangle center
Results
In Elements
• Angle bisector theorem
• Exterior angle theorem
• Euclidean algorithm
• Euclid's theorem
• Geometric mean theorem
• Greek geometric algebra
• Hinge theorem
• Inscribed angle theorem
• Intercept theorem
• Intersecting chords theorem
• Intersecting secants theorem
• Law of cosines
• Pons asinorum
• Pythagorean theorem
• Tangent-secant theorem
• Thales's theorem
• Theorem of the gnomon
Apollonius
• Apollonius's theorem
Other
• Aristarchus's inequality
• Crossbar theorem
• Heron's formula
• Irrational numbers
• Law of sines
• Menelaus's theorem
• Pappus's area theorem
• Problem II.8 of Arithmetica
• Ptolemy's inequality
• Ptolemy's table of chords
• Ptolemy's theorem
• Spiral of Theodorus
Centers
• Cyrene
• Mouseion of Alexandria
• Platonic Academy
Related
• Ancient Greek astronomy
• Attic numerals
• Greek numerals
• Latin translations of the 12th century
• Non-Euclidean geometry
• Philosophy of mathematics
• Neusis construction
History of
• A History of Greek Mathematics
• by Thomas Heath
• algebra
• timeline
• arithmetic
• timeline
• calculus
• timeline
• geometry
• timeline
• logic
• timeline
• mathematics
• timeline
• numbers
• prehistoric counting
• numeral systems
• list
Other cultures
• Arabian/Islamic
• Babylonian
• Chinese
• Egyptian
• Incan
• Indian
• Japanese
Ancient Greece portal • Mathematics portal
| Wikipedia |
\begin{document}
\title{Distillation protocols: Output entanglement and local mutual information}
\author{ Micha{\l} Horodecki\(^1\), Jonathan Oppenheim\(^{1,2}\), Aditi Sen(De)\(^3\), and Ujjwal Sen\(^3\) }
\affiliation{$^{1}$Institute of Theoretical Physics and Astrophysics, University of Gda\'nsk, 80-952 Gda\'nsk, Poland}
\affiliation{$^{2}$Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, U.K.}
\affiliation{\(^{3}\)Institut f\"ur Theoretische Physik, Universit\"at Hannover, D-30167 Hannover, Germany}
\begin{abstract}
A complementary behavior between local mutual information and average output entanglement is derived for arbitrary bipartite ensembles. This leads to bounds on the yield of entanglement in distillation protocols that involve disinguishing. This bound is saturated in the hashing protocol for distillation, for Bell-diagonal states. \end{abstract}
\maketitle
\newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{theorem}{Theorem}
\font\Bbb =msbm10 \font\eufm =eufm10 \def{\hbox{\Bbb R}}} \def\C{{\hbox {\Bbb C}}{{\hbox{\Bbb R}}} \def\C{{\hbox {\Bbb C}}} \defR_{\alpha\beta}{R_{\alpha\beta}} \def{\hbox{\Bbb I}}{{\hbox{\Bbb I}}} \def{\bf k}{{\bf k}} \def{\bf l}{{\bf l}} \def<\kern-.7mm<{<\kern-.7mm<} \def>\kern-.7mm>{>\kern-.7mm>} \def\mathop{\int}\limits{\mathop{\int}\limits} \def\textbf#1{{\bf #1}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\end{array}{\end{array}} \def\begin{array}{\begin{array}} \newcommand{\begin{itemize}}{\begin{itemize}} \newcommand{\end{itemize}}{\end{itemize}} \newcommand{\begin{enumerate}}{\begin{enumerate}} \newcommand{\end{enumerate}}{\end{enumerate}}
\def\rangle{\rangle} \def\langle{\langle} \def\vrule height 4pt width 3pt depth2pt{\vrule height 4pt width 3pt depth2pt} \def{\cal L}{{\cal L}} \def{\cal D}{{\cal D}} \def{\cal P}{{\cal P}} \def{\cal H}{{\cal H}} \def\mbox{Tr}{\mbox{Tr}} \def\dt#1{{{\kern -.0mm\rm d}}#1\,}
\def{\rm Tr}{{\rm Tr}} \def{\rm I}{{\rm I}} \def\rangle{\rangle} \def\langle{\langle} \def\rangle{\rangle} \def\langle{\langle} \def\vrule height 4pt width 3pt depth2pt{\vrule height 4pt width 3pt depth2pt} \defI_{coh}{I_{coh}} \def\otimes{\otimes} \def\varrho_{AB}{\varrho_{AB}} \def\sigma_{AB}{\sigma_{AB}} \def\varrho_{A}{\varrho_{A}} \def\sigma_{A}{\sigma_{A}} \def\varrho_{B}{\varrho_{B}} \def\sigma_{B}{\sigma_{B}} \defprivate distribution {private distribution }
\defp_{a,b \ldots (n)}{p_{a,b \ldots (n)}}
\emph{Introduction.}-- Distillation of entanglement \cite{IBMdistillation,IBMhuge} is a key issue in attaining nonclassical tasks in quantum communication protocols \cite{NC}. In a typical communication protocol, entanglement must be shared between distant partners (Alice and Bob). Since channels are invariably noisy, the partners usually end up with mixed state entanglement, which must then be distilled into pure form via local operations and classical communication (LOCC), to make them amenable to the envisaged quantum communication protocol.
The aim of this paper is two-fold. We obtain an upper bound on local mutual information, \(I^{LOCC}\), of arbitrary bipartite ensembles. We then use this bound to provide bounds on the yield of entanglement in any distillation protocol, that use local distinguishing of ensembles of states.
The obtained bounds are then compared with the yield in the existing distillation protocols (e.g. \cite{IBMdistillation, IBMhuge, Werner-Wolf-hashing}) and similar generalizations thereof, and also in some other cases, in which the distillation is based on a distinguishability protocol \cite{Walgate-twoent,ebar-thekey}. As a spin-off, we obtain a complementarity relation between local mutual information and average output entanglement.
\emph{Generalized universal Holevo-like upper bound on local mutual information.}-- To begin, we obtain a generalized Holevo-like bound on local mutual information for arbitrary bipartite ensembles. Suppose then that a source prepares the ensemble \({\cal R} = \{p_x, \varrho_x^{AB}\}\) and sends the \(A\) part to Alice and the \(B\) part to Bob. The task of Alice and Bob is to estimate the identity \(x\) of the sent state. If Alice and Bob are together, so that they are allowed to perform global operations, the mutual information is bounded by the Holevo quantity \cite{Holevo}, \(\chi_{\cal R} = S(\varrho)- \sum_x p_x S(\varrho_x)\),
where \(\varrho\) is the average ensemble state \(\sum_x p_x \varrho_x\). \(S(\cdot)\) is the von Neumann entropy and is defined for a state \(\varrho\) as \(S(\varrho) = -\mbox{tr} \varrho \log_2 \varrho\). We will however
need the following result \cite{Schumacher,iacc}, which is a generalization of the Holevo bound on mutual information.
\begin{lemma} \label{important} If a measurement on ensemble $Q=\left\{ p_{x},\varrho_{x}\right\}$ produces result $y$ with probability \(p_y\), and leaves a post-measurement ensemble
$Q^{y}=\left\{ p_{x|y},\varrho_{x|y}\right\} $, then the mutual information \(I\) (between the identity of state in the ensemble and measurement outcome) extracted from the measurement has the following bound: \begin{equation} I\leq\chi_{Q}-\overline{\chi}_{{Q}^{y}}. \label{prothhom} \end{equation} Here $\overline{\chi}_{{Q}^{y}}$ is the average Holevo bound for the possible post-measurement ensembles, i.e. \(\sum_y p_y \chi_{{Q}^{y}}\). \end{lemma}
Suppose now that Alice and Bob are far apart, so that they are able to perform only local operations and communicate classically between the operations.
In this scenario, universal Holevo-like upper bound on local mutual information for an arbitrary bipartite ensemble \(\{p_x,\varrho_x^{AB}\}\) was obtained in \cite{iacc}: \begin{eqnarray} \label{sei} I^{LOCC} \leq S(\varrho^A) + S(\varrho^B) - \max_{Z=A,B}\sum_x p_x S(\varrho^Z_x). \end{eqnarray} Here \(\varrho_x^{A(B)} = \mbox{tr}_{B(A)} (\varrho_x^{AB})\), and \(\varrho^{A(B)} = \mbox{tr}_{B(A)}\sum_xp_x\varrho_x^{AB}\).
In this paper, we will prove a generalization of this bound. Precisely, we show that
\begin{eqnarray} \label{asol} I^{LOCC}
\leq S(\varrho^A) + S(\varrho^B) - \sum_x p_x S(\varrho^B_x) \nonumber \\
- \sum_{a,b, \ldots, (n)} p_{a,b \ldots (n)}
S\left(\sum_x p_{x|ab\ldots(n)} \varrho^A_{x|ab \ldots (n)}\right).
\end{eqnarray}
Here \(\{p_{x|ab\ldots(n)}, \varrho^{AB}_{x|ab \ldots (n)}\}\) is the post-measurement ensemble obtained after the measurement in the \(n\)th step, and \(p_{a,b \ldots (n)}\)
is the probability of the sequence of measurement outcomes in steps 1, 2, \(\ldots\), \(n\). Our generalization in (\ref{asol}) is related to the previous bound in (\ref{sei}), in a similar way as Lemma \ref{important} is related to the original Holevo bound.
We will now prove the inequality in (\ref{asol}). To start the protocol for obtaining the identity \(x\) of the given ensemble \({\cal R} = \{p_x, \varrho_x^{AB}\}\),
Alice makes a measurement \cite{first}, and suppose that she obtains an outcome \(a\), with probability \(p_a\). Suppose that the post-measurement ensemble (for outcome \(a\) at Alice) is \({\cal R}_a = \{p_{x|a}, \varrho_{x|a}^{AB}\}\).
The results presented in this paper are in terms of mutual information, which when maximized over all measurement strategies gives the ``accessible information''. All the results are of course true for the extreme case of the best measurement strategy (for attaining maximal mutual information), but are true also for any other nonextreme measurement strategy. The mutual information gathered from the measurement of Alice has the following bound due to Lemma \ref{important}: \( I_1^A \leq \chi_{{\cal R}^A} - \overline{\chi}_{{\cal R}^A_a}
\). Here \(\chi_{{\cal R}^A}\) is the Holevo quantity of the \(A\) part of the ensemble \({\cal R}\), i.e. of the ensemble \({\cal R}^A = \{p_x, \varrho_x^A\}\). And \(\chi_{{\cal R}^A_a}\) is the Holevo quantity of the \(A\) part of the ensemble \({\cal R}_a\).
The subscript \(1\) in \(I_1^A\) indicates that the information is extracted from the first measurement.
After Alice communicates her result to Bob, his ensemble is
\({\cal R}_a^B = \{p_{x|a}, \varrho_{x|a}^{B}\}\), with \(\varrho_x^B = \mbox{tr}_A (\varrho_x^{AB})\).
Suppose now that Bob performs a measurement and obtains outcome \(b\) with probability \(p_b\), so that the post-measurement ensemble (at his part) is \({\cal R}^B_{ab} = \{p_{x|ab}, \varrho^B_{x|ab}\}\), where
\(\varrho^B_{x|ab} = \mbox{tr}_A\left(\varrho^{AB}_{x|ab}\right)\). So (again due to Lemma \ref{important}), the information extracted in Bob's measurement has the following bound: \( I_2^B \leq \overline{\chi}_{{\cal R}^B_a} - \overline{\chi}_{{\cal R}^B_{ab}}
\).
This procedure of measuring and communicating the result goes on for an arbitrary number of steps, and by the chain rule for mutual information (see e.g. \cite{CoverThomas}), the mutual information obtained in all steps is \( I^{LOCC} = I^A_1 + I^B_2 + I^A_3 + \ldots \).
Note that this quantity depends on the measurement strategy followed by Alice and Bob.
Now we (repeatedly) use the following facts:
(i) The von Neumann entropy is concave (i.e. \(S(p_1\varrho_1 + p_2 \varrho_2) \geq p_1S(\varrho_1) + p_2 S(\varrho_2)\), for arbitrary density matrices \(\varrho_1\) and \(\varrho_2\), and probabilities \(p_1\) and \(p_2\)), and positive.
(ii)
A measurement on one subsystem cannot change the state at a distant subsystem.
(iii)
The average change (initial minus final) of von Neumann entropy due to a measurement on one subsystem cannot be less than the average change in a distant subsystem. So for example, after the first measurement by Alice, we have
\(\sum_xp_xS(\varrho_x^A) - \sum_a p_a \sum_x p_{x|a} S(\varrho_{x|a}^A)
\geq \sum_xp_xS(\varrho_x^B) - \sum_a p_a \sum_x p_{x|a} S(\varrho_{x|a}^B)\).
(iv)
The Holevo quantity is positive.
Then after \(n\) steps of measurements, we obtain the inequality (\ref{asol}).
We have assumed that the last measurement is performed by Alice.
The last term of the bound (\ref{asol}) is a contribution from this last measurement by Alice. We will see below that the final result is free from this asymmetry. Moreover, for the same measurements, but using the above items (i)-(iv) in a different way, one can reach the inequality (\ref{asol}), but with \(A\) and \(B\) interchanged, i.e., we also have \begin{eqnarray} \label{asol25} I^{LOCC}
\leq S(\varrho^A) + S(\varrho^B) - \sum_x p_x S(\varrho^A_x) \nonumber \\
- \sum_{a,b, \ldots, (n-1)} p_{ab\ldots (n-1)}
S\left(\sum_x p_{x|ab\ldots(n-1)} \varrho^B_{x|ab \ldots (n-1)}\right).
\end{eqnarray}
Note that now the last term is a contribution from the next to last measurement, which (due to the assumption that Alice performed the last measurement) is performed by Bob. Inequalities (\ref{asol}) and (\ref{asol25}) give us upper bounds on local mutual information, for \emph{arbitrary} bipartite ensembles. These inequalities are true for any measurement strategy of Alice and Bob. In particular, they are true for the one which maximizes \(I^{LOCC}\). This is then the so-called locally accessible information (\(I_{acc}^{LOCC}\)).
The last terms in the bounds on local mutual information in inequalities (\ref{asol}) and (\ref{asol25}) respectively are negative quantities, due to the positivity of von Neumann entropy. Leaving it out, we have the inequality (\ref{sei}).
\emph{Input and output entanglements.}-- We now try to write the bounds on local mutual information in (\ref{asol}) and (\ref{asol25}) in a more revealing form. To that end, note that the von Neumann entropy of either of the the local density matrices of a bipartite state
is no smaller than the entanglement of formation \cite{IBMhuge}, and the entanglement of formation is a lower bound for any asymptotically consistent measure of bipartite entanglement \cite{HHHDonald}.
Then, the last term in the upper bound of Eq. (\ref{asol25}) is
\(
\leq
- \sum\limits_{a,b, \ldots, (n-1)} p_{ab\ldots (n-1)}
E\left(\sum_x p_{x|ab \ldots (n-1)} \varrho_{x|ab \ldots (n-1)}^{AB}\right) \),
which in turn
(by the fact that entanglement cannot increase (on average) under LOCC)
is no greater than \begin{eqnarray} \label{char}
- \sum_{a,b,\ldots,(n)} p_{a,b \ldots (n)}
E\left(\sum_x p_{x|ab\ldots (n)}
\varrho_{x|ab\ldots (n)}^{AB}\right),
\end{eqnarray}
where \(E\) denotes any asymptotically consistent measure of bipartite entanglement. The last term of (\ref{asol}) is directly \(\leq\) the right-hand-side (rhs) of (\ref{char}), by the fact that the von Neumann entropy of local density matrix is \(\geq\) any asymptotic entanglement measure. The rhs of (\ref{char}) (without the minus sign) is just the average entanglement that we obtain at the output in the \(n\) step local measurement protocol between Alice and Bob. We denote it by \(\overline{E}_{out}\). Note that from here on, the results are independent of whether it was Alice or Bob who ended the protocol.
Refering back to the inequalities (\ref{asol}) and (\ref{asol25}), we have \begin{eqnarray} \label{asol1} I^{LOCC} \leq S(\varrho^A) + S(\varrho^B) - \max_{Z=A,B}\sum_x p_x S(\varrho^Z_x) - \overline{E}_{out}. \nonumber \\ \end{eqnarray}
It is possible to write Eq. (\ref{asol}) in an even more revealing way. Note that \(S(\varrho^A) + S(\varrho^B) \leq N\), where \(N\) is the number of qubits (two-dimensional quantum systems) in the Alice-Bob system. I.e. \(N = \log_2 d_A d_B\), where \(d_A\) and \(d_B\) are respectively the dimensions of the Hilbert spaces of Alice's and Bob's particles. Moreover, we have \(S(\varrho^B_x) \geq {\cal E}(\varrho_x^{AB})\),
where again \({\cal E}\) denotes any asymptotically consistent measure of bipartite entanglement \cite{IBMhuge, HHHDonald}. The quantity \(\sum_x p_x {\cal E}(\varrho_x^{AB})\) is the average input (initial) entanglement in the Alice-Bob system. We denote it by \(\overline{{\cal E}}_{in}\). We use a separate notation for the asymptotic entanglement measure for the input states than that in the output states, to underline the fact that they can be different measures. It is known that there exist several asymptotically consistent measures of bipartite entanglement (see \cite{michalQIC}). We will come back to this point later. So finally we have \begin{eqnarray} \label{ghyama} I^{LOCC} \leq N - \overline{{\cal E}}_{in} - \overline{E}_{out}. \end{eqnarray} Eq. (\ref{ghyama}) can also be obtained from Eq. (\ref{asol25}), with the additional assumption of monotonicity under LOCC of \(E\). Before connecting above bounds on local mutual information with entanglement distilled in distillation protocols, let us note some interesting features of these inequalities.
\emph{Complementarity between extracted and unused information.}-- One way of interpreting the result in Eq. (\ref{ghyama}) is to note that the terms \(I^{LOCC}\) and \(\overline{E}_{out}\) depend on the measurement protocol followed by Alice and Bob. The other two terms (\(N\) and \(\overline{{\cal E}}_{in}\)) are fixed for a given ensemble. So, writing the inequality as
\(I^{LOCC} + \overline{E}_{out} \leq N - \overline{{\cal E}}_{in}\),
we see that the left hand side can be interpreted as a sum of ``extracted information'' (\(I^{LOCC}\)) and ``unused information'' (\(\overline{E}_{out}\)). Independently (i.e. considered separately), the extracted and unused informations depend on the measurement strategy followed by Alice and Bob. However for all strategies, the sum of the extracted and unused informations is bounded by \(N - \overline{{\cal E}}_{in}\).
\emph{On bound entanglement with nonpositive partial transpose.}-- Another interesting feature of the inequality (\ref{ghyama}) is that
the entanglement measures \(E\) and \({\cal E}\) need not be the same measures.
They must only be be no greater than the von Neumann entropy of either of the local density matrices. In particular, any asymptotically consistent measure of bipartite entanglement satisfy such conditions (see \cite{michalQIC}). This may have nontrivial consequences. For example, we may require that \({\cal E}\) must be a convex function, and keep \(E\) to be such that it need not necessarily be convex \cite{convex}. The only entanglement measure for which there is some evidence for nonconvexity is for distillable entanglement \cite{IBMhuge}, and this is related to the phenomenon of bound entanglement \cite{HHHbound}. Precisely, it was shown in Ref. \cite{Shor} that distillable entanglement can be proven to be nonconvex, if there exists a certain bound entangled state \cite{NPTbound}, having nonpositive partial transpose (NPT) \cite{PPT}. Bound entanglement, and more particularly NPT bound entanglement
is not a well understood phenomenon of quantum mechanics. We believe that the inequality (\ref{ghyama}), may have important consequences for NPT bound entangled states. The point that we make here is also to be seen with respect to the fact that, below we actually relate the output entanglement \(E_{out}\) to entanglement distilled in different distillation protocols, and bound entanglement is precisely that entanglement which cannot be distilled.
\emph{Bound on entanglement distillable via protocols correcting all errors.}-- We will now consider distillation protocols based on full distinction between the possible pure states in a decomposition of \(m\) copies a bipartite state \(\rho\). Suppose therefore that Alice and Bob share \(m\) copies of the state $\rho$ given by \begin{equation}
\rho=\sum_i p_i |\psi_i\rangle\langle\psi_i|.
\end{equation} where $|\psi_i\rangle$ are eigenvectors of \(\rho\). Alice and Bob can imagine that they actually share some string of the form $\psi_{i_1}\otimes \ldots \otimes\psi_{i_m}$. Now we propose the following strategy for distillation. Alice and Bob try to fully distinguish between all strings. I.e. they apply some LOCC operation, that tells them what is the string that they share. Usually during such distinguishing, they destroy the string to some degree. For example, the protocol of distinguishing two pure orthogonal states given in \cite{Walgate-twoent}, destroys the states completely. Yet in the hashing protocol for distilling entanglement, Alice and Bob are able to distinguish strings without destroying all entanglement they share \cite{IBMhuge}.
In the case of full distinguishing (in some distillation protocol \(P\)), the accessible information is $mS(\rho)$. The initial entanglement per input pair is equal to $\overline S_A\equiv \sum_ip_i S(\rho^A_i)$, where $\rho_i^A$ is the local density matrix of
$|\psi_i\rangle$. Since we have full distinguishing, the final entanglement
is pure entanglement, so that it can be converted reversibly by LOCC, into singlets \(|\psi^-\rangle
= \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)\) \cite{BBPS1996}. Thus the output entanglement is the entanglement \(D_P\) that has been distilled in such protocol $P$. Using the inequality (\ref{asol1}) we have then \begin{equation} S \leq S_A +S_B - \overline S_A - D_P, \end{equation} where for ease of notation, we have used the notations \(S \equiv S(\rho)\), \(S_A \equiv S({\rm Tr}_B \rho)\),
and \(S_B \equiv S({\rm Tr}_A \rho)\). This gives \begin{equation} \label{Sagor29} D_P\leq S_A + S_B - S -\overline S_A.
\end{equation} Note that since \(|\psi_i\rangle\) are pure,
\(\overline S_A = \sum_i p_i S({\rm Tr}_B|\psi_i\rangle\langle\psi_i|)
= \sum_i p_i S({\rm Tr}_A|\psi_i\rangle\langle\psi_i|) = \overline S_B \). So the last term in the above inequality (\ref{Sagor29}) can be replaced by \(\overline S_B \). For the case of Bell diagonal states (i.e. states that are diagonal in the canonical maximally entangled basis \cite{Bell-diagonal}), we have $S_A= S_B =\overline S_A = \log_2 d$ so that in that case, inequality (\ref{Sagor29}) gives us \begin{equation} D_P (\rho) \leq \log_2 d -S(\rho). \end{equation} This result is compatible with the fact that the quantity $\log_2 d-S(\rho)$ can be attained by hashing methods that reveal all errors \cite{IBMhuge,Werner-Wolf-hashing}.
It is also instructive to consider a hypothetical protocol, in which Alice and Bob would divide their \(m\) systems into two groups $G_1$ and $G_2$ of length $m_1$ and $m - m_1$ respectively. Now by applying some LOCC actions, Alice and Bob would aim to get to know the identities of the states of systems from $G_1$, while $G_2$ would serve as a resource to do this and would be destroyed during protocol. The protocol differs from the previous one, as in the present case, Alice and Bob does not aim to distinguish between states of systems from this latter group.
Suppose now that such a protocol (\(P'\)) exists. Then the output entanglement is $m_1 \overline S_A$, the input one is $m \overline S_A$, while the mutual information is equal to $m_1S(\rho)$. The entanglement $D_{P'}$ distillable in this protocol is therefore equal to the output entanglement divided by $m$:
\[D_{P'}= \frac{m_1 \overline S_A}{m}.\] We obtain the following constraint for $r\equiv {m_1\over m}$: \begin{equation} r\leq {S_A+S_B - \overline S_A \over S + \overline S_A} \end{equation} which finally leads to \begin{equation} D_{P'}\leq {S_A+S_B - \overline S_A \over S + \overline S_A} \overline S_A. \end{equation} (We remember that \(\overline S_A = \overline S_B\).) For Bell diagonal states it gives the following bound: \begin{equation} \label{Mana-di} D_{P'}(\rho)\leq {(\log_2d)^2 \over \log_2d + S(\rho)} \end{equation} (For Bell diagonal states in \(2 \otimes 2\), this reduces to \(D_{P'}(\rho)\leq {1 \over 1 + S(\rho)} \).) The bound is always nonzero, even for separable states. This means that the inequality (\ref{asol1}) is not the only restriction on local mutual information in this complicated situation. This is however not surprising, as in the considered protocol, we assumed that using a part of the string, we can get the whole information about the rest of the string, but nothing about the used part. What one expects is that at the some point, one perhaps would also gain some information about the used part. Note here that the bound in (\ref{Mana-di}) is for those distillation protocols in which one bases on a distinguishing protocol.
\emph{Conclusions.}-- We have shown that it is possible to obtain bounds on the yield in distillation protocols, basing on distinguishability, of bipartite states,
from a complementarity connecting local mutual information with average output entanglement, for the case of bipartite ensembles. For Bell-diagonal states, saturation of this bound is obtained in the hashing protocol for distillation. It is consistent with results of \cite{shor-smolin}, where to beat hashing bound, degenerate codes were applied. Whether any distillation protocol is a distinguishing process remains an open question.
\emph{Note added.}-- After completion of our work, we came to know of the recent related work in Ref. \cite{India}.
MH is supported by the Polish Ministry of Scientific Research and Information Technology under the (solicited) grant No. PBZ-MIN-008/P03/2003 and by EC grants RESQ and QUPRODIS.
JO is supported by EC grant PROSECCO.
AS and US acknowledge support from the Alexander von Humboldt Foundation.
\end{document} | arXiv |
Does the edge effect impact on the measure of spatial accessibility to healthcare providers?
Fei Gao1,3,5,
Wahida Kihal2,
Nolwenn Le Meur1,3,5,
Marc Souris4 &
Séverine Deguen1,6
Spatial accessibility indices are increasingly applied when investigating inequalities in health. Although most studies are making mentions of potential errors caused by the edge effect, many acknowledge having neglected to consider this concern by establishing spatial analyses within a finite region, settling for hypothesizing that accessibility to facilities will be under-reported. Our study seeks to assess the effect of edge on the accuracy of defining healthcare provider access by comparing healthcare provider accessibility accounting or not for the edge effect, in a real-world application.
This study was carried out in the department of Nord, France. The statistical unit we use is the French census block known as 'IRIS' (Ilot Regroupé pour l'Information Statistique), defined by the National Institute of Statistics and Economic Studies. The geographical accessibility indicator used is the "Index of Spatial Accessibility" (ISA), based on the E2SFCA algorithm. We calculated ISA for the pregnant women population by selecting three types of healthcare providers: general practitioners, gynecologists and midwives. We compared ISA variation when accounting or not edge effect in urban and rural zones. The GIS method was then employed to determine global and local autocorrelation. Lastly, we compared the relationship between socioeconomic distress index and ISA, when accounting or not for the edge effect, to fully evaluate its impact.
The results revealed that on average ISA when offer and demand beyond the boundary were included is slightly below ISA when not accounting for the edge effect, and we found that the IRIS value was more likely to deteriorate than improve. Moreover, edge effect impact can vary widely by health provider type. There is greater variability within the rural IRIS group than within the urban IRIS group. We found a positive correlation between socioeconomic distress variables and composite ISA. Spatial analysis results (such as Moran's spatial autocorrelation index and local indicators of spatial autocorrelation) are not really impacted.
Our research has revealed minor accessibility variation when edge effect has been considered in a French context. No general statement can be set up because intensity of impact varies according to healthcare provider type, territorial organization and methodology used to measure the accessibility to healthcare. Additional researches are required in order to distinguish what findings are specific to a territory and others common to different countries. It constitute a promising direction to determine more precisely healthcare shortage areas and then to fight against social health inequalities.
Equitable distribution of health resources is a key priority for health professionals and policy makers worldwide; reducing health inequalities has long been of concern to community and public health planners [1,2,3,4]. Access to healthcare, as one potential driver of health inequalities, is at the heart of public health policy and is internationally recognized as a key goal in meeting the essential health needs of individuals [5,6,7,8].
Access to healthcare varies across space due to the uneven distribution of both healthcare providers and consumers, and the impact of geographical location on health is increasingly being examined. Various studies in Europe (including France) have shown unequal distribution of health service resources [9]. With heightened interest in residential neighborhood the characteristics that could influence health behaviors and outcomes, spatial accessibility and availability indices are being used in epidemiological studies more and more [10,11,12,13,14,15]. As a measure for determining those areas having inadequate levels of health service provision, spatial accessibility of health services refers to relative access to health services in a given location, which is influenced primarily by travel distance (or travel time) and the spatial distribution of health service providers and consumers [16,17,18]. Most studies examining the geographical accessibility of healthcare and health-related services have suggested a growing range of indices, including Physician Population Ratio, nearest distance, shortest time, cumulative opportunity and the gravity model [5, 19,20,21,22,23,24,25,26]. Recent methodological developments in this field have emerged in international research, including Enhanced 2-Step Floating Catchment Area method (E2SFCA) [27], which provides a summary measure of two important and related components of access: volume of services provided relative to population size, and proximity of services provided relative to population location.
In addition, one methodological limitation often mentioned in research considering accessibility concerns the fact that studies failed to include behavior outside the study area [17, 28,29,30,31,32,33,34,35,36,37,38,39]. Known as the edge effect, it is central to this paper. Edge effect occurs "when the study area is defined by a border which does not actually prevent travel across the border" [40] and people are free to travel beyond that border to receive healthcare goods and services. Arbitrary administrative boundaries (such as census tracts or block groups) are often used without consideration that resources beyond a given boundary are likely to affect behaviors within a given spatial unit [35]. This means that any geographic distribution or spatial interaction occurring within the spatial unit may extend beyond its boundaries [30]. More precisely, edge effects manifest when the boundaries of the study area affect a given spatial measurement and lead to the distortion of estimates [35, 41]. Interestingly, although most studies do mention potential errors caused by the edge effect, many acknowledge their mistake in neglecting to consider this in the spatial analyses they have undertaken within a finite region [42]. Because this can result in areas close to the boundary being classified as having poor geographic access even though they may in fact be proximate to resources across the boundary, many research projects have hypothesized that failure to accounting for edge effect will lead to considerable biases [34,35,36,37], even under-reporting [17, 28, 29, 31, 43] of accessibility to facilities.
Although edge effect is a well-documented phenomenon, researches choosing this issue as the main subject used for most of the time distance/travel time measure [34, 35], or availability measures such cumulative index [28, 34, 35, 38, 43]. Focusing on E2SFCA method, the edge effect is frequently observed in studies measuring the spatial accessibility to healthcare providers. More and more studies have corrected for edge effects [32, 33]. However, to the best of our knowledge, very few studies based on E2SFCA have focused on edge effect in a real-world application with a view to quantifying its effect on the accuracy of defining health service access.
In this context, our study compares health service accessibility when accounting or not for the edge effect, taking into account that patients may overcome geographical boundaries, choosing to consult health professionals in neighboring departments. The geographical accessibility indicator used to quantify spatial accessibility is the Index of spatial accessibility (ISA), based on the E2SFCA algorithm. ISA was previously developed by our team for the pregnant women population, focusing on the three types of healthcare professionals (GP, midwife and gynecologist) involved during the pregnancy [44]. Conducted in the department of Nord at French census block spatial scale, our study aimed to quantify edge effect bias using the ISA index, and investigate the impact on spatial analysis results.
Besides, it is well documented that levels of accessibility and utilization of healthcare are related with socio-economic distress level and geographical factors [45,46,47,48]. Consequently, in our study, we investigated the urban–rural disparity of ISA as well as the relationship of ISA with socioeconomic distress variables, both when offer and demand beyond the area of study are excluded or included. The underlying questions are: Would the association between socioeconomic factor and accessibility be biased by ignoring spatial interaction occurring between the spatial unit and its neighborhood? Would the difference of accessibly between urban/rural areas be accentuated?
Data and measures
Study setting and statistical unit
This study was carried out in the department of Nord, located in the north of France, close to the Belgian border. Analysis was conducted at French census block level (known as IRIS: "Ilots Regroupés pour l'Information Statistique") defined by the National Institute of Statistics and Economic Studies (INSEE) [49], which is the smallest infra-urban level for which census data is available. There are 1346 IRIS in the department of Nord.
Neighborhood characteristics
Two types of neighborhood characteristic were used at census-block level:
Degree of urbanization (rural/urban) Each IRIS was classified as urban or rural according to the classification established by the national census bureau. These data are openly available from (https://www.insee.fr/fr/information/2017499) [49].
Level of socioeconomic distress According to previous work on social health inequality [50], we selected five variables from the 2006 French census (https://www.insee.fr/fr/information/2017499) [49] to characterize the neighborhood socioeconomic level: low level of educational attainment, women's unemployment rate, single parent families, non-homeowner, and insecure employment situation (see variables definition in "Appendix I").
The postal addresses of GPs, midwives and gynecologists were obtained from the French state health insurance website (http://www.ameli-sante.fr) in 2014 [51]. To assess the edge effect, we considered the health professional offer both within and outside of the department of Nord. Service providers were represented by their geocoded professional addresses (latitude, longitude), obtained through Batch Geocoder (http://dehaese.free.fr/Gmaps/testGeocoder.htm). Eight general practitioners and one obstetrical gynecologist were excluded from the analysis due to low quality of professional postal addresses. No georeferencing quality difference was detected between adjacent department and Nord Department. Further methodological details are available elsewhere [44].
Index of spatial accessibility (ISA)
ISA is an indicator which measures healthcare service accessibility.
The ISA is based on the E2SFCA method, a method which maintains the advantages of a gravity model while being easier to interpret, since it represents a derived form of a Physician Population Ratio. As the name suggests, two steps must be performed:
Step 1 For each provider in location k, look up all population locations of IRIS i within a catchment, and within a predefined distance d ik from location k. A distance decay function is applied within a catchment. w(d ik ) is the weight quantifying travel time between IRIS i and healthcare provider k. Sum up all population sizes (Pi) within that catchment area to compute the provider-to-population ratio (R k ):
$$R_{k} = \frac{1}{{\mathop \sum \nolimits_{{d_{jk} < d_{max} }} P_{i} *w\left( {d_{ik} } \right)}}$$
Step 2 For each population location i, look up all provider locations k that are within the catchment from location i. Sum up all R k for the catchment area to calculate the Index of spatial accessibility (ISA i ) at location i:
$$ISA_{i} = \mathop \sum \limits_{{{\text{d}}_{\text{ij}} \le {\text{d}}_{ \text{max} } }} w\left( {d_{ij} } \right)R_{k}$$
ISA takes into account:
The latitude and longitude of each healthcare professional.
The centroids of residential buildings for each IRIS (Residential buildings came from BD TOPO® and was provided by the Institut National de l'Information Géographique et Forestière (French National Geographic Institute) [52]). And
Car travel time, calculated by Google Maps. We used the FILENAME statement and the URL access method within SAS to access Google Maps, and extracted both the driving time and distance each time the site was accessed [44, 53].
We estimated an ISA for GPs, gynecologists and midwives, separately. A composite ISA relying on principal component analysis was also calculated, describing overall accessibility of the three types of healthcare professionals. Further details of the method developed for ISA estimation are given in [44].
Decay function and travel time threshold
We defined the time threshold according to figures already published by the French Institute for research and information in health economics for general practitioners [54]:
less than 5 min' travel: fully access to healthcare providers (w = 1)
more than 15 min' travel: no access to healthcare providers (w = 0).
between 5 and 15 min: partial access to healthcare providers (w is defined by a continuous decay function [Eq. (3)] with the weighting factor equal to 1.5 [55])
$$w = \frac{{\left( {15 - d} \right)}}{{\left( {15 - 5} \right)}}e^{1.5}$$
We based the threshold of the two other healthcare professionals on general practitioners' results: the nearest travel time to general practitioner is lower than 5 min and between 5 and 15 min for 88 and 12% of the population, respectively; we used these proportions to define the threshold for two other health professionals: 15 and 34 min for gynecologists and 17 and 34 min for midwives.
Figure 1 provides an illustration of the impact of including offers and demands outside the study area defining what we call the "patient area" or catchment. This illustration deals with gynecologists only for the IRIS named "Fournes-en-Weppes" (IRIS no. 592 500 000), with keys for reading.
Definition of "patient area" when including and excluding offer and demand outside. Focus on the IRIS named "Fournes-en-Weppes"- (IRIS no. 592 500 000), the Nord department are circled in blue, whereas neighboring IRIS from the three departments of Somme, Aisne and Pas-de-Calais are yellow. a) without consideration of offer and demand beyond the boundary; b) with consideration of offer and demand beyond the boundary
Keys for reading Fig. 1 and fully understanding the principle of edge effect:
Figure 1a—study area without consideration of offer and demand beyond the boundary
All 218 gynecologists are represented by dark purple dots. The IRIS "Fournes-en-Weppes" is highlighted in fuchsia and circled in orange. We count 146 gynecologists accessible by car within 34 min of Fournes-en-Weppes, within the study area. The 1201 IRIS are highlighted in purple forms the "patient area" of the 146 gynecologists (circled in orange). Figure 1b—study area with consideration of offer and demand beyond the boundary
With edge effect, the residents of Fournes-en-Weppes could reach 181 gynecologists (an additional 35 from outside) within 34 min by car. However, they must share these with 2203 IRIS (1001 IRIS from outside). "Patient area" IRIS are colored purple.
GIS methods
We began by quantifying a global ISA spatial autocorrelation, separately with, and without, consideration of offer and demand beyond the department of Nord, based on Moran's I statistic (calculated by means of the distance matrix) [56,57,58]. Spatial autocorrelation can be defined as the coincidence of value similarity and locational similarity [59]. Positive spatial autocorrelation therefore exists where the high or low values of a random variable tend to be spatially clustered, with negative spatial autocorrelation existing where geographical areas tend to be surrounded by neighbors having highly dissimilar values. The values of the Moran's I statistic range from − 1 to + 1.
Next, a Local Indicator of Spatial Autocorrelation (LISA) was applied. More precisely, Moran's diagram was produced in order to reveal the types of spatial relationship between a geographic unit and its neighboring area.
Four types of LISA can be detected: High–High (HH): high level of ISA in both a given IRIS and in its neighbors and Low–Low (LL): low level of ISA in both a given IRIS and in its neighbors, characterizing a positive association; High–Low (HL): high level of ISA in a given IRIS, whereas its neighbors have a low level of ISA and Low–High (LH): low level of ISA in a given IRIS, whereas its neighbors have high level of ISA, characterizing a negative association.
In order to analyze ISA variations when offer and demand outside are included, the 1346 IRIS making up the Nord department are divided into three classes, named improved, unchanged and deteriorated. These classes were constructed according to the results obtained using the simple linear regression model, where Y and X correspond to the ISA estimated with and without taking into account offer and demand across the boundary, respectively (see "Appendix II").
Statistical associations
ISA's composite values when offer and demand beyond the boundary were then cross-referenced with the individual variables of socioeconomic distress mentioned in the data section. The statistical significance of the relation was tested using a simple linear regression where Y and X were the ISA index and one of the socioeconomic variables, respectively. The α-risk was set at 5%.
Strategy and the statistical analysis plan
Preliminary work was carried out to study ISA variation when offer and demand outside are excluded or included, and the spatial distribution of this variation. To quantify overall and local autocorrelation of ISA in the two cases, the GIS method was then applied. Following this, we analyzed the ISA variation for urban and rural zones, separately. Finally, we compared the relationship between the socioeconomic distress variable and ISA, to find out whether there is an impact when studying the association, both when excluding and including healthcare offer and demand outside the area of study, to account for a deficiency in analysis termed the "edge effect".
Descriptive results
When excluding healthcare providers outside the department boundary, we geolocalized 2590 GPs, 143 midwives and 218 gynecologists. In order to include offer and demand beyond outside, we added 493 GPs, 60 midwives and 78 gynecologists from the neighboring area who were capable of providing services to those residing in the department of Nord. Ignoring the offer beyond the department led to an 18% decrease in the total number of health professionals potentially available; this decrease reaches 30% when focusing on midwives (Table 1).
Table 1 Number of health professionals by medical specialty
After calculation of travel time via Google Maps, when including offer and demand beyond the boundary, "patient area" is not restricted to the 1346 IRIS of the department of Nord. In all, 1362, 2425 and 2583 IRIS in the departments of Pas-de-Calais, Oise, Somme, Aisne and Ardennes are added to the ISA calculation for GPs, midwives and gynecologists respectively (Table 1). The "average population" columns show that les IRIS neighboring have lower population density than IRIS Nord.
The descriptive statistics of the ISA when offer and demand beyond the study area are included or excluded are presented in Table 2. Mean and standard deviation are slightly below when offer and demand outside are taken into account, whichever health professionals are included. The two-means comparison is only statistically different for ISA gynecologist (p < 0.00).
Table 2 Descriptive statistics of ISA when accounting or not for the edge effect—North department
Spatial distribution of ISA at IRIS level
Spatial distributions of ISA for GPs (a), midwives (b) and gynecologists (c) considered separately, and combined in the composite index (d) when offer and demand beyond the department of Nord are included or not (Fig. 2). The maps show minor changes: ISA distributions in the two cases are fairly similar. Changes appear mainly in those IRIS located close to boundaries.
Spatial distribution of ISA when offer and demand outside are included or excluded. ISA distribution is showed for GPs (a), midwives (b) and gynecologists (c) and combined in the composite index (d). For each map, neighboring departments are colored in yellow and the department of Nord is colored using a graduated approach (according to Jenks' Natural Breaks), showing different ISA scales at IRIS level, expressed per 100,000 inhabitants. The 1362, 2425 and 2583 Neighboring IRIS added to the ISA calculation for GPs, midwives and gynecologists when edge effect included are colored purple, green and khaki respectively
Accounting for edge effect
In order to focus on ISA variation when offer and demand beyond the study area were included, we distributed the 1346 IRIS into three classes: improved, unchanged and deteriorated according to simple linear regression results (presented with more detail in "Appendix II").
Figure 3 shows that when accounting for healthcare provider source and patient needs outside the area of Nord the percentage of IRIS having decreased ISA is larger than those with increased ISA (13.15 vs. 5.50% for GPs; 29.79 vs. 15.68% for midwives and 30.46 vs. 9.88% for gynecologists). Many past researches have hypothesized that failure to accounting for edge effect will lead to considerable under-reporting of accessibility to facilities. We obtain the exact opposite findings. The composite ISA which give an overall view of accessibility to various types of health professionals is subject to a slight edge effect (25.33% deteriorated and 21.55% improved). Those IRIS too far from boundaries to be affected are colored in grey ("outside service area" in the key).
Percentage of residential IRIS having improved/unchanged/deteriorated accessibility when accounting for edge effect
It can be observed in Fig. 4 that IRIS where GPs ISA changed are mainly located close to the boundaries. Conversely, only 36 IRIS where midwives ISA are not impacted as a result of distance, and 2 IRIS for gynecologists ISA. The white zone does not mean that they are not subject to edge effect, but rather reveal the existence of a kind of "balance": people from this zone could reach more healthcare professionals beyond the department of Nord but at the same time they must share health resources with residents from neighboring departments. Their accessibility score therefore remains relatively stable.
Spatial variation of ISA when including offer and demand beyond the department of Nord. Variation is displayed for GPs (a), midwives (b) and gynecologists (c) considered separately, along with the composite index (d). All IRIS that are too far from boundaries (by car travel time) to be affected are shaded grey
When focusing on composite ISA, results reveal that all IRIS are subject to edge effect. Most of the IRIS located close to the border and in the agglomeration area (such as Roubaix, Anzin, Maubeuge and Saint-Pol-sur-Mer) saw their ISA improved. However, more IRIS have a deteriorated ISA (25.3%) than an improved ISA (21.5%).
Spatial analysis of ISA
The result of Moran's test for the composite ISA reveal significant spatial autocorrelation (I = 0.73 when offer and demand beyond the study area are included, and I = 0.74 when excluded—p = 0.0001, pseudo-significance values based on a permutation approach [56]). This means that the IRIS which have a high level of healthcare accessibility are more often located close to other IRIS having a high ISA score in the two cases than they were if this distribution were random.
Figure 5 shows the mapped results of the LISA statistics calculations. According to the results obtained from the LISA statistics, when excluding the offer and demand beyond the boundary, the 1346 IRIS are distributed as follows (Table 3): 287 HH-type (high level surrounded by high levels), 273 LL-type (low level surrounded by low levels). Despite some minor differences, we found similar distribution of LISA statistics: 277 HH-type, 264 LL-type.
LISA cluster map of composite ISA when accounting or not for the edge effect. When not accounting for the edge effect (a) and when accounting for the edge effect (b)
Table 3 Descriptive statistics of composite ISA in the IRIS types obtained by LISA statistics
Comparative analysis of urban and rural ISA variation with edge effect
Figure 6 shows the ISA variation when accounting for the edge effect and the distribution of urban IRIS and rural (hatched) IRIS. Most IRIS in the department of Nord are urban (1030 urban vs. 336 rural), concentrated around several densely-populated areas close to major cities such as Lille, Roubaix, Tourcoing and Villeneuve d'Ascq (Fig. 6). Using a 10 km buffer zone around the boundaries, we estimated that 180 rural IRIS (54% of total rural IRIS) and 304 urban IRIS (just 29% of total urban IRIS) were near the Nord Pas-de-Calais and the Nord Aisne border.
Spatial variation of ISA and the distribution of urban/rural IRIS. ISA variation is showed for GPs (a), midwives (b) and gynecologists (c) and the composite index (d). Rural IRIS are hatched. We created a 10KM buffer zone around the boundaries
Figure 7 shows the percentage of urban/rural IRIS variation separately, when offer and demand beyond the boundary were included. Overall, for ISA midwives and gynecologists, there is more variation in rural IRIS: only 16.14 and 26.25% of rural IRIS remain unchanged for ISA midwives and gynecologists respectively, compared with 48.35 and 62.82% of urban IRIS. Moreover, a sharp downward trend was observed in the rural zone; about 53.80% of rural IRIS have a deteriorated ISA midwife and gynecologist value.
Percentage of Urban/Rural IRIS having improved/unchanged/deteriorated accessibility. ISA variation with edge problem corrected. The p value is determined by Chi-square test
Spatial variation of ISA according to socioeconomic distress level
The strength of the associations between the socioeconomic distress variable and composite ISA when offer and demand beyond the boundary were included or exclude are quite similar (Table 4): the association between socioeconomic factor and accessibility is therefore not impacted when offer and demand beyond the boundary Included. All the associations are positive and statistically significant (p < 0.0001) with the exception of the level of education; the association with women's unemployment is close to reaching statistical significance. Population residing in the more deprived neighborhoods have the highest level of accessibility to healthcare providers, suggesting that there is no systematic absence of healthcare providers in impoverished areas.
Table 4 Simple linear regression between Socioeconomic variables and composite ISA when accounting or not edge effect
This work highlights the impacts of edge effect on spatial modelling of accessibility to healthcare professionals; this has been a matter of some concern to spatial analysts. Edge effect is one of the most commonly mentioned problems in studies dealing with spatial accessibility. We were interested in exploring the role of edge effect, to determine whether or not it has a relevant impact on healthcare provider accessibility in the department of Nord, using the "Index of Spatial Accessibility" previously developed by our team [44]. Our study has shown that it is difficult to reach a general conclusion. Firstly, in many published studies, authors have argued that accessibility to facilities (including healthcare providers) will lead to considerable biases [34,35,36,37], even under-reporting [17, 28, 29, 31, 43] when not accounting for the edge effect. Our work has revealed that on average, the Index of Spatial Accessibility is only slightly lower with edge effect accounted, than without. In addition, when accounting for the edge effect, our study suggests that more IRIS see their value reduced than see it improved. Indeed, when spatial analyses are not limited within a finite region, not only are facilities beyond the border disregarded, but the fact that patients from the neighboring area are also able to overcome geographical boundaries and consult a healthcare professional within the department of Nord is also ignored.
More specifically, the role of edge effect is largely linked to the method used to estimate accessibility. A range of methods exists for measurement of spatial accessibility to healthcare professionals—including Physician Population Ratio, distance/time (Euclidean, Manhattan, or network) to the nearest healthcare professional, average distance/time to a certain number of healthcare professionals, cumulative opportunity (which counts the number of opportunities that can be reached within a travel time) [22, 54] and the gravity model [23, 24]. When the accessibility indicator is based on availability or proximity (such as distance/time or cumulative opportunity) taking facilities beyond the border into account can improve the accessibility score. However, when the availability measure is weighted by population size (as our ISA indicator is), so that the volume of services available (relative to the population's size and the proximity of services available relative to the location of the population) is taken into account, it is also important to consider demand from the population on the other side of the border. The population living either side of the study border must share the healthcare supply. As a result, the impact of edge effect on this type of accessibility indicator is more subtle; variation occurs in a balanced way, and should not be subject to arbitrary conclusions.
Secondly, our study shows that depending on health professional type, edge effect impact may vary considerably. We found that changes to GPs ISA are mainly in those IRIS located close to the boundaries. One explanation is that the "patient area" of GPs is limited (≤ 15 min) [44, 54]. Moreover, GP numbers are much higher than specialist doctor numbers, leading to more homogenous distribution. Consequently, supply and demand beyond the border will not have a very significant impact. Conversely, midwife and gynecologist numbers are very limited. People may be willing to travel further/longer to access a specialist doctor. This is why almost all IRIS are impacted by distance. Yet variations in ISA values are minor, because of the 'balance' of edge effects.
Healthcare accessibility is especially vital for rural populations; a matter that has long been of concern to community and health planners [17, 31,32,33]. Typically, these populations experience restricted access to healthcare and other resources due to the spatial inequality of living in rural or impoverished areas. ISA comparisons between urban and rural zones reveal a greater variability within the group of rural IRIS than within the group of urban IRIS. This finding may be partially explained by the spatial distribution of the rural IRIS located close to the border of the study area: 54% of rural IRIS are located within ten kilometers (as the crow flies) from the frontier (as against only 29% of urban IRIS). However, the fact that a steep downward trend was observed in the rural zone when offer and demand beyond the boundary were included is both unexpected and related specifically to the distribution of healthcare providers and consumers in the department of Nord and its neighboring areas. This result should therefore be analyzed and interpreted with caution, since it is study-area dependent. One of the explanations is that the physicians' density of district Nord (436.2 per 100,000 inhabitants) is greater than its neighboring districts: 307.2 for Pas-de-Calais, 271.1 for Oise, 401.1 for Somme, 280.2 for Aisne and 288.5 for Ardennes [60]. On the other hand, in most cities in the Nord department, when edge effect is corrected, the ISA score is mainly classified as 'unchanged'—thanks to well-balanced offer and demand.
We found a positive correlation between socioeconomic distress levels and composite ISA. This finding suggests that areas of high socioeconomic distress tended to have better access than low socioeconomic distress areas. This result is not surprising, given the spatial planning of the Nord department: lower-income residents are more likely to live in urban areas in which social housing and services are concentrated. This significant association is quite similar to the result when offer and demand beyond the boundary were excluded: inclusion of offer and demand beyond the boundary did not impact the relationship between distress levels and composite ISA within our study area. These findings tend to demonstrate that the impact of edge effect is dependent on both the spatial distribution of healthcare providers and territorial organization.
Our study aims to provide additional evidence to the existing scientific literature in the field of spatial accessibility to healthcare by carrying out a detailed examination of the impact of edge effect. To our knowledge, this is the first work assessing edge effect based on algorithm E2SFCA. No research has explicitly demonstrated access differences when outside healthcare sources and patient demand are excluded or included. This study highlights the fact that there is a inaccuracy in hypothesizing that accessibility will be considerably and systematically under-reported where external healthcare providers are excluded. Indeed, our study found IRIS in which the ISA was reduced when offer and demand beyond the boundary were included. The result of this study will be useful to both health resource planners and other researchers in the public health field.
Several limitations of this study should be addressed here. Despite its relative popularity of algorithm, the E2SFCA method remained highly debated. The choice of the best decay function or the right size for catchment areas needs rigorous modeling to derive the best fitting parameters [61]. In the absence of appropriate empirical evidence, it was necessary to make a number of estimations during the definition of distance-decay function and the threshold for healthcare professionals other than general practitioners.
Another limitation is aggregation error, which arises when measuring distance from aggregated areal units to facilities, and results from the use of a single point as a proxy for the locations of individuals within the area units [5]. We have attempted to reduce aggregation error by considering the spatial distribution of the living building, since it better reflects the spatial distribution of individuals [5, 62].
In this study, we were not interested in the interaction across the border between France and Belgium. Even though the European Health Insurance Card (EHIC) gives the right to access state-provided healthcare during a temporary stay in another European Economic Area, a pregnant woman have make a specific request. This request must then be accepted to be able to benefit from health care during the pregnancy and to avoid advancing their own funds to cover expenses, which do add an extra layer of administrative complexity. We assumed therefore that the offer and demand of pregnancy-related healthcare across this border is limited.
In addition, it is also worth noting that (as in many other studies dealing with spatial accessibility) our method concerns only potential spatial accessibility, rather than revealed access (actual utilization of healthcare). Only complex and expensive investigations would be capable of providing the complementary information that would allow us to distinguish the difference between spatial and real access and use of healthcare services. Finally, our study addresses difficulties arising from the use of a large amount of data and distance calculation prior to application of the algorithm, which is time consuming and calls for technical know-how. However, this is the price to be paid for a more accurate indicator.
Access to healthcare services will continue to be one of the most important public health preoccupations, especially in the context of the increase of social health inequalities worldwide. Our study gave a real illustration of what could be the impact of edge effect in healthcare access in a French context. Our results did not support the "under-report" hypothesis discussed in many published studies. On the whole, our research has revealed only minor average value variations of ISA as a result of including interactions across the border. One explanation is that a kind of balance patient and healthcare professionals when considering neighboring department. However, it is not possible to set up general statement because intensity of impact varies according to healthcare provider type, urbanization level and territorial organization; in addition, we also know that the methodology implemented to measure the healthcare access combined with the size of the spatial unit may influence how the edge effect could impact the measure of healthcare accessibility. For these reason, we plan to carry out this study for another study area with a different territorial organization, to compare ISA variation in two cases in order to get a conclusion more general, at the France scale. Additional researches are required in different countries in order to improve our level of understanding about the influence of the edge effect on the accessibility to healthcare. Following the same methodology to measure the accessibility to healthcare, these different studies will help to distinguish what findings are specific to the characteristics and organization of the country and what findings are common to the different counties. It constitute a promising direction to determine more precisely healthcare shortage areas and then to fight against social health inequalities.
In conclusion, edge effect must be considered on a case-by-case basis, because it relies on choice of indicator, spatial distribution of facilities and urban organization of the territory studied.
This study represents an important step. It will serve not only to assist current researchers by identifying the common methodological hypothesis bias of edge effect in spatial accessibility studies, but will also be helpful to planners and other researchers in the public health field. This paper has presented high-quality geographic data and advanced GIS techniques. In order to examine whether the results are generalizable to different spatial scales and distribution, we hope to contribute to other areas of study in the near future.
E2FCA:
enhanced two-step floating catchment area
INSEE:
National Institute of Statistics and Economic Studies
ISA:
spatial accessibility index
L'Îlot Regroupé pour des Indicateurs Statistiques
Sasaki S, Comber AJ, Suzuki H, Brunsdon C. Using genetic algorithms to optimise current and future health planning-the example of ambulance locations. Int J Health Geogr. 2010;9:4. https://doi.org/10.1186/1476-072X-9-4.
Walsh SJ, Page PH, Gesler WM. Normative models and healthcare planning: network-based simulations within a geographic information system environment. Health Serv Res. 1997;32:243–60.
Patel AB, Waters NM, Ghali WA. Determining geographic areas and populations with timely access to cardiac catheterization facilities for acute myocardial infarction care in Alberta, Canada. Int J Health Geogr. 2007;6:47. https://doi.org/10.1186/1476-072X-6-47.
Parker EB, Campbell JL. Measuring access to primary medical care: some examples of the use of geographical information systems. Health Place. 1998;4:83–193.
Hewko J, Smoyer-Tomic KE, Hodgson MJ. Measuring neighbourhood spatial accessibility to urban amenities: does aggregation error matter? Environ Plan A. 2002;34(7):1185–206.
Talen E. Visualizing fairness: equity maps for planners. J Am Plan Assoc. 1998;64(1):22–38.
Talen E, Anselin L. Assessing spatial equity: an evaluation of measures of accessibility to public playgrounds. Environ Plan A. 1998;30(4):595–613.
Lawrence D, Kisely S. Inequalities in healthcare provision for people with severe mental illness. J Psychopharmacol. 2010;24(Suppl 4):61–8. https://doi.org/10.1177/1359786810382058.
Charreire H, Combier E. Poor prenatal care in an urban area: a geographic analysis. Health Place. 2009;15(2):412–9.
Smoyer-Tomic KE, Spence JC, Raine KD, Amrhein C, Cameron N, Yasenovskiy V, Cutumisu N, Hemphill E, Healy J. The association between neighborhood socioeconomic status and exposure to supermarkets and fast food outlets. Health Place. 2008;14:740–54. https://doi.org/10.1016/j.healthplace.2007.12.001.
Ball K, Timperio A, Crawford D. Neighbourhood socioeconomic inequalities in food access and affordability. Health Place. 2009;15:578–85. https://doi.org/10.1016/j.healthplace.2008.09.010.
Galvez MP, Hong L, Choi E, Liao L, Godbold J, Brenner B. Childhood obesity and neighborhood food-store availability in an inner-city community. Acad Pediatr. 2009;9:339–43. https://doi.org/10.1016/j.acap.2009.05.003.
Spence JC, Cutumisu N, Edwards J, Raine KD, Smoyer-Tomic K. Relation between local food environments and obesity among adults. BMC Public Health. 2009;9:192. https://doi.org/10.1186/1471-2458-9-192.
Macdonald L, Ellaway A, Macintyre S. The food retail environment and area deprivation in Glasgow City, UK. Int J Behav Nutr Phys Act. 2009;6:52. https://doi.org/10.1186/1479-5868-6-52.
Feng J, Glass TA, Curriero FC, Stewart WF, Schwartz BS. The built environment and obesity: a systematic review of the epidemiologic evidence. Health Place. 2010;16:175–90. https://doi.org/10.1016/j.healthplace.2009.09.008.
Hu R, Dong S, Zhao Y, Hu H, Li Z. Assessing potential spatial accessibility of health services in rural China: a case study of Donghai County. Int J Equity Health. 2013;12:35. https://doi.org/10.1186/1475-9276-12-35.
Wang F, Luo W. Assessing spatial and nonspatial factors for healthcare access: towards an integrated approach to defining health professional shortage areas. Health Place. 2005;11(2):131–46. https://doi.org/10.1016/j.healthplace.2004.02.003.
Wang F. Quantitative methods and applications in GIS. Boca Raton: Taylor & Francis Group; 2005.
Matsumoto M, Inoue K, Noguchi S, Toyokawa S, Eiji K. Community characteristics that attract physicians in Japan: a cross-sectional analysis of community demographic and economic factors. Human Resour Health. 2009;7:12.
Ranga V, Panda P. Geospat Spatial access to inpatient health care in northern rural India. Health. 2014;8(2):545–56.
Talen E. Neighborhoods as service providers: a methodology for evaluating pedestrian access. Environ Plan B. 2003;30:181–200. https://doi.org/10.1068/b12977.
Apparicio P, Abdelmajid M, Riva M, Shearmur R. Comparing alternative approaches to measuring the geographical accessibility of urban health services: distance types and aggregation-error issues. Int J Health Geogr. 2008;7:7. https://doi.org/10.1186/1476-072X-7-7.
Guagliardo MF. Spatial accessibility of primary care: concepts, methods and challenges. Int J Health Geogr. 2004;3:3.
Martin D, Williams HCWL. Market-area analysis and accessibility to primary health-care centres. Environ Plan. 1992;24:1009–19.
Bamford EJ, Dunne L, Taylor DS, Symon BG, Hugo GJ, Wilkinson D. Accessibility to general practitioners in rural South Australia. Med J Aust. 1999;171(11–12):614–6.
Apparicio P, Cloutier M-S, Shearmur R. The case of Montréal's missing food deserts: evaluation of accessibility to food supermarkets. Int J Health Geogr. 2007;6(1):4.
Luo W, Qi Y. An enhanced two-step floating catchment area (E2SFCA) method for measuring spatial accessibility to primary care physicians. Health Place. 2011;17(1):394.
Sadler RC, Gilliland JA, Arku G. An application of the edge effect in measuring accessibility to multiple food retailer types in Southwestern Ontario, Canada. Int J Health Geogr. 2010;10:34.
Salze P, Banos A, Oppert J-M, Charreire H, Casey R, Simon C, Chaix B, Badariotti D, Weber C. Estimating spatial accessibility to facilities on the regional scale: an extended commuting-based interaction potential model. Int J Health Geogr. 2011. https://doi.org/10.1186/1476-072X-10-2.
Vidal Rodeiro CL, Lawson AB. An evaluation of the edge effects in disease map modelling. J Comput Stat Data Anal. 2005;49:45–62.
Sharkey JR, Horel S. Neighborhood socioeconomic deprivation and minority composition are associated with better potential spatial access to the ground-truthed food environment in a large rural area. J Nutr. 2008;138(3):620–7.
Wan N, Zhan FB, Zou B, Chow E. A relative spatial access assessment approach for analyzing potential spatial access to colorectal cancer services in Texas. Appl Geogr. 2012;32:291–9. https://doi.org/10.1016/j.apgeog.2011.05.001.
Ngui AN, Apparicio P. Optimizing the two-step floating catchment area method for measuring spatial accessibility to medical clinics in Montreal. BMC Health Serv Res. 2011;11:166. https://doi.org/10.1186/1472-6963-11-166.
Fortney PD, Rost J, Warren J. Comparing alternative methods of measuring geographic access to health services. Health Serv Outcomes Res Methodol. 2000;1(2):173–84.
Van Meter EM, Lawson AB, Colabianchi N, Nichols M, Hibbert J, Porter DE, Liese AD. An evaluation of edge effects in nutritional accessibility and availability measures: a simulation study. Int J Health Geogr. 2010;9:40. https://doi.org/10.1186/1476-072X-9-40.
Bissonnette L, Wilson K, Bell S, Shah TI. Neighbourhoods and potential access to health care: the role of spatial and aspatial factors. Health Place. 2012;18(4):841–53. https://doi.org/10.1016/j.healthplace.2012.03.007.
Donohoe J, Marshall V, Tan X, Camacho FT, Anderson R, Balkrishnan R. Evaluating and comparing methods for measuring spatial access to mammography centers in Appalachia (Re-Revised). Health Serv Outcomes Res Methodol. 2016;16(1):22–40.
Jordan H, Roderick P, Martin D, Barnett S. Distance, rurality and the need for care: access to health services in South West England. Int J Health Geogr. 2004;3:21. https://doi.org/10.1186/1476-072X-3-21.
Luo J, Tian LL, Luo L, Yi H, Wang FH. Two-step optimization for spatial accessibility improvement: a case study of health care planning in rural China. BioMed Res Int. 2017. https://doi.org/10.1155/2017/2094654.
Fortney J, Rost K, Warren J. Health services & outcomes research. Methodology. 2000;1:173. https://doi.org/10.1023/A:1012545106828.
Pipley BD. Spatial statistics. New York: Wiley; 1981.
Iredale R, Jones L, Gray J, Deaville J. 'The edge effect': an exploratory study of some factors affecting referrals to cancer genetic services in rural Wales. Health Place. 2005;11(3):197–204. https://doi.org/10.1016/j.healthplace.2004.06.005.
Zhang XY, Lu H, Holt JB. Modeling spatial accessibility to parks: a national study. Int J Health Geogr. 2011;10:31. https://doi.org/10.1186/1476-072X-10-31.
Gao F, Kihal W, Le Meur N, Souris M, Deguen S. Assessment of the spatial accessibility to health professionals at French census block level. Int J Equity Health. 2016;15(1):125. https://doi.org/10.1186/s12939-016-0411-z.
Jin C, Cheng JQ, Lu YQ, Huang ZF, Cao FD. Spatial inequity in access to healthcare facilities at a county level in a developing country: a case study of Deqing County, Zhejiang, China. Int J Equity Health. 2015;14:67. https://doi.org/10.1186/s12939-015-0195-6.
Zhou Z, Su Y, Gao J, Campbell B, Zhu Z, Xu L, Zhang Y. Assessing equity of healthcare utilization in rural China: results from nationally representative. Int J Equity Health. 2013;12:34. https://doi.org/10.1186/1475-9276-12-34.
McGrail MR, Humphreys JS. Measuring spatial accessibility to primary care in rural areas: improving the effectiveness of the two-step floating catchment area method. Appl Geogr. 2009;29(4):533–41.
Strasser R. Rural health around the world: challenges and solutions. Fam Pract. 2003;20(4):457–63. https://doi.org/10.1093/fampra/cmg422.
Institut national de la statistique et des études économiques. http://www.insee.fr/fr/. Accessed 2 May 2014.
Lalloué B, Monnez JM, Padilla C, Kihal W, Le Meur N, Zmirou-Navier D, Deguen S. A statistical procedure to create a neighborhood socioeconomic index for health inequalities analysis. Int J Equity Health. 2013;12:21. https://doi.org/10.1186/1475-9276-12-21.
French health insurance. http://annuairesante.ameli.fr/. Accessed 12 Mar 2014.
Institut national de l'information géographique et forestière: http://www.ign.fr/.
Zdeb M. Driving distances and times using SAS® and Google Maps. SAS Global Forum 2010.
Barlet M, Coldefy M, Collin C, Lucas-Gabrielli V. L'Accessibilité potentielle localisée (APL): une nouvelle mesure de l'accessibilité aux médecins généralistes libéraux. Institut de recherche et documentation en économie de la santé. Question d'économie de la santé n° 174; 2012.
McGrail MR. Spatial accessibility of primary health care utilising the two step floating catchment area method: an assessment of recent improvements. Int. J. Popul Geogr. 2012. https://doi.org/10.1186/1476-072X-11-50.
Griffith DA. What is spatial autocorrelation? L'Espace Géographique. 1992;21:265–80.
Anselin L. Local indicator of spatial association—LISA. Geogr Anal. 1995;27:93–115. https://doi.org/10.1111/j.1538-4632.1995.tb00338.x.
Jacquez GM, Greiling DA. Local clustering in breast, lung and colorectal cancer in Long Island, New York. Int J Health Geogr. 2003;2:3. https://doi.org/10.1186/1476-072X-2-3.
Talen E. Neighborhoods as service providers: a methodology for evaluating pedestrian access. Environ Plann B. 2003;30:181–200. https://doi.org/10.1068/b12977.
https://demographie.medecin.fr/#l=fr;v=map2. Accessed June 2017.
Wang F. Measurement, optimization, and impact of health care accessibility: a methodological review. Ann Assoc Am Geogr Assoc Am Geogr. 2012;102(5):1104–12. https://doi.org/10.1080/00045608.2012.657146.
Zhao P, Batta R. Analysis of centroid aggregation for the Euclidean distance p-median problem. Eur J Oper Res. 1999;113(1):147–68. https://doi.org/10.1016/S0377-2217(98)00010-1.
Work presented here was conceived, carried out and analyzed by FG, SD and WK. MS and NL gave important suggestions and supervised the study. All authors read and approved the final manuscript.
This research is supported by EHESP Rennes, Sorbonne Paris Cité and Institut de recherche sur la santé l'environnement et le travail. Points of view or opinions in this article are those of the authors and do not necessarily represent the official position or policies of the EHESP Rennes, Sorbonne Paris Cité and IRSET.
All data generated or analysed during this study are included in this published article. If readers need supplementary information, they can contact me ([email protected]).
EHESP Rennes, Sorbonne Paris Cité, Paris, France
Fei Gao, Nolwenn Le Meur & Séverine Deguen
LIVE UMR 7362 CNRS (Laboratoire Image Ville Environnement), University of Strasbourg, 6700, Strasbourg, France
Wahida Kihal
L'équipe REPERES, Recherche en Pharmaco-épidémiologie et recours aux soins, UPRES EA-7449, Rennes, France
Fei Gao & Nolwenn Le Meur
IRD, UMR_D 190 "Emergence des Pathologies Virales" (IRD French Institute of Research for Development, Aix-Marseille University, EHESP French School of Public Health), Marseille, France
Marc Souris
Department of Quantitative Methods for Public Health, EHESP School of Public Health, Avenue du Professeur Léon Bernard, 35043, Rennes, France
Department of Social Epidemiology, Sorbonne Universités, UPMC Univ Paris 06, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique (UMRS 1136), Paris, France
Séverine Deguen
Fei Gao
Nolwenn Le Meur
Correspondence to Fei Gao.
Socioeconomic variables
Low level of educational attainment Proportion of women aged 25 and over not having graduated from high school
Women's unemployment rate Proportion of unemployed women eligible to work
Single parent families Proportion of all households with children headed by lone parents
Non-homeowner Proportion of all households not owning their main residence
Insecure employment situation Proportion of those on short-term or temporary contracts, in state-funded posts, or apprenticeship/internship
See Fig. 8.
X represents ISA when not accounting for edge effect; Y represents ISA when accounting for the edge effect. Class of "unchanged" in grey color: the change between X and Y is in the average; it is points around the green regression line within the 95% confidence interval (red lines); Class deteriorated in orange color: Points placed below the lower limit of the 95% confidence interval which signifies that these IRIS have a larger reduction than the average (the reverse for "improved" in blue color)
Gao, F., Kihal, W., Le Meur, N. et al. Does the edge effect impact on the measure of spatial accessibility to healthcare providers?. Int J Health Geogr 16, 46 (2017). https://doi.org/10.1186/s12942-017-0119-3
Potential spatial accessibility of healthcare professionals
E2SFCA algorithm
Spatial analyses | CommonCrawl |
\begin{document}
\title{A multiscale quasilinear system for colloids deposition in porous media: Weak solvability and numerical simulation of a near-clogging scenario}
\author[$\dagger$]{Michael Eden} \author[$\star$]{Christos Nikolopoulos} \author[$\ddag$]{Adrian Muntean}
\affil[$\dagger$]{Zentrum f\"ur Technomathematik, Department of Mathematics and Computer Science, University of Bremen, Germany} \affil[$\star$]{Department of Mathematics, School of Sciences, University of The Aegean, Greece} \affil[$\ddag$]{Department of Mathematics and Computer Science, Karlstad University, Sweden}
\maketitle \begin{abstract}
We study the weak solvability of a quasilinear reaction-diffusion system nonlinearly coupled with an linear elliptic system posed in a domain with distributed microscopic balls in $2D$.
The size of these balls are governed by an ODE with direct feedback on the overall problem. The system describes the diffusion, aggregation, fragmentation, and deposition of populations of colloidal particles of various sizes inside a porous media made of prescribed arrangement of balls.
The mathematical analysis of the problem relies on a suitable application of Schauder's fixed point theorem which also provides a convergent algorithm for an iteration method to compute finite difference approximations of smooth solutions to our multiscale model.
Numerical simulations illustrate the behavior of the local concentration of the colloidal populations close to clogging situations. \end{abstract}
{\bf Keywords:} Colloidal transport and deposition, reactive porous media, weak solutions to strongly nonlinear parabolic systems, two-scale finite difference approximation, clogging
{\bf MSC2020:} 35K61, 65N06, 35B27, 76S05, 80M40
\section{Introduction and problem statement}
We study a two-scale system modeling the effective diffusive transport as well as the aggregation, fragmentation, and deposition of populations of colloidal particles inside porous media. Such situations arise, for instance, in membrane filtration scenarios \cite{Fasano,Bruna_JFM}, papermaking \cite{Asa}, immobilization of colloids in soils \cite{Chen}, or transport of colloidal contaminants in groundwater \cite{Suciu}.
We are particularly interested in situations where micro-structural changes due to the deposition or dissolution of colloids are allowed to take place. This can locally change both the transport patterns and storage capacity of the medium; see \cite{Icardi,King,Hallak,Maes,Knabner,Noorden} for related cases. This variety of technological and natural processes is based on the transfer of colloidal particles from liquid suspension onto stationary surfaces \cite{johnson1995dynamics}. From this perspective, one can perceive that the porous media we are considering here behave like materials with reactive internal microstructures (see \cite{Diaz} for a periodic setting) and, based on \cite{Showalter_Oberwolfach}, they are sometimes classified as media with distributed microstructures. Additional motivation for this work comes from our own research on reactive flow in porous media and is linked very much with the work of P. Ortoleva and J. Chadam (see e.g. \cite{Chadam} and follow up papers), but it is worth mentioning that quite related aspects arise in pharmacy and medicine like drug delivery, thrombosis formation on arterial walls, evolution of Alzheimer's disease. We refer the reader, for instance, to \cite{Giulia,Thrombosis,Silvia} for works in this direction.
Denoting with $u=(u_1,...,u_N)$ ($i=1,...,N$) the molar concentrations of colloids of size $i$ (with $N\in \mathbb{N}$ the maximal size), its time evolution can be modelled by a quasi-linear parabolic system in the form of
\begin{align}\label{abstract_quasi} \partial_tu_i-\operatorname{div}(D_i(u)\nabla u_i)=F_i(u), \end{align} where $F_i(u)$ accounts for the aggregation, segregation, and adsorption processes and $D_i(u_i)$ the changing permeability as consequence of the micro-structural changes (like clogging) inside the porous medium itself. While \cref{abstract_quasi} is purely macroscopic, the computation of the effective permeability $D_i(u_i)$ is done on the micro-scale therefore leading to the two-scale nature of our problem. This system is a compact and abstract reformulation of a two-scale model for colloidal transport derived in \cite{MC20} via asymptotic homogenization (more details are given in \Cref{strategy}). Structurally similar (two-scale model with geometrical changes) models were investigated in, e.g., \cite{Eden19,Peter09}.
In this work, we take a $2D$-cross-section of a porous medium and assume the solid matrix of the cross section to be made up of circles of not-necessarily uniform radius. The growth and shrinkage of these circles, which represent the underlying micro-structural changes of the porous medium, are modelled via a scalar quantity governed by an additional ODE. For a similar geometrical setup see, e.g., \cite{Peter09}. The model and the resulting mathematical problem gets more complicated if we were to allow for more general geometries (e.g., evolving $C^2$-interfaces) that can not be represented by a scalar quantity like the radius in our setting. We treat our geometries in $2D$ mainly for the sake of simplicity of inequalities and transformations and also because the simulation work is easier to be handled in $2D$ compared to $3D$, there is no fundamental element in the analysis that is sensitive to dimensions (like Sobolev embeddings would be for example). As a consequence, the mathematical analysis part can be extended to $3D$ with suitable modifications on the upper and lower {\em a priori} bounds on the radii of the balls-like microstructure.
The quasilinear structure of the problem together with the multiscale coupling is non-standard. Here, we point out that $D_i$ and $F_i$ are non-linear operators that are not defined via point wise evaluation (in the sense of $D_i(u)(t,x)=D_i(t,x,u(t,x))$). In particular, it does not fit directly to the framework elaborated in, e.g., \cite{Alt} and it requires an approach that utilizes the underlying coupling present in the model equations behind the abstract system. A similar two-scale problem allowing for micro-structural changes was investigated in \cite{Meier09}.
In \Cref{strategy} we explain our working strategy to prove the existence of weak solutions to the overall problem. To keep things simple, we consider that the local porosity $\phi(r)$ does not degenerate. Note however, that it is technically possible to include in the analysis simple degeneracies (like neighboring microstructures touching in single points \cite{Schulz}), a complete (local) clogging being however out of reach. Besides the non-degeneracy of the effective parameter, another simplification is included -- the absence of the flow. Note that if the colloidal populations would be immersed in a fluid flow, then, most likely, besides the balance equations of the linear momentum one would also have to take into account the charge transport taking place between oppositely charged populations of particles; see e.g. \cite{Robin,Ray} for more information in this direction.
The paper is organized as follows: In \Cref{strategy} we present the model and outline our strategy for the analysis of our problem. We list the needed mathematical details of the problem so that we can prove in \Cref{existence} the existence of a weak solution. In \Cref{numerics}, we solve numerically our multiscale quasilinear problem and discuss the obtained numerical results for realistic parameter regimes. We add in \Cref{discussion} a detailed discussion of the potential of our problem, expected results, and related aspects.
\section{Problem statement and solution strategy}\label{strategy} In the following, let $S=(0,T)$ be the time interval of interest and $\Omega\subset\mathbb{R}^2$ a bounded Lipschitz domain. In addition, let $N\in\mathbb{N}$ be a given number indicating the maximal possible \emph{size} of an aggregate of colloid particle, where \emph{size} refers to the number of primary particles making up the aggregate. For each $i=\{1,...,N\}$, let $u_i\colon S\times\Omega\to[0,\infty)$ (we set $u=(u_1,...,u_N)$) denote the molar concentration density of aggregates of size $i$ at point $x\in\Omega$ at time $t\in S$. We take the function $v\colon S\times\Omega\to[0,\infty)$ to represent the mass density of absorbed material (mass that is in the system but currently not part of the diffusion and agglomeration process); this mass can be dissolved again by a Robin-type exchange allowing colloidal populations to re-enter the pore space. This process of absorption and dissolution is modelled in this context via an Robin-type exchange term (see e.g. \cite{Krehel}) in the form of
$$ \frac{2\pi r}{1-\pi r^2}(a_iu_i-\beta_iv). $$ Here, the radius function $r\colon S\times\Omega\to(0,r_{max})$ (for some $r_{max}>0$) acts as a measure of the \emph{clogginess} of the porous media and $\frac{2\pi r}{1-\pi r^2}$ is the ratio of the size of the boundary between fluid space and pore to the pore volume.
To describe the aggregation and fragmentation processes taking place inside the pore space of the medium, we use the \emph{Smoluchowski} formulation (we point to \cite{Aldous} for a review) given here by
$$ R_i(u)=\frac{1}{2}\sum_{j+l=i}\gamma_{jl}u_ju_l-u_i\sum_{j=1}^{N-i}\gamma_{jl}u_j. $$
It is important to note that in the context of porous media the colloidal populations involve a finite size chain of the cluster, i.e. there will be a population of $N$-mers where $N$ takes the maximum cluster size. As a result, we deal with a finite sum here. Interestingly, for many applications a good choice of such $N$ is rather low; see e.g. \cite{Krehel}.
The diffusion-reaction system for the different aggregates is then given via
\begin{subequations} \begin{alignat}{2} \partial_tu_i-\operatorname{div}(D_i(r)\nabla u_i)&=R_i(u)-\frac{2\pi r}{1-\pi r^2}(a_iu_i-\beta_iv)&\quad&\text{in}\ \ S\times\Omega,\label{overall-1}\\ -D_i(r)\nabla u_i\cdot n&=0&\quad&\text{on}\ \ S\times\partial\Omega,\label{overall-2}\\ u_i(0)&=u_{i0}&\quad&\text{in}\ \ \Omega.\label{overall-3} \end{alignat}
The effective diffusion matrix (including diffusion, dispersion, and tortuosity effects) $D_i(r)\in\mathbb{R}^{2\times2}$ can be calculated using any solution $w_{k}$, $k=1,2$, of the cell problem
\begin{alignat}{2} -\Delta w_{k}&=0&\quad&\text{in}\ \ S\times(Y\setminus \overline{B}(r)),\label{overall-4}\\ -\nabla w_{k}\cdot n&=e_k\cdot n&\quad&\text{on}\ \ S\times\partial B(r),\label{overall-5}\\ y&\mapsto w(\cdot,\cdot,y)&\quad&\text{is $Y$-periodic}\label{overall-6}. \end{alignat} Here, $Y=(0,1)^2$ denotes the unit cell, $\overline{B}(r)$ is the closed ball with radius $r$ and center point $a=(\nicefrac{1}{2},\nicefrac{1}{2})$, and $e_k$ the $k$-th unit normal vector. We have ($d_i>0$ are known constants)
$$ (D_i)_{jk}=d_i\phi(r)\int_{Y\setminus \overline{B}(r)}(\nabla w_{k}+e_{k})\cdot e_j\di{z} $$
where $\phi(r)=\frac{1-\pi r^2}{|\Omega|}$ denotes the porosity density of the medium. For more details regarding the cell problem and the effective diffusivity, we refer to \cite{MC20} where they are established via homogenization.
Finally, the evolution of $v$ is governed by an ODE parametrized in $x\in\Omega$
\begin{alignat}{2} \partial_tv&=\sum_{i=1}^N\left(\alpha_iu_i-\beta_iv\right)&\quad&\text{in}\ \ S\times\Omega,\label{overall-7}\\ v(0)&=v_0&\quad&\text{in}\ \ \Omega.\label{overall-8} \end{alignat}
and the radius function is governed by the following ODE parametrized in $x\in\Omega$
\begin{alignat}{2} \partial_tr&=2\pi\alpha\sum_{i=1}^N\left(a_iu_i-\beta_iv)\right)&\quad&\text{in}\ \ S\times\Omega,\label{overall-9}\\ r(0)&=r_0&\quad&\text{in}\ \ \Omega.\label{overall-10} \end{alignat} \end{subequations} A possible initial choice for the radii $r_0$ is depicted in Figure \ref{Rdunit}. We point out there also what will happen at the final time $T$; more details on the parameter setup are given in the simulation sections. What concerns the modeling of the deposition of the colloidal populations, our choice is similar to one reported in \cite{johnson1995dynamics}.
\begin{figure}
\caption{Example of $r(x_1,x_2,t=0)$ with corresponding $r(x_1,x_2,t=T)$ of the same simulation.
The parameter setting is as discussed in Figure \ref{Figex2D}.
Regions with larger circles correspond to low porosity and permeability.}
\label{Rdunit}
\end{figure} This accounts for the simple observation that the absorbed material leads to the clogging of the pore under the fundamental assumption of the growth of the radius is proportional to the amount of material that is absorbed. For a more concrete argumentation for this particular structure, we again point to \cite{MC20}.
The overall problem we are considering in this work is then given by \cref{overall-1,overall-2,overall-3,overall-4,overall-5,overall-6,overall-7,overall-8,overall-9,overall-10}. Regarding our concept of a weak solution of this system:
\begin{definition}[Weak solution]\label{weaksol} For a time interval $(0,s)\subset S$, a weak solution to the problem is given by a set of functions $(u,v,w,r)$ with the regularity \begin{align*} u_i&\in L^2((0,s);H^1(\Omega))\cap L^\infty((0,s)\times\Omega)\quad\text{such that}\ \ \partial_tu_i\in L^2((0,s)\times\Omega),\\ w&\in L^2((0,s)\times\Omega;H^1_\#(Y)),\quad v\in W^{1,1}((0,s);L^2(\Omega)),\quad r\in W^{1,1}((0,s);L^2(\Omega)) \end{align*} that satisfies \cref{overall-1,overall-2,overall-3,overall-4,overall-5,overall-6,overall-7,overall-8,overall-9,overall-10} in the standard weak Sobolev setting. \end{definition}
\paragraph{Solution strategy.} Without yet caring about regularity issues (like smoothness, integrability, measurability) and possible singularities, we want to suggest our solution strategy for the problem given by \cref{overall-1,overall-2,overall-3,overall-4,overall-5,overall-6,overall-7,overall-8,overall-9,overall-10} and show how it relates to the abstract quasi-linear PDE System \ref{abstract_quasi}.
We start with a few comments regarding the particular structure of our problem where we refer to the subproblems $(i)$-$(iv)$ for $u, w, v, r$, viz.
\begin{enumerate}
\item[(A)] The problem is strongly coupled: $(i)$ depends on $u, w, v, r$, $(ii)$ on $w, r$, $(iii)$ on $u, v$, and $(iv)$ on $r, u, v$.
\item[(B)] Problem $(i)$ is parabolic in $u$, $(ii)$ elliptic in $w$, $(iii)$ and $(iv)$ are first order ODEs in $v$ and $r$.
\item[(C)] Problem $(i)$ is nonlinear in $u$ and $r$, $(ii)$ is nonlinear in $r$, and $(iii)$ and $(iv)$ are linear.
\item[(D)] Problem $(ii)$ is not a \emph{real} free boundary problem, as the underlying domain $Y\setminus\overline{B(r)}$ depends only on $(t,x)$ while the derivatives are w.r.t.~$y$. \end{enumerate} As a consequence of points (A)--(D), a natural strategy is to first tackle the ODEs and to use them to inform the cell problem and the parabolic system. In the following, we outline the intermediate steps involved in getting to the abstract fixed-point problem that will be the starting point for our analysis in \Cref{existence}:
Step $(a)$: Looking at the linear ODE vor $v$ (given by \cref{overall-7,overall-8}),
we find the characterization of $v$ in terms of $u$ via (setting $b=\sum_{i=1}^N\beta_i$)
$$
v(t,x)=e^{-bt}\left(v_0(x)+\sum_{i=1}^N\alpha_i\int_0^te^{b\tau}u_i(\tau,x)\di{\tau}\right).
$$
With this in mind, we can eliminate $v$ for $u$ in our problem by setting $v=\mathcal{L}_v(u)$, where $\mathcal{L}_v$ is the abstract solution operator for the $v$-problem.
Step $(b)$: Similarly, looking at the second ODE (problem $(iv)$), we have
$$
r(t,x)=r_0(x)+2\pi\alpha\sum_{i=1}^N\int_0^t(a_iu_i(\tau,x)-\beta_iv(\tau,x))\di{\tau}
$$
With this characterization, we can introduce the corresponding solution operator $\widetilde{\mathcal{L}_{r}}$ via
$$
r=\mathcal{L}_{r}(u,v)=\mathcal{L}_{r}(u,\mathcal{L}_{v}(u))=\widetilde{\mathcal{L}_{r}}(u).
$$
Step $(c)$: Looking at the cell problem $(k=1,2)$
\begin{alignat*}{2}
-\Delta w_{k}&=0&\quad&\text{in}\ \ S\times(Y\setminus \overline{B}(r)),\\
-\nabla w_{k}\cdot n&=e_k\cdot n&\quad&\text{on}\ \ S\times\partial B(r),\\
y&\mapsto \tau(\cdot,\cdot,y)&\quad&\text{is $Y$-periodic},
\end{alignat*}
we expect to get solutions for every given $r>0$ such that $\overline{B}(r)\cap\partial Y=\emptyset$.
We introduce the corresponding solution operator via
$$w=\mathcal{L}_{w}(r)=\left(\mathcal{L}_{w}\circ \widetilde{\mathcal{L}_{v}}\right)(u)=\widetilde{\mathcal{L}_{w}}(u).$$
Step $(d)$: Putting everything together, we can rewrite the parabolic problem
$$
\partial_tu_i-\operatorname{div}(D_i(r,w)\nabla u_i)=R_i(u)-\frac{2\pi r}{1-\pi r^2}(a_iu_i-\beta_iv)
$$
into
$$
\partial_tu_i-\operatorname{div}\left(D_i\left(\widetilde{\mathcal{L}_{r}}(u),\widetilde{\mathcal{L}_{w}}(u)\right)\nabla u_i\right)=R_i(u)-\frac{2\pi\widetilde{\mathcal{L}_{r}}(u)}{1-\pi (\widetilde{\mathcal{L}_{r}}(u))^2}(a_iu_i-\beta_i\mathcal{L}_{v}(u))
$$
This highly nonlinear system of PDEs is now given only in terms of the unknown function $u$.
On an abstract level, we therefore want to investigate parabolic system like
\begin{subequations}
\begin{alignat}{2}
\partial_tu_i-\operatorname{div}\left(\widehat{D_i}(u)\nabla u_i\right)&=F_i(u)&\quad&\text{in}\ \ S\times\Omega,\label{nonlinear-1}\\
-\widehat{D_i}(u)\nabla u_i\cdot n&=0&\quad&\text{on}\ \ S\times\partial\Omega,\label{nonlinear-2}\\
u_i(0)&=u_{i0}&\quad&\text{in}\ \ \Omega\label{nonlinear-3}
\end{alignat}
\end{subequations}
where
$$
F_i(u)=R_i(u)-\frac{2\pi\widetilde{\mathcal{L}_{r}}(u)}{1-\pi (\widetilde{\mathcal{L}_{r}}(u))^2}(a_iu_i-\beta_i\mathcal{L}_{v}(u)).
$$ The exact setting regarding function spaces will be settled in the following section.
\section{Analysis}\label{existence} In this section, we present the detailed fixed-point argument (as outlined in Section \ref{strategy}) for the non-linear problem given via \cref{nonlinear-1,nonlinear-2,nonlinear-3}:
The strategy of our proof is a three-step process:
\begin{enumerate}
\item[1)] For a given function $\tilde{u}$ (of sufficient regularity), we establish well-posedness and estimates for the linear problem given by
\begin{subequations}
\begin{alignat}{2}
\partial_tu_i-\operatorname{div}\left(\widehat{D_i}(\tilde{u})\nabla u_i\right)&=F_i(\tilde{u})&\quad&\text{in}\ \ S\times\Omega,\label{linearized-1}\\
-\widehat{D_i}(\tilde{u})\nabla u_i\cdot n&=0&\quad&\text{on}\label{linearized-2}\ \ S\times\partial\Omega,\\
u_i(0)&=u_{i0}&\quad&\text{in}\ \ \Omega\label{linearized-3}.
\end{alignat}
\end{subequations}
This is established in \Cref{existence_linear}.
\item[2)] We show that there is a set such that the solution operator for \cref{linearized-1,linearized-2,linearized-3} maps that set onto itself, see \Cref{lemma_fixed}.
This result is local in time, since we need to keep $t$ small in order to control the norm of the solution.
\item[3)] Finally, we employ Schauder's fixed point theorem to establish the existence of at least one solution, see \Cref{existence}. \end{enumerate}
For some arbitrary (later to be fixed) $M>0$ and $s\in(0,T)$, let
$$
T_{s,M}=\{u\in L^2((0,s)\times\Omega)^N\ : \ \|u_i\|_\infty\leq M \ (i=1,...,N)\}. $$ For ease of notation, for any given $u$ of sufficient regularity we will write $v_u=\mathcal{L}_v(u)$, $r_u=\widetilde{\mathcal{L}_r}(u)$, $w_u=\widetilde{\mathcal{L}_w}(u)$ for the corresponding solution given for the particular subproblem and $Y_u=Y\setminus\overline{B(r_u)}$.
\subsection{Auxiliary results} We start by collecting some important auxiliary results and estimates that will be needed in the construction of the actual fixed-point argument.
\begin{table}[h] \centering \begin{tabular}{l"l"l"} Function & Assumption & Reason \\ \tline $r_0$ & $\nicefrac{1}{8}\leq r_0(x)\leq\nicefrac{1}{4}$ & Room for growth and shrinkage\\ $u_{i0}$ & $0\leq u_{i0}(x)\leq\nicefrac{M}{2}$ & Keeping the solution in $T_{s,M}$ \\ $v_0$ & $0\leq v_0(x)\leq C_v$ & Bounding $v_u$ \end{tabular} \caption{Assumptions regarding the initial data.} \end{table}
In a first step, we establish some sufficient conditions for the diffusivity matrix to not degenerate. Note that at this point it is not clear that this condition can be satisfied; this is shown in \Cref{lemma_boundsr}.
\begin{lemma}[Diffusivity]\label{lemma_diffus}
If $u\in L^2((0,s)\times\Omega)$ is chosen such that $0\leq2r_u\leq 1-\varepsilon_1$ for some small $\varepsilon_1>0$, we find that $\widehat{D_i}(u)$ is symmetric and positive definite, i.e., $\widehat{D_i}(u)\xi\cdot\xi\geq c_i|\xi|^2$ where the constants $c_i>0$ do not depend on $u$ and $\xi\in\mathbb{R}^2$. In addition, $\widehat{D_i}(u)\in L^\infty((0,s)\times\Omega)$ . \end{lemma} \begin{proof} Its entries are given by
$$ (\widehat{D_i}(u))_{jk}=d_i\phi(r_u)\int_{Y_u}(\nabla w_{u,k}+e_{k})\cdot e_j\di{z} $$ where $r_u=\widetilde{\mathcal{L}_{iv}}(u)$, $Y_u=Y\setminus\overline{B}(r_u)$, and $w_u=(w_{u,1},w_{u,2})=\mathcal{L}_{ii}(r_u)$. The $D_i$ are symmetric since
$$ \int_{Y_u}(\nabla w_{u,k}+e_{k})\cdot e_j\di{z}=\int_{Y_u}(\nabla w_{u,k}+e_{k})\cdot\left(\nabla w_{u,j}+e_j\right)\di{z} $$ by way of $w_{u,k}$ solving the cell problem.
Via that representation, non negativity is also straightforward to show (we refer to \cite[Section 12.5]{PS08} for a similar argument) as long as $\phi(r_u)$ is non negative. For the positivity, we have to ensure that there is some $c_i>0$ such that $\phi(r_u),\, |Y_u|\geq c_i$ for all $(t,x)\in S\times\Omega$. Both hold true if $r_u$ is bounded away from $\nicefrac{1}{2}$, i.e, if there is some $\varepsilon_1>0$ such that $2r_u\leq1-\varepsilon_1$ for all $(t,x)\in S\times\Omega$.
Now, regarding the boundedness of $D_i$, we first see that $|\phi(r_u)|\leq|\Omega|^{-1}$ when $0\leq2r_u\leq1-\varepsilon_1$ is satisfied. Due to $|Y_u|\leq|Y|=1$, boundedness of $D_i$ is clear. \end{proof}
In the following, we will try to establish sufficient conditions for a function $u\in L^2((0,s)\times\Omega)$ to guarantee that the condition $2r_u\leq1-\varepsilon_1$ is met. Setting
$$
a_u(t,x)=2\pi\alpha\sum_{i=1}^N(a_iu_i-\beta_iv_u), $$ we get
\begin{equation}
r_u(t,x)=r_0(x)+\int_0^ta_u(\tau,x)\di{\tau}. \end{equation}
\begin{lemma}[Bounds for $r$]\label{lemma_boundsr} If $M,\varepsilon_1,\varepsilon_2>0$ satisfy \begin{equation}\label{eq:lemma_boundsr}
M\left(e^{bt}-1\right)\leq \frac{b}{2\pi\alpha a}\min\{1-2\sup r_0-\varepsilon_1, \inf r_0-\varepsilon_2-2\pi\alpha bt\sup v_0\} \end{equation} for all $t\in(0,s)$, it holds $2\varepsilon_2\leq 2r_u\leq1-\varepsilon_1$ for all $u\in T_{s,M}$. \end{lemma}
\begin{proof} For every $u\in T_{s,M}$, we find that
$$ v_u(t,x)=e^{-bt}\left(v_0(x)+\sum_{i=1}^Na_i\int_0^te^{b\tau}u_i(\tau,x)\di{\tau}\right). $$ As a consequence,
$$ -\frac{a}{b} M(e^{bt}-1)\leq v_u(t,x)\leq v_0(x)+\frac{a}{b} M(e^{bt}-1). $$ This implies
$$ a_u=2\pi\alpha\sum_{i=1}^N(a_iu_i-\beta_iv_u)\leq2\pi\alpha\left( aM+aM(e^{bt}-1)\right)=2\pi\alpha aMe^{bt} $$ as well as
$$ a_u\geq-2\pi\alpha\left(aM+bv_0(x)+aM(e^{bt}-1)\right)=-2\pi\alpha\left(bv_0(x)+aMe^{bt}\right). $$ Therefore,
$$ \inf r_0-2\pi\alpha\left(tb\sup v_0+\frac{aM}{b}(e^{bt}-1)\right)\leq r_u(t,x)\leq \sup r_0+2\pi\frac{\alpha aM}{b}(e^{bt}-1). $$ As a consequence, $2\varepsilon_2<2r_u<1-\varepsilon_1$ can be ensured by the following two conditions:
\begin{align*} M\left(e^{bt}-1\right)&\leq\frac{b}{2\pi\alpha a}\left(1-2\sup r_0-\varepsilon_1\right),\\ M\left(e^{bt}-1\right)&\leq \frac{b}{2\pi\alpha a}(\inf r_0-\varepsilon_2-2\pi\alpha bt\sup v_0). \end{align*}
\end{proof}
\begin{remark} The condition \ref{eq:lemma_boundsr} required in \Cref{lemma_boundsr} can always be met (over some possibly small time interval $(0,s)$) for $M, \varepsilon_1, \varepsilon_2$ small enough as long as the initial radius distribution satisfies $2\varepsilon_2<2r_0(x)<1-\varepsilon_1$. Connecting \Cref{lemma_boundsr} with \Cref{lemma_diffus} leads to well behaved diffusivities for $u\in T_{s,M}$. The additional bound from below in the form of $\varepsilon_2$ is needed for the transformation for the cell problem for $w_k$. \end{remark}
Now, looking at the r.h.s.~of our reaction diffusion equation, we have for $u\in T_{s,M}$ (setting $\gamma=\max_{i,j}\gamma_{ij}$):
\begin{equation}\label{est_F1} -M^2\gamma\left(N-\frac{k+1}{2}\right)\leq R_k(u)\leq M^2\gamma\left(N-\frac{k+1}{2}\right)\quad (1\leq k\leq N). \end{equation} Due to $r_u\leq\nicefrac{1}{2}$ and
$$ \frac{2\pi r_u}{1-\pi r_u^2}\leq\frac{\pi}{1-\nicefrac{\pi}{4}}\leq15 $$ we arrive at
\begin{equation}\label{est_F2} \frac{2\pi r_u}{1-\pi r_u^2}(a_iu_i-\beta_iv_u)\leq 15\left(a_iM+\frac{a}{b}\beta_iM(e^{bt}-1)\right), \end{equation} and
\begin{equation}\label{est_F3} \frac{2\pi r_u}{1-\pi r_u^2}(a_iu_i-\beta_iv_u)\geq-15\left(a_iM+\beta_i\left(v_0(x)+\frac{a}{b}M(e^{bt}-1)\right)\right). \end{equation}
As a consequence, for every $u\in T_{s,M}$, we find that $F_i(u)\in L^\infty(S\times\Omega)$ for all $i=1,...,N$. In particular, we find that
\begin{equation}\label{eq:rhs}
\sup\{\|F_i(u)\|_\infty\ : u\in T_{s,M}\}=C \end{equation}
where the constant $C$ depends only $s,M$.
\begin{lemma}[Estimates for the radius]\label{lem_rad} For $u^{(1)},u^{(2)}\in T_{s,M}$ let $r^{(1)}, r^{(2)}$ be the corresponding solutions of the radius ODE problem. Then,
\begin{align*}
\left|r^{(1)}-r^{(2)}\right|
&\leq C\int_0^t\left(\left|u^{(1)}-u^{(2)}\right|+\int_0^\tau e^{bs}\left|u^{(1)}-u^{(2)}\right|\di{s}\right)\di{\tau}. \end{align*} where the constant $C>0$ is independent of the particular choice of $u^{(k)}$ ($k=1,2$) \end{lemma} \begin{proof} The radius ODE can be solved by integration ($k=1,2$):
$$ r^{(k)}(t,x)=r_0(x)+2\pi\alpha\sum_{i=1}^N\int_0^ta_iu_i^{(k)}(\tau,x)-\beta_iv^{(k)}(\tau,x)\di{\tau} $$
where $v^{(k)}$ are given via
$$ v^{(k)}(t,x)=e^{-bt}\left(v_0(x)+\sum_{i=1}^Na_i\int_0^te^{b\tau}u_i^{(k)}(\tau,x)\di{\tau}\right). $$
Consequently, we can estimate
\begin{align*}
\left|r^{(1)}-r^{(2)}\right|&\leq2\pi\alpha\sum_{i=1}^N\int_0^t\left(a_i\left|u_i^{(1)}-u_i^{(2)}\right|
+\beta_i\sum_{j=1}^Na_j\int_0^\tau e^{bs}\left|u_j^{(1)}-u_j^{(2)}\right|\di{s}\right)\di{\tau}\\
&\leq C\int_0^t\left(\left|u^{(1)}-u^{(2)}\right|+\int_0^\tau e^{bs}\left|u^{(1)}-u^{(2)}\right|\di{s}\right)\di{\tau}. \end{align*} where the constant $C>0$ is independent of the particular choice of $u^{(k)}$ ($k=1,2$). \end{proof}
\begin{lemma}[Estimates for the cell problem]\label{lem_trans} Let $\varepsilon_2\leq r_1\leq r_2\leq\nicefrac{1}{2}(1-\varepsilon_1)$ and let $w^{(i)}_k$, $k,i=1,2$, solve
\begin{alignat*}{2} -\Delta w_{k}^{(i)}&=0&\quad&\text{in}\ \ S\times Y^{(i)},\\ -\nabla w_{k}^{(i)}\cdot n&=e_k\cdot n&\quad&\text{on}\ \ S\times\Sigma^{(i)},\\ \int_{Y^{(i)}}w_k^{(i)}(y)\di{y}&=0,&\\ y&\mapsto w_k^{(i)}(y)&\quad&\text{is $Y$-periodic}. \end{alignat*} Then, the following estimate holds:
$$
\left|\int_{Y^{(1)}}\nabla w_k^{(1)}\cdot e_j\di{y}-\int_{Y^{(2)}}\nabla w_k^{(2)}\cdot e_j\di{y}\right|
\leq C|r^{(1)}-r^{(2)}|, $$ where the constant $C>0$ might dependent on $e_1$ and $e_2$ but not on the particular choice of $r^{(1)}$ and $r^{(2)}$. Here, we have set $Y^{(j)}=Y\setminus\overline{B(r^{(j)})}$ and $\Sigma^{(j)}=\partial B(r^{(j)})$. \end{lemma}
\begin{proof} We prove this statement in three steps. First, we introduce a coordinate transform that allows us to compare the different solutions and, second, go on proving some important energy estimates. Finally, we use these energy estimates to proof the desired result.\\[.2cm]
\emph{Step1: Transformation:} We set $a=(\nicefrac{1}{2},\nicefrac{1}{2})$ and introduce the transformation $\xi\colon\overline{Y}\to\overline{Y}$ given by
$$ \xi(y)= \begin{cases}
y,\quad &|y-a|\geq\nicefrac{1}{2},\\
(1-\chi(|y-a|))y+\chi(|y-a|)\left(\nicefrac{r^{(1)}}{r^{(2)}}(y-a)+a\right),\quad&r^{(2)}\leq|y-a|\leq\nicefrac{1}{2},\\
\nicefrac{r^{(1)}}{r^{(2)}}(y-a)+a,\quad &|y-a|\leq r^{(2)} \end{cases} $$ Here, $\chi\colon[r^{(2)},\nicefrac{1}{2}]\to[0,1]$ is a smooth cut-off function with compact support (i.e., $\chi\in C_0^\infty(r^{(2)},\nicefrac{1}{2})$) satisfying $\chi(r^{(2)})=1$, $\chi(\nicefrac{1}{2})=0$, as well as $-\nicefrac{4}{\varepsilon_1}\leq\chi'(z)\leq0$.
As a result, $\xi$ is a smooth function as well and satisfies $\xi(Y^{(2)})=Y^{(1)}$ and $n_{\Sigma^{(1)}}(\xi(y))=n_{\Sigma^{(2)}}(y)$ for all $y\in\Sigma^{(2)}$.
\begin{figure}
\caption{Sketch of the transformation connecting reference cells for different radii $r^{(1)}$ and $r^{(2)}$.}
\label{fig:transform}
\end{figure}
Calculating the Jacobi matrix for $\xi$, we see that $D\xi=\mathds{I}_2$ for $|y-a|\geq\nicefrac{1}{2}$ and $D\xi=\left(\nicefrac{r^{(1)}}{r^{(2)}}\right)^2\mathds{I}_2$ for $|y-a|\leq r^{(2)}$.
For the transition part, i.e., $r^{(2)}\leq|y-a|\leq\nicefrac{1}{2}$, we calculate
\begin{align*} \partial_{y_i}\xi_j(y)&=
\partial_{y_i}\left[y\mapsto(1-\chi(|y-a|))y_j+\chi(|y-a|)\left(\nicefrac{r^{(1)}}{r^{(2)}}(y_j-\nicefrac{1}{2})+\nicefrac{1}{2}\right)\right]\\
&=\delta_{ij}\bigg(1+(\nicefrac{r^{(1)}}{r^{(2)}}-1)\chi(|y-a|)\bigg)
+\left(\nicefrac{r^{(1)}}{r^{(2)}}(y_j-\nicefrac{1}{2})+\nicefrac{1}{2}-y_j\right)\frac{y_i-\nicefrac{1}{2}}{|y-a|}\chi'(|y-a|) \end{align*}
As a consequence, we find that the Jacobian is given by the symmetric matrix
\begin{align}
\label{deter}
D\xi(y)=a(|y-a|)\begin{pmatrix}1&0\\0&1\end{pmatrix}
+b(|y-a|)\begin{pmatrix}
(y_1-\nicefrac{1}{2})^2&
(y_1-\nicefrac{1}{2})(y_2-\nicefrac{1}{2})\\
(y_1-\nicefrac{1}{2})(y_2-\nicefrac{1}{2})&
(y_2-\nicefrac{1}{2})^2
\end{pmatrix} \end{align}
where (setting $\overline{r}=r^{(2)}-r^{(1)}\geq0$)
\begin{align*} a(z)=\left(1-\frac{\overline{r}}{r^{(2)}}\chi(z)\right),\quad b(z)=-\frac{\chi'(z)}{z}\frac{\overline{r}}{r^{(2)}}. \end{align*} We can calculate the determinant as $$
\det D\xi(y)=a(|y-a|)\big(a(|y-a|)+b(|y-a|)\left(y_1^2-y_1+y_2^2-y_2+1\right)\big). $$
Since $a(|y-a|)>0$, $b(|y-a|)\geq0$ and $y_1^2-y_1+y_2^2-y_2+1>0$ for all $y=(y_1,y_2)\in Y$, we find that $$
\det D\xi(y)\geq\inf_{r^{(2)}\leq|y-a|\leq\nicefrac{1}{2}}a^2(|y-a|)=\left(\frac{r^{(1)}}{r^{(2)}}\right)^2. $$ This shows that
$$ 4\varepsilon_2^2\leq\left(\frac{\varepsilon_2}{\nicefrac{1}{2}(1-\varepsilon_1)}\right)^2\leq\det D\xi(y)\leq1 $$ which implies invertibility of $D\xi$.\\[.3cm]
\emph{Step 2: Energy estimates}. In the following, we set $F(y)=D\xi(y)$ and $J(y)=|\det F(y)|$. We start with the the weak forms
$$ \int_{Y^{(i)}}\nabla w^{(i)}_{k}\cdot\nabla \eta^{(i)}\di{z}=\int_{\Sigma^{(i)}}e_k\cdot n_{\Sigma^{(i)}}\eta^{(i)}\di{\sigma} \quad \left(\eta^{(i)} \in H^1_\#(Y^{(i)}),\ i=1,2\right). $$
We take the difference of these two weak forms:
$$ \int_{Y^{(1)}}\nabla w^{(1)}_{k}\cdot\nabla \eta^{(1)}\di{y}-\int_{Y^{(2)}}\nabla w^{(2)}_{k}\cdot\nabla \eta^{(2)}\di{y}=e_k\cdot\left[\int_{\Sigma^{(1)}}n_{\Sigma^{(1)}}\eta^{(1)}\di{\sigma}-\int_{\Sigma^{(2)}}n_{\Sigma^{(2)}}\eta^{(2)}\di{\sigma}\right]. $$
and transform the surface integral on the right-hand side in order to arrive at $$
\int_{\Sigma^{(1)}}n_{\Sigma^{(1)}}\eta^{(i)}\di{\sigma}-\int_{\Sigma^{(2)}}n_{\Sigma^{(2)}}\eta^{(2)}\di{\sigma}=\int_{\Sigma^{(2)}}n_{\Sigma^{(1)}}(\xi(y))\eta^{(1)}(\xi(y))|\det D\xi(y)|\di{\sigma}-\int_{\Sigma^{(2)}}n_{\Sigma^{(2)}}\eta^{(2)}\di{\sigma}. $$
By construction, we have $n_{\Sigma^{(1)}}(\xi(y))=n_{\Sigma^{(2)}}(y)$ for all $y\in\Sigma^{(2)}$ leading to
\begin{align*} \int_{\Sigma^{(1)}}n_{\Sigma^{(1)}}\eta^{(1)}\di{\sigma}-\int_{\Sigma^{(2)}}n_{\Sigma^{(2)}}\eta^{(2)}\di{\sigma}&=\int_{\Sigma^{(2)}}\bigg(\eta^{(1)}(\xi(y))\det D\xi(y)-\eta^{(2)}(y)\bigg)n_{\Sigma^{(2)}}\di{\sigma}\\ &=\int_{\Sigma^{(2)}}\bigg(\eta^{(1)}(\xi(y))-\eta^{(2)}(y)\bigg)\det D\xi(y)n_{\Sigma^{(2)}}\di{\sigma}\\ &\qquad+\int_{\Sigma^{(2)}}\bigg(\det D\xi(y)-1\bigg)\eta^{(2)}(y)n_{\Sigma^{(2)}}\di{\sigma} \end{align*}
For the volume integral on the l.h.s., we get (note that the Jacobian is symmetric)
\begin{multline*} \int_{Y^{(1)}}\nabla w^{(1)}_{k}\cdot\nabla \eta^{(1)}\di{y}-\int_{Y^{(2)}}\nabla w^{(2)}_{k}\cdot\nabla \eta^{(2)}\di{y}\\ =\int_{Y^{(2)}}\det D\xi(D\xi)^{-2}\nabla w^{(1)}_{k}(\xi)\cdot\nabla \eta^{(1)}(\xi)-\nabla w^{(2)}_{k}\cdot\nabla \eta^{(2)}\di{y} \end{multline*}
and, as a consequence,
\begin{multline*} \int_{Y^{(2)}}\det D\xi(D\xi)^{-2}\nabla w^{(1)}_{k}(\xi)\cdot\nabla \eta^{(1)}(\xi)-\nabla w^{(2)}_{k}\cdot\nabla \eta^{(2)}\di{y}\\ =\int_{\Sigma^{(2)}}\bigg(\eta^{(1)}(\xi(y))-\eta^{(2)}(y)\bigg)\det D\xi(y)n_{\Sigma^{(2)}}\di{\sigma}
+\int_{\Sigma^{(2)}}\bigg(|\det D\xi(y)|-1\bigg)\eta^{(2)}(y)n_{\Sigma^{(2)}}\di{\sigma}. \end{multline*}
Now, choosing $\widetilde{\eta}^1=\eta^{(2)}=\widetilde{w}_k^{(1)}-w_k^{(2)}=:\overline{w}_k$, this leads to
\begin{multline*}
\|\nabla \overline{w}_k\|^2_{L^2(Y^{(2)})}\leq
\int_{Y^{(2)}}\left|\det D\xi(D\xi)^{-2}-\mathds{I}_2\right|\left|\nabla \widetilde{w}^{(1)}_{k}\right|\cdot\left|\nabla \overline{w}_k\right|\di{y}\\
+\int_{\Sigma^{(2)}}\bigg|\det D\xi(y)-1\bigg|\left|\overline{w}_k\right|\di{\sigma}. \end{multline*}
For $y\in\Sigma^{(2)}$, i.e., $|y-a|=r^{(2)}$, we have
$$ 1-\det D\xi(y)=1-\left(\frac{r^{(1)}}{r^{(2)}}\right)^2=\frac{(r^{(2)})^2-(r^{(1)})^2}{(r^{(2)})^2}\leq\frac{\overline{r}}{r^{(2)}}. $$
Now, for $y\in Y_2$ with $|y-a|\geq\nicefrac{1}{2}$, we have $\det D\xi=1$ and $D\xi=\mathds{I}_2$ and, in the case that $r^{(2)}\leq|y-a|\leq \nicefrac{1}{2}$,
$$
\left|\det D\xi(D\xi)^{-2}-\mathds{I}_2\right|\leq\frac{\left|\det D\xi-1\right|}{|D\xi|^{2}}+\frac{\left|(D\xi)^{-1}-\mathds{I}_2\right|}{|D\xi|}+\left|(D\xi)^{-1}-\mathds{I}_2\right| $$
Since $|D\xi|^2\geq \det D\xi\geq 4\varepsilon_2^2$ and $1-\det D\xi(y)\leq \nicefrac{\overline{r}}{r^{(2)}}$:
$$
\left|\det D\xi(D\xi)^{-2}-\mathds{I}_2\right|\leq\frac{\overline{r}}{4r^{(2)}\varepsilon_2^2}+\left(1+\frac{1}{2\varepsilon_2}\right)\left|(D\xi)^{-1}-\mathds{I}_2\right| $$
Finally, via
$$
\left|(D\xi)^{-1}-\mathds{I}_2\right|\leq \left|(D\xi)^{-1}\right|\left|\mathds{I}_2-D\xi\right|\leq 2\varepsilon_2\left|\mathds{I}_2-D\xi\right| $$
we arrive at (looking at \cref{deter})
$$
\left|\det D\xi(D\xi)^{-2}-\mathds{I}_2\right| \leq\frac{\overline{r}}{r^{(2)}}\left(\frac{1}{4\varepsilon_2^2}+2\varepsilon_2+1+\frac{1}{\varepsilon_1r^{(2)}}\right) $$ Therefore we find that
$$
\|\nabla \overline{w}_k\|^2_{L^2(Y^{(2)})}\leq C(\varepsilon_1,\varepsilon_2)\overline{r}\left(\int_{Y^{(2)}}\left|\nabla \widetilde{w}^{(1)}_{k}\right|\cdot\left|\nabla \overline{w}_k\right|\di{y}
+\int_{\Sigma^{(2)}}\left|\overline{w}_k\right|\di{\sigma}\right). $$ Applying Poincar\'e's inequality (possible due to the zero average condition) and the trace theorem leads to the energy estimate
\begin{align} \label{tr_energy}
\|\overline{w}_k\|_{H^1(Y^{(2)})}\leq \tilde{C}(\varepsilon_1,\varepsilon_2)\overline{r}, \end{align} where the constant $\tilde{C}(\varepsilon_1,\varepsilon_2)>0$ is independent of $r^{(1)}$ and $r^{(2)}$.\\[.3cm]
\emph{Step 3: Proving the result}. Using \cref{tr_energy}, we go on by estimating the following key expression:
\begin{align*}
\left|\int_{Y^{(1)}}\nabla w_k^{(1)}\cdot e_j\di{y}-\int_{Y^{(2)}}\nabla w_k^{(2)}\cdot e_j\di{y}\right|
&\leq \left|\int_{Y^{(1)}}\nabla w_k^{(1)}\di{y}-\int_{Y^{(2)}}\nabla w_k^{(2)}\di{y}\right|\\
&=\left|\int_{Y^{(2)}}\det D\xi (D\xi)^{-1}\nabla \widetilde{w}_k^{(1)}-\nabla w_k^{(2)}\di{y}\right|\\ &\leq \widehat{C}(\varepsilon_1,\varepsilon_2)\overline{r}, \end{align*}
\end{proof}
\subsection{A fixed-point argument} Now, let $\varepsilon_1,\varepsilon_2, M^*, s^*>0$ and initial conditions $r_0, v_0$ be chosen such that $2\varepsilon_2\leq 2r_u(t,x)\leq1-\varepsilon_1$ for all $(t,x)\in(0,s^*)\times\Omega$ and all $u\in T_{s^*,M^*}$ (this is possible due to \Cref{lemma_boundsr,lemma_diffus}). Also, let $0\leq u_{i0}(x)\leq\nicefrac{M^*}{2}$. These choices imply $F(u)=(F_1(u),...,F_N(u))\in L^\infty((0,s^*)\times\Omega)^N$ for all $u\in T_{s^*,M^*}$ (see \cref{eq:rhs}).
In the following, let $s\in (0,s^*)$ and $M\in(0,M^*)$.
We will now look at the linearized problem: For some $\tilde{u}\in T_{s,M}$, we try to find a function $u\in W((0,s);H^1(\Omega))$ solving
\begin{subequations} \begin{alignat}{2}
\partial_tu_i-\operatorname{div}\left(\widehat{D_i}(\tilde{u})\nabla u_i\right)&=F_i(\tilde{u})&\quad&\text{in}\ \ S\times\Omega,\label{lina}\\
-\widehat{D_i}(\tilde{u})\nabla u_i\cdot n&=0&\quad&\text{on}\ \ S\times\partial\Omega,\label{linb}\\
u_i(0)&=u_{i0}&\quad&\text{in}\ \ \Omega.\label{linc} \end{alignat} \end{subequations}
\begin{lemma}[Existence result for linearized problem] \label{existence_linear} For each $\tilde{u}\in T_{s,M}$, there is a unique $u\in W((0,s);H^1(\Omega))$ solving the problem given by \cref{lina,linb,linc}. Moreover, the following a priori estimates are satisfied \begin{multline*}
\|\partial_tu\|^2_{L^2(S;H^1(\Omega)^*)}+\|u\|_{L^\infty((0,s);L^2(\Omega))}^2+\|\nabla u\|_{L^2((0,s)\times\Omega)}^2\\
\leq C\left(\|u_0\|^2_{L^2(\Omega)}+\sum_{i=1}^N\|F_i(\tilde{u})\|_{L^\infty((0,s)\times\Omega)}\right) \end{multline*} where the constant $C>0$ does not depend on $\tilde{u}$, $s$, and $M$. Please note that the above estimate implies boundedness in $W((0,s);H^1(\Omega))$ as well. \end{lemma}
\begin{proof} Since $\tilde{u}\in T_{s,M}$, we have $F_i(\tilde{u})\in L^\infty((0,s)\times\Omega)$ ($i=1,...,N$).
Also, the diffusivity matrix $\widehat{D_i}(\tilde{u})$ is uniformly positive definite (i.e., there is $c_i>0$ such that $\widehat{D_i}(\tilde{u})(t,x)\xi\cdot\xi\geq c_i|\xi|^2$ for all $(t,x)\in (0,s)\times\Omega$ and all $\xi\in\mathbb{R}^3$). Finally, as the $D_i$ are also bounded, the existence of a unique solution follows by standard theory of parabolic PDE.
To search for the needed {\em a priori} estimates, we test the weak form with $u_i$. Hence, we are led to
$$
\|u_i(t)\|^2_{L^2(\Omega)}+2c_i\int_0^t\|\nabla u_i\|^2_{L^2(\Omega)}\di{\tau}\leq\|u_{i0}\|^2_{L^2(\Omega)}+2\int_0^t\int_\Omega|F_i(\tilde{u})u_i|\di{x}\di{\tau}\quad (t\in(0,s)). $$
From here, summing over $i=1,...,N$ and applying Grönwall's inequality leads to the desired estimate for $u$ and $\nabla u$. Similarly, taking a test function $\varphi\in L^2((0,s);H^1(\Omega))$ such that $\|\varphi\|\leq1$, we find that
$$
\langle\partial_tu_i,\varphi\rangle_{L^2((0,s);H^1(\Omega)^*)}\leq\int_\Omega |F_i(\tilde{u})\varphi|\di{x}+\int_\Omega|\widehat{D_i}(\tilde{u})\nabla u_i\nabla\varphi|\di{x} $$ thus completing the estimate. \end{proof}
With the solvability of the linarized problem established, we want to investigate under what circumstances we can ensure that $u\in T_{s,M}$ as well; as this would then naturally lead to a potential fixed-point scheme. As a first point, any $\widetilde{u}\in T_{s,M}$ leads to a solution $u\in W((0,s);H^1(\Omega))$ which again leads to the corresponding solution operator
$$ \mathcal{L}\colon T_{s,M}\to W((0,s);H^1(\Omega))^N. $$ We now need to show, that $s\in(0,s^*)$ and $M\in(0,M^*)$ can be chosen such that $\mathcal{L}[T_{s,M}]\subset T_{s,M}$. With the following lemma, we first establish $\mathcal{L}[T_{s,M}]\subset L^\infty((0,s)\times\Omega)^N$.
\begin{lemma}[Boundedness] \label{lemma_bounded} For every $\tilde{u}\in T_{s,M}$, the solution of the linearized equation is bounded by
$$ -t\esssup(F_i(\tilde{u}))_-\leq u_i\leq \esssup u_{i0}+t\esssup F_i(\tilde{u}). $$ In particular, we have $u\in L^\infty((0,s)\times\Omega)^N$. \end{lemma}
\begin{proof} By the linearity of the problem, we can decompose the solution $u_i=\pi_i+\omega_i$, where
\begin{minipage}[t]{0.5\textwidth} \begin{alignat*}{2}
\partial_t\pi_i-\operatorname{div}\left(\widehat{D_i}(\tilde{u})\nabla \pi_i\right)&=0&\quad&\text{in}\ \ S\times\Omega,\\
-\widehat{D_i}(\tilde{u})\nabla \pi_i\cdot n&=0&\quad&\text{on}\ \ S\times\partial\Omega,\\
\pi_i(0)&=u_{i0}&\quad&\text{in}\ \ \Omega, \end{alignat*} \end{minipage}
\begin{minipage}[t]{0.5\textwidth} \begin{alignat*}{2}
\partial_t\omega_i-\operatorname{div}\left(\widehat{D_i}(\tilde{u})\nabla \omega_i\right)&=F_i(\tilde{u})&\quad&\text{in}\ \ S\times\Omega,\\
-\widehat{D_i}(\tilde{u})\nabla \omega_i\cdot n&=0&\quad&\text{on}\ \ S\times\partial\Omega,\\
\omega_i(0)&=0&\quad&\text{in}\ \ \Omega. \end{alignat*} \end{minipage}
Estimating the $\pi_i$-problems via $(\pi_i-L_i)_+$ for $L_i=\esssup u_{i0}$, we find that $\pi_i\leq L_i$. Using Duhamel's principle, we get $\omega_i(t,x)=\int_0^t h_i(\tau,t,x)\di{\tau}$ where the $\tau$-parametrized function $h_i$ solves
\begin{alignat*}{2}
\partial_th_i-\operatorname{div}\left(\widehat{D_i}(\tilde{u})\nabla h_i\right)&=0&\quad&\text{in}\ \ S\times\Omega,\\
-\widehat{D_i}(\tilde{u})\nabla h_i\cdot n&=0&\quad&\text{on}\ \ S\times\partial\Omega,\\
h_i(0)&=F_i(\tilde{u}(\tau,\cdot))&\quad&\text{in}\ \ \Omega. \end{alignat*} This implies $h_i\leq\esssup(F_i(\tilde{u}))_+$ and, as a consequence $\omega_i\leq t\esssup(F_i(\tilde{u}))_+$. Finally, we have
$$ u_i\leq \esssup u_{i0}+t\esssup(F_i(\tilde{u}))_+. $$ Now, since $u_{i0}\geq0$, we find that $\pi_i\geq0$ as well. Testing with $(h_i+\esssup(F_i(\tilde{u}))_-)_-$, we arrive at $h_i\geq-\esssup(F_i(\tilde{u}))_-$ and, as a consequence $\omega_i\geq-t\esssup(F_i(\tilde{u}))_-$. This shows
$$ u_i\geq-t\esssup(F_i(\tilde{u}))_-. $$ In particular, we find that $u_i\in L^\infty((0,s)\times\Omega)$ with
$$\|u_i(t)\|_{L^\infty(\Omega)}\leq \|u_{i0}\|_{L^\infty(\Omega)}+t\|F_i(\tilde{u})(t)\|_{L^\infty(\Omega)}.$$ \end{proof}
Now, in order to get concrete bounds for the solution $u=(u_1,...,u_N)$, we have to take a closer look at the right-hand sides: For the $F_i(\tilde{u})$, we have the estimates (given our assumptions on $r_0$, $s^*$, and $M^*$ and using \cref{est_F1,est_F2,est_F3}): \begin{align*} F_i(\tilde{u})&\leq M\left(M\gamma\left(N+\frac{k+1}{2}\right)+15\left(a_i+\frac{a}{b}\beta_i(e^{bt}-1)\right)\right)+15\beta_iv_0(x),\\ F_i(\tilde{u})&\geq-M\left(M\gamma\left(N+\frac{k+1}{2}\right)+15\left(a_i+\frac{a}{b}\beta_i(e^{bt}-1)\right)\right) \end{align*} or, more compactly,
\begin{align}\label{estimate_rhs}
\|F_i(\tilde{u})(t)\|_{L^\infty(\Omega)}\leq15\beta_i\|v_0\|_{L^\infty(\Omega)}+M\left(M\gamma\left(N+\frac{k+1}{2}\right)+15\left(a_i+\frac{a}{b}\beta_i(e^{bt}-1)\right)\right). \end{align} With this estimate at hand, we are now able to establish that $\mathcal{L}$ is a self-mapping for a suitable choice of $(s,M)$.
\begin{lemma}[Fixed-point operator] \label{lemma_fixed} For any $M\in(0,M^*)$ there is $s\in(0,s^*)$ such that for every $\tilde{u}\in T_{s,M}$ the solution $u$ of the linearized problem also satisfies $u=\mathcal{L}(\tilde{u})\in T_{s,M}$. \end{lemma} \begin{proof} For any given $M\in(0,M^*)$, we find that
$$
\lim_{t\to0}t\|F_i(\tilde{u})\|_\infty\to0\quad(i=1,...,N). $$
uniformly for $\tilde{u}\in T_{s,M}$ (see inequality \ref{estimate_rhs}). As a consequence, it is possible to find $s\in(0,s^*)$ such that $s\|F_i(\tilde{u})\|_\infty\leq\nicefrac{M}{2}$ for all $i=1,...,N$ and for all $\tilde{u}\in T_{s,M}$. This implies $u\in T_{s,M}$ via \Cref{lemma_bounded}. \end{proof}
Please note that $T_{s,M}$ is a closed subset of $L^2((0,s)\times\Omega)$. In the following lemma we investigate continuity of the fixed point operator
\begin{lemma}[Continuity] \label{lemma_cont} The operator $$ \mathcal{L}\colon T_{s,M}\to L^2((0,s)\times\Omega) $$ is continuous with respect to the $L^2$-norm.
\end{lemma} \begin{proof} Now let $\tilde{u},\tilde{u}^{(k)}\in T_{s,M}$ such that $\tilde{u}^{(k)}\to \tilde{u}$ in $L^2((0,s)\times\Omega)$ for $k\to\infty$. In addition, let $u=\mathcal{L}(\tilde{u})$ and $u^{(k)}=\mathcal{L}(\tilde{u}^{(k)})$ ($k\in\mathbb{N})$ be the corresponding unique solutions to the linearized problem (see \Cref{existence_linear}).
Now, the sequence $u^{(k)}$ is bounded in $W((0,s);H^1(\Omega))$ since $0\leq\tilde{u}^{(k)}\leq M$ and the a priori estimates given by \Cref{existence_linear}. Since $W((0,s);H^1(\Omega))$ is a reflexive Banach space and since it is compactly embedded in $L^2((0,s)\times\Omega)$ (\emph{Lions-Aubin lemma}), there is a subsequence (for ease of notation, still denoted by $u^{(k)}$) and a limit function $u^*$ such that $u^{(k)}$ converges to $u^*$ strongly and weakly in $L^2((0,s)\times\Omega)$ and $W((0,s);H^1(\Omega))$, respectively. Without loss of generality, we also have $u^{(k)}\to u$ pointwise almost everywhere over $(0,s)\times\Omega$ (possibly by choosing a further subsequence). In the following, we show continuity by establishing that $u^*=u$.\footnote{Due to this resulting statement: \emph{Every subsequence has a further subsequence converging to $u$.}}
The components of $u^{(k)}$ satisfy (for all $\varphi\in H^1(\Omega)$ and $t\in(0,s)$)
\begin{align*}
\langle\partial_tu_i^{(k)},\varphi\rangle_{H^1(\Omega)^*}+\int_\Omega\widehat{D_i}(\tilde{u}^{(k)})\nabla u^{(k)}\cdot\nabla\varphi\di{x}&=\int_\Omega F_i(\tilde{u}^{(k)})\varphi\di{x}. \end{align*} Now, since $\tilde{u}^{(k)}\to \tilde{u}$ in $L^2((0,s)\times\Omega))$, it holds
$$ \int_\Omega F_i(\tilde{u}^{(k)})\varphi\di{x}\to\int_\Omega F_i(\tilde{u})\varphi\di{x} \quad (\varphi\in H^1(\Omega),\, i=1,..,N). $$ For the diffusion term, we take a look at
\begin{multline*}
\int_\Omega\left(\widehat{D_i}(\tilde{u})\nabla u^*-\widehat{D_i}(\tilde{u}^{(k)})\nabla u^{(k)}\right)\cdot\nabla\varphi\di{x}\\
=\int_\Omega\widehat{D_i}(\tilde{u})\nabla\left(u^*-u^{(k)}\right)\cdot\nabla\varphi\di{x}
+\int_\Omega\left(\widehat{D_i}(\tilde{u})-\widehat{D_i}(\tilde{u}^{(k)})\right)\nabla u^{(k)}\cdot\nabla\varphi\di{x}. \end{multline*} Here, the first term on the right hand side goes to zero due to the weak convergence of $u^{(k)}$ to $u^*$ in $W((0,s);H^1(\Omega))$.
Looking at the second term, we recall
\begin{multline*} \left(\widehat{D_i}(\tilde{u})-\widehat{D_i}(\tilde{u}^{(k)})\right)_{lm}\\ =d_i\left(\phi(r^{(0)})\int_{Y^{(0)}}(\nabla w^{(0)}_{l}+e_{l})\cdot e_m\di{z}-\phi(r^{(k)})\int_{Y^{(k)}}(\nabla w^{(k)}_{l}+e_{l})\cdot e_m\di{z}\right), \end{multline*} which can be estimated using \Cref{lem_rad,lem_trans}
$$
\left|\widehat{D_i}(\tilde{u})-\widehat{D_i}(\tilde{u}^{(k)})\right|
\leq C\int_0^t\left(\left|\tilde{u}-\tilde{u}^{(k)}\right|+\int_0^\tau e^{bs}\left|\tilde{u}-\tilde{u}^{(k)}\right|\di{s}\right)\di{\tau}. $$ Here, we have used for the porosity that
$$
\left|\phi(r^{(0)})-\phi(r^{(k)})\right|\leq\frac{\pi^2}{|\Omega|}\left|r^{(0)}-r^{(k)}\right|. $$ Now, since $\tilde{u}^{(k)}\to\tilde{u}$ almost everywhere over $(0,s)\times\Omega$, dominated convergence leads to
$$ \int_\Omega\left(\widehat{D_i}(\tilde{u})-\widehat{D_i}(\tilde{u}^{(k)})\right)\nabla u^{(k)}\cdot\nabla\varphi\di{x}\to0 $$ As a consequence, $u^*=u$. \end{proof}
\begin{theorem}[Existence]\label{existence} The operator $$ \mathcal{L}\colon T_{s,M}\to L^2((0,s)\times\Omega) $$ has at least one fixed-point $u^*\in W((0,s);H^1(\Omega))$. \end{theorem} \begin{proof} $T_{s,M}$ is a non-empty, closed, and convex subset of $L^2((0,s)\times\Omega)$ and $\mathcal{L}$ is continuous with respect to the $L^2((0,s)\times\Omega)$ norm (\Cref{lemma_cont}). Moreover, we have $\mathcal{L}[T_{s,M}]\subset T_{s,M}$ via \Cref{lemma_fixed}. Finally, since $\mathcal{L}[T_{s,M}]\subset W((0,s);H^1(\Omega))$ which is compactly embedded in $L^2((0,s)\times\Omega)$ by virtue of Lions-Aubin's lemma, we can employ Schauder's fixed point thorem to conclude the existence of at least one fixed-point $u^*\in W((0,s);H^1(\Omega))\cap T_{s,M}$. \end{proof}
\begin{remark} Relying for instance on techniques from \cite{degenerate}, we expect the weak solution given by \Cref{existence} to be of higher regularity provided that data (boundary of $\Omega$, initial conditions) are sufficiently smooth. This could change, however, if we were to allow actual clogging of the porous medium.
\end{remark}
\section{Numerical simulation of the two-scale quasilinear problem}\label{numerics}
\subsection{Setup of the model equations and target geometry}
The aim is to solve numerically the two-dimensional macroscopic model problem for the species concentration $u_i$ ($i\in\{1,\dots,N\}$) and $v$. To focus the attention on physically relevant choices of parameters, we use the setup described in \cite{johnson1995dynamics}; see also \cite{Krehel,MC20} for more details. Essentially, we look at a theoretical model describing the dynamics of colloid deposition on collector surfaces, when both inter-particle, and particle-surface electrostatic interactions are assumed to be negligible. The numerical range of the used parameters fit to the situations that can relate to the immobilization of bio-colloids in soils.
The simulation output we are looking after includes approximated space and time concentration profiles of colloidal populations, spatial distribution of microstructures for given time slices, and estimated amount of deposited colloidal mass. This information helps us detect in {\em a posteriori} way the locations in $\Omega$ where deposition-induced clogging is likely to happen.
We have \begin{subequations}\label{mod1} \begin{equation} \hspace{-1cm}{ \partial_t} u_i(x,t) = D_{ijk}(x,t) \Delta_x u_i(x, t) + R_i(u)\nonumber
-\frac{{L}(x,t)}{A(x,t)} \left(a_i u_i(x,t)-\beta_i v(x,t)\right),\label{e1} \end{equation}
describing the diffusion of $u_i$ in the macroscopic domain $\Omega$.
The effective diffusion tensor has the form
$$D_{ijk}(x,t)= d_i \phi(x,t)\tau_{jk}(x,t),$$ where the entries
$$\tau_{jk}(x,t)=\int_{Y(x,t)} \left( \delta_{j,k}+\nabla_{y_j}w_k(z,t) \right) dz,$$
for all $i=1,\ldots,N$, $j,k=1,2$.
In addition, the length $L$ and area $A$ functions related to the motion of the boundary (for $r<1/2$) are:
\begin{equation} \label{mod1ha} {L}(x,t)=\int_{\Gamma(x,t)}ds=2\pi r(x,t),\quad A(x,t)=\int_{ Y_0(x,t)}dy=1-\pi r^2(x,t),\quad \mbox{(in 2D)} \end{equation}
\begin{equation} \label{mod1hb} R_i(u)=\frac12\sum_{i+j=k}\alpha_{i,j}\beta_{i,j} u_i u_j -
u_k\sum_{i=1}^\infty \alpha_{k,i}\beta_{k,i} u_i. \end{equation}
Moreover, the cell functions $w:=(w_1(x,y,t),w_2(x,y,t))$, assumed to have constant mean, satisfy
\begin{equation} \label{mod1hc} -\Delta_y w_i=0,\quad i=1,2 \quad \mbox{in}\quad Y_0(x,t),\\ \end{equation} \begin{equation} \label{mod1hd}
- n_0(x,t)\cdot\nabla_y w_i=0, \quad \mbox{on}\quad \partial Y,\quad
- n_0(x,t)\cdot\nabla_y w_i= n_i(x,t), \quad \mbox{on}\quad \partial B(r). \end{equation} with $\Gamma_e:=\partial Y$ being the boundary of the cell $n_0(x,t)=(n_1(x,t),n_2(x,t))$
is the corresponding normal vector.
Equation (\ref{e1})
needs to be complemented with corresponding initial and boundary conditions. In the sequel of this section, we focus the discussion on the case of a two dimensional macroscopic domain, i.e. $x=(x_1,x_2)\in[0,1]\times [0,1]$.
We set Robin conditions at the one side of the square
\begin{equation} \label{mod1b}
\frac{\partial u_i}{\partial n}(x_1,0,t)+ b_r u_i (x_1,0,t)=\left\{\begin{array}{cc}
u_i^b(x_1)>0 & t\in [0,t_0],\\
0 & t>t_0,
\end{array}\right., \quad x_1\in [0,1],
\end{equation} while we impose Neumann boundary conditions for the rest of the boundary \begin{equation} \label{mod1b2}
\frac{\partial u_i}{\partial n}(x_1,x_2, t)=0, \end{equation} for $(x_1,x_2)$ such that $0\leq x_2\leq 1$ with $x_1=0,1$ or $0\leq x_1\leq 1$ with $x_2=0$
and with initial conditions \begin{equation} \label{mod1c}
u_i(x,0)=u_i^a(x)\geq 0.
\end{equation}
Moreover, we have \begin{equation}\label{mod1av} { \partial_t} v (x, t)= \sum_{i=1}^N \alpha_i u_i(x,t)-\beta v(x,t),
\end{equation}
with some initial condition
\begin{equation} \label{mod1bv}
v(x,0)= v_a(x)\geq 0,
\end{equation}
and \begin{equation}
r(x,t)\, { \partial_t} r (x,t)=
\alpha\left( \sum_{i=1}^N a_i u_i(x,t)-\beta v(x,t)\right)
{L}(x,t), \label{mod1R} \end{equation}
together with some initial distribution \begin{equation} \label{mod2R}
r(x,0)=r_a(x)>0, \end{equation}
\end{subequations}
for $ x\in [0,1]\times [0,1]$.
We discuss in Section \ref{simulation} additional choices of suitable initial and boundary conditions.
\subsection{Discretization schemes}\label{simulation}
To treat problem \eqref{mod1} numerically, we need to obtain firstly a numerical approximation for the cell problems \eqref{mod1hc} and determine
the shape of the corresponding cell functions $w_1,w_2$ posed in $Y_0(x,t)$.
More specifically, we proceed for the various values of $r$, for $r_a\leq r(x,t) \leq 1/2$. We take a partition of width $\delta r$, $r_a=r_0, r_1=r_0+\delta r,\ldots, r_{M_1}=1/2$.
Then since $ Y_0$ is determined as the area contained inside the square cell and outside the circle of radius $r$, we obtain a sequence of solutions for the cell problem \eqref{mod1hc} for each $ {Y_0}_i$ corresponding to the radius $r_i$ of the partition.
We use a finite element scheme to solve these cell problems. To be precise, we use the MATLAB finite element package ''\texttt{Distmesh}" (see details in \cite{Persson}) to triangulate the domain ${ Y_0}_i= Y_0(r_i)$. Furthermore, a solver has been implemented to handle this specific problem (equations \eqref{mod1hc}); it works in a similar fashion as applied in \cite{MC20}.
In Figure \ref{Figw1}, we illustrate the numerical solution for this problem for a particular choice of $r_i$. Specifically, we choose to look at $r_i=.25$.
\begin{figure}
\caption{\it Numerical solution of the cell problem \eqref{mod1hc} and specifically for $w_1$ with $r_i=.25$.}
\label{Figw1}
\end{figure}
Having available the numerical evaluation of the cell functions $w$ as approximate solutions to the cell problems \eqref{mod1hc} and \eqref{mod1hd}, the entries of the diffusion tensor
$D_{ijk}=\int_{ Y_0(x,t)} d_i \left( \delta_{j,k}+\nabla_{y_j}w_k \right)$, $i=1,\ldots,N$, $j,k=1,2$
can be calculated directly and for each $(x,t)$ and consequently for the corresponding value for $r(x,t)$ and thus for $ Y_0(x,t)$. Then the corresponding value of $D_{ijk}(x,t)$ is approximated via linear interpolation.
Next, we solve the system of equations \eqref{e1}-\eqref{mod2R}. We use a finite difference scheme to solve the two-dimensional version of the field equation \eqref{e1},
together with its boundary and initial conditions.
More specifically we consider a square domain $\Omega =[0,1]\times [0,1]$.
For this purpose we implement a forward finite difference scheme and for this purpose initially we
consider a uniform partition of the domain $\Omega$, with $x=(x_1,\,x_2)\in \Omega$, $0\leq x_1\leq 1$, $0\leq x_2\leq 1$, of $(M+1)\times (M+1)$ points with spacial step $\delta x_1=\delta x_2=\delta x$, with ${x_1}_{\ell_1}={\ell_1} \delta x$, ${\ell_1}=0,1,\ldots M$, ${x_2}_{\ell_2}={\ell_2} \delta x$, ${\ell_2}=0,1,\ldots M$.
Additionally, we take a partition of $N_T$ points in the time interval $[0,T]$, where $T$ is the maximum time of the simulation, with step $\delta t$ and $t_n=n\delta t$, $i=0,\ldots N_T-1$.
Let ${U_i}_{{\ell_1},{\ell_2}}^n$ the numerical approximation of the species $i$ of the solution of equation \eqref{e1}
at the point $({x_1}_{\ell_1},{x_2}_{\ell_2},t_n )$ of $\Omega_T=\Omega\times [0,T]$, that is $u_i({x_1}_{\ell_1},{x_2}_{\ell_2},t_n )\simeq {U_i}_{{\ell_1},{\ell_2}}^n$. Moreover we denote by ${\mathrm{D}_i}_{{\ell_1},{\ell_2}}^n$ the corresponding approximation of the diffusion coefficients $D_{ijk}({x_1}_{\ell_1},{x_2}_{\ell_2},t_n )\simeq { \mathrm{D}_i}_{{\ell_1},{\ell_2}}^n$ and similarly by ${V_i}_{{\ell_1},{\ell_2}}^n$ the approximation for the species $v$, $v({x_1}_{\ell_1},{x_2}_{\ell_2},t_n )\simeq {V}_{{\ell_1},{\ell_2}}^n$.
\paragraph{\bf Finite difference scheme for the model equations.}
Initially we focus on the appropriate discretization of the terms in \eqref{e1}. For the spatial derivatives $\frac{\partial }{\partial x_s}\left( D_i(x,t)\frac{\partial u_i }{\partial x_s}\right)$, where $s=1,2$ we apply a discretization of the form \begin{eqnarray*} &\hspace{-.5cm}\frac{\partial }{\partial x_1}\left( {D_i}(x,t)\frac{\partial u_i}{\partial x_1}\right)\simeq \mathtt{\Delta} \left(u_i( {D_i} {u_i}_{x_1})\right)_{x_1}:= \frac{1}{\delta x}\left[ {\mathrm{D}_i}_{{\ell_1}+\frac12,{\ell_2}}^n \left( \frac{{U_i}_{{\ell_1}+1,{\ell_2}}^n - {U_i}_{{\ell_1},{\ell_2}}^n }{\delta x}\right) - {\mathrm{D}_i}_{{\ell_1}-\frac12,{\ell_2}}^n \left( \frac{{U_i}_{{\ell_1},{\ell_2}}^n- {U_i}_{{\ell_1}-1,{\ell_2}}^n }{\delta x}\right) \right]\\ &\hspace{-.5cm}\frac{\partial }{\partial x_2}\left( {{D}_i}(x,t)\frac{\partial }{\partial x_2}\right)\simeq \mathtt{\Delta} \left(u_i( {{D}_i} {u_i}_{x_2})\right)_{x_2}:= \frac{1}{\delta x}\left[ {\mathrm{D}_i}_{{\ell_1},{\ell_2}+\frac12}^n \left( \frac{{U_i}_{{\ell_1},{\ell_2}+1}^n - {U_i}_{{\ell_1},{\ell_2}}^n }{\delta x}\right) - {\mathrm{D}_i}_{{\ell_1},{\ell_2}-\frac12}^n \left( \frac{{U_i}_{{\ell_1},{\ell_2}}^n- {U_i}_{{\ell_1},{\ell_2}-1}^n }{\delta x}\right) \right]\\ &\hspace{-.5cm} {\mathrm{D}_i}_{{\ell_1}+\frac12,{\ell_2}}= \frac{ {\mathrm{D}_i}_{{\ell_1}+1,{\ell_2}}+ {\mathrm{D}_i}_{{\ell_1},{\ell_2}}}{2}, \quad
{\mathrm{D}_i}_{{\ell_1}-\frac12,{\ell_2}}=
\frac{ {\mathrm{D}_i}_{{\ell_1},{\ell_2}}+ {\mathrm{D}_i}_{{\ell_1}-1,{\ell_2}}}{2}, \ \quad
\\ &\hspace{-.5cm} {\mathrm{D}_i}_{{\ell_1},{\ell_2}+\frac12}= \frac{ {\mathrm{D}_i}_{{\ell_1},{\ell_2}+1}+ {\mathrm{D}_i}_{{\ell_1},{\ell_2}}}{2}, \quad
{\mathrm{D}_i}_{{\ell_1},{\ell_2}-\frac12}=\frac{ {\mathrm{D}_i}_{{\ell_1},{\ell_2}}+ {\mathrm{D}_i}_{{\ell_1},{\ell_2}-1}}{2}. \quad \end{eqnarray*}
Moreover we use a standard forward in time discretization for the time derivative
and we conclude with a finite difference scheme of the form for the species $u_i$'s, \begin{eqnarray*} {U_i}_{{\ell_1},{\ell_2}}^{n+1}={U_i}_{{\ell_1},{\ell_2}}^{n} +\delta t\, \mathtt{\Delta} \left(U_i(D {u_i}_{x_1})\right)_{x_1} +\delta t\,\mathtt{\Delta} \left(U_i(D {u_i}_{x_1})\right)_{x_1} +\delta t {R_i}_{{\ell_1},{\ell_2}}^{n} -\delta t {F}_{{\ell_1},{\ell_2}}^{n} \end{eqnarray*} and for the species $v$ \begin{eqnarray*} V_{{\ell_1},{\ell_2}}^{n+1}={V}_{{\ell_1},{\ell_2}}^{n}+\delta t \sum_{i=1}^N \alpha_i {U_i}_{{\ell_1},{\ell_2}}^{n}-\beta V_{{\ell_1},{\ell_2}}^{n}, \end{eqnarray*} where \begin{eqnarray*}
{R_i}_{{\ell_1},{\ell_2}}^{n}=
\frac12\sum_{p+q=s}\alpha_{p,q}\beta_{p,q} {U_p}_{{\ell_1},{\ell_2}}^{n} {U_p}_{{\ell_1},{\ell_2}}^{n} -
{U_s}_{{\ell_1},{\ell_2}}^{n}\sum_{p=1}^\infty {\alpha_{s,p}}\beta_{s,p} {U_p}_{{\ell_1},{\ell_2}}^{n},
\end{eqnarray*}
and \begin{eqnarray*}
{F}_{{\ell_1},{\ell_2}}^{n}=\frac{{L}_{{\ell_1},{\ell_2}}^{n}}{A_{{\ell_1},{\ell_2}}^{n}}\left(a_i {U_i}_{{\ell_1},{\ell_2}}^{n}-\beta_i V_{{\ell_1},{\ell_2}}^{n}\right), \end{eqnarray*} are the approximations of the source terms at the point $({x_1}_{\ell_1},{x_2}_{\ell_2},t_n )$.
In addition, the functions for the length $L(r)$ and for the area $A(r)$, are approximated, for $r\leq 1/2$ by the relations:
\begin{eqnarray*} {L}_{{\ell_1},{\ell_2}}^{n}=2\pi r_{{\ell_1},{\ell_2}}^{n},\quad A_{{\ell_1},{\ell_2}}^{n}=1-\pi (r_{{\ell_1},{\ell_2}}^{n})^2,\quad \mbox{(in 2D)}. \end{eqnarray*}
Furthermore, we have the approximate value $r_{{\ell_1},{\ell_2}}^{n}$ of the radius $r$ given by \begin{eqnarray*} r_{{\ell_1},{\ell_2}}^{n+1}=r_{{\ell_1},{\ell_2}}^{n} + \delta t\frac{1}{r_{{\ell_1},{\ell_2}}^{n}}\alpha \left( \sum_{i=1}^N a_i {U_i}_{{\ell_1},{\ell_2}}^{n}-\beta V_{{\ell_1},{\ell_2}}^{n}\right) {L}_{{\ell_1},{\ell_2}}^{n}. \end{eqnarray*}
\subsection{Basic simulation output}
In the first set of simulations we consider homogeneous Neumann boundary conditions at the three edges of the square $\Omega$, namely at $x_1=0, x_1=1$ for $0\leq x_2\leq 1$ and at $x_2=1$, $0\leq x_1\leq 1$.
At the edge $x_2=0$, $0\leq x_1\leq 1$ we impose Robin boundary conditions given by equation \eqref{mod1b}. That is we consider a scenario of having inflow at this side of $\Omega$ for a particular time period, $[0,t_0]$ which stops after some time $t_0$, and we want mainly to observe the deposition process of the colloid species around the solid cores of the cells. The later can be apparent by the variation in time of the radius $r$.
We take zero distributions as initial conditions ($t=0$) for the colloidal populations, while we consider various specific initial distributions for the radius $r$.
We consider $N=3$ mobile species $u_i$ and one immobile species $v$. Our model needs a quite large number of parameters. We take them as follows: $\kappa=1,\,\, (d_1,\,d_2,\,d_3)=(.3,.5,.99)$,\,\, $(a_1,\,a_2,\,a_3)=(.9,.5,.3)$,\,\, $(\beta_1,\,\beta_2,\,\beta_3)=(1,1,1)$, $\alpha_{i,j}=.1,\,\,\beta_{i,j}=100$, $i,j=1,\ldots 3$, $u_a^i(x)=0,\,\,v_a(x)=0,\,\, r_a(x)=.05 ,\,\, 0\leq x\leq 1$.
Regarding the choice of boundary condition at $(x_1,0)$, we take the function $u_i^b$ to be defined as \[(u_1^b,\,u_2^b,\,u_3^b)=({u_1^b}_0 x_1(i-x_1),0,0)\]
with ${u_1^b}_0=25$ for $t\in [0,t_0]$ and zero for $t>t_0$, with $t_0=2$.
Moreover, we let $b_r=0.5$, $v(x_1,x_2,0)=0$, and $r(x_1,x_2,0)=0.1$.
In addition, we take as final simulation time $T=3$ and set the remaining parameters to be $M=41$, $\mathrm{R}:=\delta t/\delta x^2=0.2$.
\paragraph{Approximated concentration profiles.}
In the first of the following graphs, i.e. in Figure \ref{Figu1}, concentration profiles of the colloidal population $u_1$ are plotted against space. Similar profiles are exhibited by the other colloidal populations as well. As general rule, we keep the discussion about what happens with $u_1$ only as here the effects are more visible. This corresponds also to the physical situation when most of the mass is contained in the monomer population, while the amount of observable dimer, trimer, 4-mer populations is considerably lower; see e.g. \cite{Krehel} and references cited therein.
In the first two frames we have $t<t_0$; hence we can see that there is an inflow in $\Omega$ through one edge and so we can observe the diffusion of $u_1$ taking place in the $x_2$ direction. In the last two frames taken at times after $t_0$ (hence here the inflow has stopped) we see that the concentration of $u_1$ near the edge drops possibly due to an activation of the reaction mechanisms. Especially, the deposition activates and consumes monomers initially involved in diffusion.
\begin{figure}
\caption{Concentration profiles at different time steps for the species $u_1$.}
\label{Figu1}
\end{figure}
In Figure \ref{Figu2}, we present similar graph for the concentration of $u_2$. As expected, the behaviour is similar as for the species $u_1$. Moreover, for the third species $u_3$ during the simulation we notice no difference in its qualitative behaviour.
\begin{figure}
\caption{Concentration profiles at different time steps for the species $u_2$.}
\label{Figu2}
\end{figure}
Regarding the behaviour of the immobile species $v$ pointed out in Figure \ref{Figv}, we observe an initial distribution in the first two frames $t=0.5,\, t=1.5$, following the form of the mobile species $u_i$ and an increase inside the domain $\Omega$. After the inflow stops, for instance, see the last two frames $t=1,75,\, t=3$, the distribution of the mass of the deposited species appears to be stationary.
\begin{figure}
\caption{Mass at different time steps for the deposited species $v$.}
\label{Figv}
\end{figure}
Focusing now in the behaviour of $r$, we present in Figure \ref{Figrf} time frames of contour plots of the radius at times $t_i=0.75,\,,1,5\,,2,25\,,3$. We observe the expected increase of the radius with respect to time. Even for $t>t_0=2$, after the inflow has stopped to happen, we still have a slight increase of the radius due to the accumulation of the immobile species around the spherical cores of the cells.
\begin{figure}
\caption{Contour plots of the radius $r=r(x_1,x_2,t_i)$ for the time steps $t_i=0.75,\,,1,5\,,2,25\,,3$. }
\label{Figrf}
\end{figure}
As final remarks regarding this numerical experiment, the main observables $u_1$, $u_2$, $u_3$, and $v$ are plotted in Figure \ref{Figuiv_p} against time for fixed locations inside the domain $\Omega$; see specifically the points $(0,0.5)$, the center $(0.5,0.5)$, $(0.5,1)$ and at the corner $(0,0)$.
\begin{figure}
\caption{Concentration profiles of the species $u_i,\,v$ versus time at different spatial points in the square domain.}
\label{Figuiv_p}
\end{figure}
\paragraph{\bf Approximations with non uniform initial radius.} In the following experiment we consider for the same scenario of initial and boundary conditions, \eqref{mod1b}, \eqref{mod1b2}, \eqref{mod1c}, a non uniform distribution for the initial values of the radius $r_0=r(x_1,x_2,0)$. Specifically, we consider larger values of the radius in the form of two peaks centered at the points $(0.2,\,0.2)$ and $(0.8,\,0.8)$ and with $r_a$ having the form \begin{eqnarray*} r_a= r_c +r_1 \exp\left[-c (x_1-.2)^2-c(x_2-.2)^2\right] +r_1 \exp\left[-c (x_1-.8)^2-c(x_2-.8)^2\right]. \end{eqnarray*}
In this context, we take $r_c=0.05$, $r_1=0.35$, $c=60$ so that the maximum radius at these two points is quite large but smaller than one ($\max r(x_1,x_2,0)\simeq 0.42 $) as it can be seen in the yellow area shown in Figure \ref{Figex2r0}. Here we also set $M=41$ for the spatial partition and $\mathrm{R}=0.25$ The rest of the parameters values are the same as in the previous numerical experiment.
The effect of the non-uniform initial radius distribution is apparent in the evolution of the species of the model; particularly, this non-uniformity effect can be traced back in the evolution of the population $u_1$ as exhibited in Figure \ref{Figex2u1}.
Due to the inflow from the edge $x_2=0$, we have now high values in the $u_1$ concentration around this edge (yellow area) of the domain, while inside the domain we have lower value (blue areas); this behavior can be seen in the first two frames of the simulation ($t=0.75,\,t=1.5$). We notice a gradual increasing perturbation of the symmetric form of $u_1$ around the point $(0.2,\,0.2)$ due to the fact that, precisely at this point, we have large values of $r$. In the next frames, at $(t=2.25,\, t=3)$ and particularly at $t=2.25$, we observe the concentration of $u_1$ after the time that the inflow in the domain has stopped ($t>t_0$ and $\frac{\partial u_i}{\partial n}(x_1,0,t)+ b_r u_i (x_1,0,t)=0$). The dominant mechanisms now are the diffusion and the surface reaction, i.e. the deposition of material around the cores of the cells. Thus we observe lower values of $u_1$ (blue and green areas) around the points with larger $r$ (close to the two initial peaks of $r$) where there the material has been deposited and higher values (yellow areas) in between the aforementioned peak points where the values of $r$ are smaller and deposition is slower. Essentially due to the same mechanism, at the final frame $t=3$ at the end of the simulation, the values of $u_1$ decrease and tend to zero with slower speed within the area close to the corner $(0,1)$.
\begin{figure}
\caption{Contour plots at different time steps for the concentration of the species $u_1$ for the case of nonuniform initial radius distribution.}
\label{Figex2u1}
\end{figure}
In Figure \ref{Figex2r0}, we present the contour plot of the initial value of $r$ for this experiment.
\begin{figure}
\caption{ Contour plot steps for initial radius distribution $r_a=r(x_1,x_2,0)$.}
\label{Figex2r0}
\end{figure}
In Figure \ref{Figex2rT}, we point out the spatial distribution of the radius $r=r(x_1,x_2,T)$, where $T$ is the final time of the simulation. In this case, we observe a behaviour consistent with what happens with the profile of the colloidal population $u_1$ towards the end of the simulation, i.e. around $t=3$. This effect is shown in Figure \ref{Figex2u1}.
Higher values of $r$ equal to $0.5$, where clogging occurs, are taken in the lower part of the domain near the edge $x_2=0$ as well as in the neighbour of the points $(0.2,\,0.2)$ and $(0.8,\,0.8)$; observe the yellow areas in Figure \ref{Figex2rT}.
In the rest of the domain $\Omega$ the radius $r$ attains lower values. This is in line with the observed behaviour of the concentration profiles of $u_1$ around the end of the simulation.
\begin{figure}
\caption{Contour plot for the radius distribution $r_a=r(x_1,x_2,T)$ at the end of the simulation.}
\label{Figex2rT}
\end{figure}
The evolution of the diffusivity during the experiment is also apparent in Figure \ref{Figex2D}. We notice initially low values of it in the areas (blue regions) around the two peaks and higher values in the intermediate area (yellow region), in the first frame for $t=0.75$. As $r$ gradually increases the corresponding areas with low diffusivity expand as we can see in the second and third frame for $t=1.5,\, 2.25$, and finally, also for $t=3$ at the end of the simulation where we obtain the final map of the diffusivity. This contains also information on the tortuosity of the material. The latter frame is in fact a "reverse" image of Figure
\ref{Figex2rT} as very low values of $D$ are linked to clogging around the blue areas where $r$ is large.
\begin{figure}
\caption{Contour plots at different time steps for the effective diffusivity $D(x,t)$ for the case of nonuniform initial radius distribution.}
\label{Figex2D}
\end{figure}
It is worthwhile to note that the spatial distribution of the balls-like microstructure that corresponds to the vizualization shown in Figure
\ref{Figex2rT} of the effective transport coefficient is pointed out in Figure \ref{Rdunit}. The unavoidable occurrence of clogging is pointed out in all these representations.
\section{Discussion}\label{discussion} We have proven the existence of a weak solution to a specific coupled multiscale quasilinear system describing the diffusion, aggregation, fragmentation, and deposition of populations of colloidal particles in porous media. The structure of the system was originally derived in \cite{MC20} and we kept it here.
Tracking numerically the $x$-dependence in the shape of the microstructures rises serious computational problems especially in 3D or even in 2D when working with low-regular shapes. Because of the strong separation between the macroscopic length scale and the microscopic length scale, such setting is parallelizable; see \cite{Omar} for a prestudy in this direction done for a micro-macro reaction-diffusion problem with $x$-dependent microstructure arising in the context of transport of nutrients in plants. The approach used in \cite{Omar} is potentially applicable here as well. Moreover, what concerns the discretization techniques used in this framework, a more advanced finite difference scheme, such as an appropriate version of Du Fort Frankel scheme, can give in principle more flexibility and accuracy in the numerical computations, e.g. by allowing larger time steps.
Our multiscale model can allow for further relevant extensions in at least twofold direction:
(1) For instance, a particularly interesting development would be to allow for some amount of stochasticity in the balance laws. In this spirit, the ODE for the growth of the balls induced by the deposition of the species $v$ could have not only a random distribution of initial positions\footnote{This is tractable with the current form of the model.} but also some suitably scaled "Brownian noise" in the production term mimicking an additional contribution eventually due to a non-uniform deposition of colloids on the boundary of the microstructures (compare with the setting from \cite{Maris}). The difficulty in this case is that, due to the strong coupling in the system, the overall problem becomes a quasilinear SPDE, which is much more difficult to handle mathematically and from the simulation point of view compared with our current purely deterministic setting.
(2) Another development that would be interesting to follow in the deterministic setup is to attempt a computational efficient hybrid-type modeling. In this context, one idea would be to couple continuum population models for colloidal dynamics with discrete network models describing the mechanics of the underlying material (see e.g. the approach proposed in \cite{Axel} having paper as target material). Relevant questions would be: What is the counterpart of our equation for the radius growth of a ball $B(r)$, when the ball is replaced by a point? How does "continuum" deposition take place on "discrete" fixed locations? Are points able to absorb matter in $2D$ and $3D$?
We expect that the non-standard type of couplings suggested in (1) and (2) (i.e. deterministic-stochastic and continuum-discrete) can potentially be posed in terms of measured-valued balance equations. We will investigate some of these ideas in follow-up works.
\section*{Acknowledgments} AM is partially supported by the grant VR 2018-03648 "{\em Homogenization and dimension reduction of thin heterogeneous layers}". We thank R. E. Showalter (Oregon) and O. Richardson (Karlstad) for useful discussions on closely related topics.
\end{document} | arXiv |
\begin{document}
\title{On the Impact of Hard Adversarial Instances \ on Overfitting in Adversarial Training}
\begin{abstract} Adversarial training is a popular method to robustify models against adversarial attacks. However, it exhibits much more severe overfitting than training on clean inputs. In this work, we investigate this phenomenon from the perspective of training instances, i.e., training input-target pairs. Based on a quantitative metric measuring instances' difficulty, we analyze the model's behavior on training instances of different difficulty levels. This lets us show that the decay in generalization performance of adversarial training is a result of the model's attempt to fit hard adversarial instances. We theoretically verify our observations for both linear and general nonlinear models, proving that models trained on hard instances have worse generalization performance than ones trained on easy instances. Furthermore, we prove that the difference in the generalization gap between models trained by instances of different difficulty levels increases with the size of the adversarial budget. Finally, we conduct case studies on methods mitigating adversarial overfitting in several scenarios. Our analysis shows that methods successfully mitigating adversarial overfitting all avoid fitting hard adversarial instances, while ones fitting hard adversarial instances do not achieve true robustness.
\end{abstract}
\section{Introduction} \label{sec:intro}
The existence of adversarial examples~\cite{szegedy2013intriguing} causes serious safety concerns when deploying modern deep learning models. For example, for classification tasks, imperceptible perturbations of the input instance can fool state-of-the-art classifiers. Many strategies to obtain models that are robust against adversarial attacks have been proposed~\cite{buckman2018thermometer, dhillon2018stochastic,ma2018characterizing,pang2020rethinking,pang2019improving,samangouei2018defense,xiao2020enhancing}, but most of them have been found to be ineffective in the presence of adaptive attacks~\cite{athalye2018obfuscated, croce2020reliable, tramer2020adaptive}. Ultimately, this leaves adversarial training~\cite{madry2017towards} and its variants~\cite{alayrac2019labels, carmon2019unlabeled, gowal2020uncovering, hendrycks2019using, sinha2019harnessing, wu2020adversarial, zhang2019you} as the most effective and popular approach to construct robust models. Unfortunately, adversarial training yields much worse performance on the test data than vanilla training. In particular, it strongly suffers from overfitting~\cite{rice2020overfitting}, with the model's performance decaying significantly on the test set in the later phase of adversarial training. While this can be mitigated by early stopping~\cite{rice2020overfitting} or model smoothing~\cite{chen2021robust}, the reason behind the overfitting of adversarial training remains poorly understood.
In this paper, we study this phenomenon from the perspective of training instances, i.e., training input-target pairs. We introduce a quantitative metric to measure the relatively difficulty of an instance within a training set.
Then we analyze the model's behavior, such as its loss and intermediate activations, on training instances of different difficulty levels. This lets us discover that the model's generalization performance decays significantly when it fits the hard adversarial instances in the later training phase.
To more rigorously study this phenomenon, we then perform theoretical analyses on both linear and nonlinear models. For linear models, we study logistic regression on a Gaussian mixture model, in which we can calculate the analytical expression of the model parameters upon convergence and thus the robust test accuracy. Our theorem demonstrates that adversarial training on harder instances leads to larger generalization gaps. We further prove that the gap in robust test accuracy between models trained by hard instances and ones trained by easy instances increases with the size of the adversarial budget. In the case of nonlinear models, we derive the lower bound of the model's Lipschitz constant when the model is well fit to the adversarial training instances. This bound increases with the difficulty level of the training instances and the size of the adversarial budget. Since a larger Lipschitz constant indicates a higher adversarial vulnerability~\cite{ruan2018reachability, weng2018towards, weng2018evaluating}, our theoretical analysis confirms our empirical observations.
Our empirical and theoretical analysis indicate avoid fit hard adversarial training instances can mitigate adversarial overfitting. In this regard, we conduct case studies in three different scenarios: standard adversarial training, fast adversarial training and adversarial fine-tuning with additional training data. We show that existing methods successfully mitigating adversarial overfitting implicitly avoid fitting hard adversarial input-target pairs, by either adaptive inputs or adaptive targets. On the contrary, methods which highlight fiting hard adversarial instances might not be truely robust at all. All our results are evaluated by AutoAttack~\cite{croce2020reliable} and compared with methods available on RobustBench~\cite{croce2020robustbench}.
\textbf{Contributions.} In summary, our contributions are as follows: 1) Based on a quantitative metric of instance difficulty, we show that fitting hard adversarial instances leads to degraded generalization performance in adversarial training. 2) We conduct a rigorous theoretical analysis on both linear and nonlinear models. For linear models, we show analytically that models trained on harder instances have larger robust test error than the ones trained on easy instances; the gap increases with the size of the adversarial budget. For nonlinear models, we derive a lower bound of the model's Lipschitz constant. The lower bound increases with the difficulty of the training instances and the size of the adversarial budget, indicating both factors make adversarial overfitting more severe. 3) We show that existing methods successfully mitigating adversarial overfiting implicitly avoid fitting hard adversarial instances.
\textbf{Notation and terminology.} In this paper, ${\bm{x}}$ and ${\bm{x}}'$ are the clean input and its adversarial counterpart. We use $f_{\bm{w}}$ to represent a model parameterized by ${\bm{w}}$ and omit the subscript ${\bm{w}}$ unless ambiguous. ${\bm{o}} = f_{\bm{w}}({\bm{x}})$ and ${\bm{o}}' = f_{\bm{w}}({\bm{x}}')$ are the model's output of the clean input and the adversarial input.
${\mathcal{L}}_{\bm{w}}({\bm{x}}, {\bm{y}})$ and ${\mathcal{L}}_{\bm{w}}({\bm{x}}', {\bm{y}})$ represent the loss of the clean and adversarial instances, receptively, in which we sometimes omit ${\bm{w}}$ and ${\bm{y}}$ for notation simplicity. We use $\|{\bm{w}}\|$ and $\|{\mathbf{X}}\|$ to represent the $l_2$ norm of the vector ${\bm{w}}$ and the largest singular value of the matrix ${\mathbf{X}}$, respectively.
$sign$ is an elementwise function which returns $+1$ for positive elements, $-1$ for negative elements and $0$ for $0$. $\mathbf{1}_y$ is the one-hot vector with only the $y$-th dimension being $1$.
The term \textit{adversarial budget} refers to the allowable perturbations applied to the input instance. It is characterized by $l_p$ norm and the size $\epsilon$ as a set $\mathcal{S}^{(p)}(\epsilon) = \{\Delta | \|\Delta\|_p \leq \epsilon\}$, with $\epsilon$ defining the budget size. Therefore, given the training set ${\mathcal{D}}$, the robust learning problem can be formulated as the min-max optimization $\min_{\bm{w}} \mathbb{E}_{{\bm{x}} \sim {\mathcal{D}}} \max_{\Delta \in \mathcal{S}^{(p)}(\epsilon)} {\mathcal{L}}_{\bm{w}}({\bm{x}} + \Delta)$. A notation table is provided in Appendix~\ref{sec:notation}.
In this paper, \textit{vanilla training} refers to training on the clean inputs, and \textit{vanilla adversarial training} to the adversarial training method in~\cite{madry2017towards}. \textit{RN18} and \textit{WRN34} are the 18-layer ResNet~\cite{he2016deep} and the 34-layer WideResNet~\cite{zagoruyko2016wide} used in~\cite{madry2017towards} and~\cite{wong2020fast}, respectively. To avoid confusion with the general term \textit{overfitting}, which denotes the gap between the training and test accuracy, we employ the term \textit{adversarial overfitting} to indicate the phenomenon that robust accuracy on the test set decreases significantly in the later phase of vanilla adversarial training. Usually, adversarial overfitting means a larger generalization gap. This phenomenon was pointed out in~\cite{rice2020overfitting} and does not occur in vanilla training. Our code is submitted on GoogleDrive anonymously\footnote{\href{https://drive.google.com/file/d/1vb6ehNMkBeNIM3dLr_igKMUcCgBd9ZmK/view?usp=sharing}{https://drive.google.com/file/d/1vb6ehNMkBeNIM3dLr\_igKMUcCgBd9ZmK/view?usp=sharing}}.
\section{Related Work}
We concentrate on white-box attacks, where the attacker has access to the model parameters. Such attacks are usually based on first-order information and stronger than black-box attacks~\cite{andriushchenko2020square, dong2018boosting}. For example, the \textit{fast gradient sign method (FGSM)}~\cite{goodfellow2014explaining} perturbs the input based on its gradient's sign, i.e., $\Delta = \epsilon\ sign(\triangledown_{\bm{x}} {\mathcal{L}}_{\bm{w}}({\bm{x}}))$.
The \textit{iterative fast gradient sign method (IFGSM)}~\cite{kurakin2016adversarial} iteratively runs FGSM using a smaller step size and projects the perturbation back to the adversarial budget after each iteration. On top of IFGSM, \textit{projected gradient descent (PGD)}~\cite{madry2017towards} use random initial perturbations and restarts to boost the strength of the attack.
Many methods have been proposed to defend a model against adversarial attacks~\cite{buckman2018thermometer, dhillon2018stochastic,ma2018characterizing,pang2020rethinking,pang2019improving,samangouei2018defense,xiao2020enhancing}. However, most of them were shown to utilize obfuscated gradients~\cite{athalye2018obfuscated, croce2020reliable, tramer2020adaptive}, that is, training the model to tackle some specific types of attacks instead of achieving true robustness. This makes these falsely robust models vulnerable to stronger adaptive attacks. By contrast, several works have designed training algorithms to obtain \textit{provably} robust models~\cite{cohen2019certified, gowal2019scalable, raghunathan2018certified, salman2019provably, wong2018provable}. Unfortunately, these methods either do not generalize to modern network architectures or have a prohibitively large computational complexity. As a consequence, adversarial training~\cite{madry2017towards} and its variants~\cite{alayrac2019labels, carmon2019unlabeled, hendrycks2019using, sinha2019harnessing, wu2020adversarial, zhang2019you} have become the de facto approach to obtain robust models in practice. In essence, these methods generate adversarial examples, usually using PGD, and use them to optimize the model parameters.
While effective, adversarial training is more challenging than vanilla training. It was shown to require larger models~\cite{xie2020intriguing} and to exhibit a poor convergence behavior~\cite{liu2020loss}. Furthermore, as observed in~\cite{rice2020overfitting}, it suffers from \textit{adversarial overfitting}:
the robust accuracy on the test set significantly decreases in the late adversarial training phase.
\cite{rice2020overfitting} thus proposed to perform early stopping based on a separate validation set to improve the generalization performance in adversarial training. Furthermore,~\cite{chen2021robust} introduced logit smoothing and weight smoothing strategies to reduce adversarial overfitting. In parallel to this, several techniques to improve the model's robust test accuracy were proposed~\cite{wang2020improving, wu2020adversarial, zhang2021geometryaware}, but without solving the adversarial overfitting issue. By contrast, other works~\cite{balaji2019instance, huang2020self} were empirically shown to mitigate adversarial overfitting but without providing any explanations as to how this phenomenon was addressed. In this paper, we study the causes of adversarial overfitting from both an empirical and a theoretical point of view. We also identify the reasons why prior attempts~\cite{balaji2019instance, chen2021efficient, huang2020self} successfully mitigate it.
\section{A Metric for Instance Difficulty} \label{sec:hardeasy}
Parametric models are trained to minimize a loss objective based on several input-target pairs called training set, and are then evaluated on a held-out set called test set. By comparing the loss value of each instance, we can understand which ones, in either the training or the test set, are more difficult for the model to fit. In this section, we introduce a metric for instance difficulty, which mainly depends on the data and on the perturbations applied to the instances.
Let $\overline{{\mathcal{L}}}({\bm{x}})$ denote the average loss of ${\bm{x}}$'s corresponding perturbed input across all training epochs.
We define the difficulty of an instance ${\bm{x}}$ within a finite set ${\mathcal{D}}$ as \begin{equation} \begin{aligned} d({\bm{x}}) = &{\mathbb{P}}(\overline{{\mathcal{L}}}({\bm{x}}) < \overline{{\mathcal{L}}}(\widetilde{{\bm{x}}}) \vert \widetilde{{\bm{x}}} \sim U({\mathcal{D}})) + \frac{1}{2} {\mathbb{P}}(\overline{{\mathcal{L}}}({\bm{x}}) = \overline{{\mathcal{L}}}(\widetilde{{\bm{x}}}) \vert \widetilde{{\bm{x}}} \sim U({\mathcal{D}}))\;, \end{aligned} \label{eq:difficulty} \end{equation} where $\widetilde{{\bm{x}}} \sim U({\mathcal{D}})$ indicates $\widetilde{{\bm{x}}}$ is uniformly sampled from the finite set ${\mathcal{D}}$. $d({\bm{x}})$ is a bounded function measuring the relatively difficulty of an instance ${\bm{x}}$ within a set. It is close to $0$ for the hardest instances, and $1$ for the easiest ones. We discuss the motivation and properties of $d({\bm{x}})$ in Appendix~\ref{subsec:d_function}. Especially, we show that it mainly depends on the data and the perturbation applied, the model architecture or the training duration can hardly affect the difficulty function. Therefore, $d({\bm{x}})$ can represent the difficulty of ${\bm{x}}$ within a set under a specific type of perturbation. In following sections of this paper, we treat $d({\bm{x}})$ as an intrinsic property of ${\bm{x}}$ given the adversarial attack.
\section{Instance Difficulty and Adversarial Overfitting} \label{sec:overfit}
The model-agnostic difficulty metric of Section~\ref{sec:hardeasy} allows us to select training instances based on their difficulty. In Figures~\ref{fig:example_cifar10} and~\ref{fig:example_svhn} of Appendix~\ref{sec:example}, we show some samples of the easiest and the hardest instances of each class in CIFAR10~\cite{krizhevsky2009learning} and SVHN~\cite{netzer2011reading}, respectively. In both cases, the easiest instances are visually highly similar, whereas the hardest ones are much more diverse, some of them being ambiguous or even incorrectly labeled.
Below, we study how easy and hard instances impact the performance of adversarial training, with a focus on the adversarial overfitting phenomenon. The detailed experimental settings are deferred to Appendix~\ref{sec:app_exp_settings_general}.
\subsection{Training on Subsets of Hard Instances Leads to Overfitting} \label{subsec:subset}
We start by training RN18 models for 200 epochs using either the 10000 easiest, random or hardest instances of the CIFAR10 training set via either vanilla training, FGSM adversarial training or PGD adversarial training. The adversarial budget is based on the $l_\infty$ norm and $\epsilon = 8 / 255$. Note that the instance's difficulty is defined under the corresponding perturbation type, and we enforce these subsets to be class-balanced.
For example, the easiest 10000 instances consist of the easiest 1000 instances in each class. We provide the learning curves under different settings in Figure~\ref{fig:overfit_main}.
\begin{figure*}
\caption{Learning curves obtained by training on the 10000 easiest, random and hardest instances of CIFAR10 under different scenarios. The training error (dashed lines) is the error on the selected instances, and the test error (solid lines) is the error on the whole test set.}
\label{subfig:overfit_main_pgd}
\label{subfig:overfit_main_fgsm}
\label{subfig:overfit_main_clean}
\label{fig:overfit_main}
\end{figure*}
For PGD adversarial training, in Figure~\ref{subfig:overfit_main_pgd}, while we observe adversarial overfitting when using the random instances, as in~\cite{rice2020overfitting}, no such phenomenon occurs when using the easiest instances: the performance on the test set does not degrade during training.
However, PGD adversarial training fails and suffers more severe overfitting when using the hardest instances. Note that, in Figure~\ref{fig:overfit_hardoptim} (Appendix~\ref{sec:app_exp_subset}), we show that this failure is not due to improper optimization.
By contrast, FGSM adversarial training and vanilla training (Figure~\ref{subfig:overfit_main_fgsm},~\ref{subfig:overfit_main_clean}), through which the model are not truely robustness~\cite{madry2017towards}, do not suffer from severe adversarial overfitting.
In these cases, the models trained with the hardest instances also achieve non-trivial test accuracy. Furthermore, the gaps in robust test accuracy between the models trained by easy instances and by hard ones are much smaller.
In Appendix~\ref{sec:app_exp_subset}, we perform additional and comprehensive experiments, evidencing that our conclusions hold for different datasets and values of $\epsilon$, and for an adversarial budget based on the $l_2$ norm. We show that more severe adversarial overfitting happens when the size of the adversarial budget $\epsilon$ increases.
Furthermore, we experiment with training models using increasingly many training instances, start with the easiest ones. Our results in Figure~\ref{fig:easy_compare} show that the models can benefit from using more data, but only using early stopping as done in~\cite{rice2020overfitting}. This indicates that the hard instances can still benefit adversarial training, but need to be utilized in an adaptive manner.
\subsection{Fitting Hard Instances Leads to Overfitting} \label{subsec:hardoverfit}
Let us now turn to the more standard setting where we train the model with the entire training set. To nonetheless analyze the influence of instance difficulty in this scenario, we divide the training set ${\mathcal{D}}$ into $10$ non-overlapping groups $\{{\mathcal{G}}_i\}_{i = 0}^9$, with ${\mathcal{G}}_i = \{{\bm{x}} \in {\mathcal{D}} | 0.1 \times i \leq d({\bm{x}}) < 0.1 \times (i + 1)\}$.
That is, ${\mathcal{G}}_0$ is the hardest group, whereas ${\mathcal{G}}_9$ is the easiest one. We then train an RN18 model on the entire CIFAR10 training set using PGD adversarial training and monitor the training behavior of the different groups. In particular, in Figure~\ref{subfig:loss_by_group}, we plot the average loss of the instances in the groups ${\mathcal{G}}_0$, ${\mathcal{G}}_3$, ${\mathcal{G}}_6$ and ${\mathcal{G}}_9$. The resulting curves show that, in the early training stages, the model first fits the easy instances, as evidenced by the average loss of group ${\mathcal{G}}_9$ decreasing much faster than that of the other groups. By contrast, in the late training phase, the model tries to fit the more and more difficult instances, with the average loss of groups ${\mathcal{G}}_0$ and ${\mathcal{G}}_3$ decreasing much faster than that of the other groups. In this period, however, the robust test error (solid grey line) increases, which indicates that adversarial overfitting arises from the model's attempt to fit the hard adversarial instances.
In addition to average losses, inspired by~\cite{ilyas2019adversarial}, which showed that the penultimate layer's activations of a robust model correspond to its \textit{robust features} that cannot be misaligned by adversarial attacks, we monitor the group-wise average magnitudes of the penultimate layer's activations. As shown in Figure~\ref{subfig:feature_by_group}, the model first focuses on extracting robust features for the easy instances, as evidenced by the comparatively large activations of the instances in ${\mathcal{G}}_9$. In the late phase of training, the slope of the curves for more and more difficult instances increases significantly, bridging the gap between easy and hard instances. This further indicates that the model focuses more on the hard instances in the later training phase, at which point it starts overfitting.
\begin{figure}\label{subfig:loss_by_group}
\label{subfig:feature_by_group}
\label{fig:loss_by_group}
\end{figure}
\section{Theoretical Analysis} \label{sec:thm}
We now study the relationship between adversarial overfitting and instance difficulty from a theoretical viewpoint. We start with the linear model, where the loss function has an analytical expression, and then generalize our analysis to the nonlinear cases. We use $\{{\bm{x}}_i, y_i\}_{i=1}^n$ to represent the
training data, and $({\mathbf{X}}, {\bm{y}})$ as its matrix form. $\{{\bm{x}}'_i, y_i\}_{i=1}^n$ and $({\mathbf{X}}', {\bm{y}})$ are their adversarial counterparts. Here, ${\bm{x}}_i \in {\mathbb{R}}^m$, $y_i \in \{-1, +1\}$, ${\mathbf{X}} \in {\mathbb{R}}^{n \times m}$ and ${\bm{y}} \in \{-1, +1\}^n$. The notation used for our theoretical analysis is summarized in Table~\ref{tbl:notation} of Appendix~\ref{sec:notation}.
\subsection{Linear Models} \label{sec:linear}
We study the logistic regression model under an $l_2$ norm based adversarial budget.
In this case, the model is parameterized by ${\bm{w}} \in {\mathbb{R}}^m$ and outputs $sign({\bm{w}}^T{\bm{x}}'_i)$ given the adversarial example ${\bm{x}}'_i$ of the input ${\bm{x}}_i$. The loss function for this instance is $\frac{1}{1 + e^{y_i {\bm{w}}^T{\bm{x}}'_i}}$. We assume over-parameterization, which means $n < m$.
The following theorem shows that, under mild assumptions, the parameters of the adversarially trained logistic regression model converge to the $l_2$ max-margin direction of the training data.
\begin{theorem} \label{thm:converge}
For a dataset $\{{\bm{x}}_i, y_i\}_{i = 1}^n$ that is linearly separable under the adversarial budget ${\mathcal{S}}^{(2)}(\epsilon)$, any initial point ${\bm{w}}_0$ and step size $\alpha \leq 2\|{\mathbf{X}}\|^{-2}$, the gradient descent ${\bm{w}}_{u + 1} = {\bm{w}}_u - \alpha \triangledown_{{\bm{w}}} {\mathcal{L}}_{{\bm{w}}_u}({\mathbf{X}}')$ converges asymptotically to the $l_2$ max-margin vector of the training data. That is, \begin{equation} \begin{aligned}
\lim_{u \to \infty} \frac{{\bm{w}}_u}{\|{\bm{w}}_u\|} &= \frac{\widehat{{\bm{w}}}}{\|\widehat{{\bm{w}}}\|},\ \mathrm{where}\ \ \widehat{{\bm{w}}} = \argmin_{{\bm{w}}} \|{\bm{w}}\| \ \ &s.t. \ \ \forall i \in \{1, 2, ..., n\}, \ {\bm{w}}^T {\bm{x}}_i \geq 1\;. \end{aligned} \label{eq:max_margin} \end{equation} \end{theorem}
The proof is deferred to Appendix~\ref{sec:proof_converge}. Theorem~\ref{thm:converge} extends the conclusion in~\cite{soudry2018implicit}, which only studies the non-adversarial case. It also indicates that the optimal parameters are only determined by the support vectors of the training data, which are the ones with the smallest margin. According to the loss function, the smallest margin means the largest loss values and thus the hardest training instances based on our definition in Section~\ref{sec:hardeasy}.
To further study how the training instances' difficulty influences the model's generalization performance, we assume that the data points are drawn from a $K$-mode Gaussian mixture model (GMM). Specifically, the $k$-th component has a probability $p_k$ of being sampled and is formulated as follows: \begin{equation} \begin{aligned} \mathrm{if}\ y_i = +1,\ {\bm{x}}_i \sim {\mathcal{N}}(r_k{\bm{\eta}}, {\mathbf{I}});\ \mathrm{if}\ y_i = -1,\ {\bm{x}}_i \sim {\mathcal{N}}(-r_k{\bm{\eta}}, {\mathbf{I}}). \end{aligned} \label{eq:gmm} \end{equation} Here, ${\bm{\eta}} \in {\mathbb{R}}^m$ is the unit vector indicating the mean of the positive instances, and $r_k \in {\mathbb{R}}^+$ controls the average distance between the positive and negative instances. The mean values of all modes in this GMM are colinear, so $r_k$ indicates the difficulty of instances sampled from the $k$-th component.
Without the loss of generality, we assume $r_1 < r_2 < ... < r_{K-1} < r_K$. As in Section~\ref{sec:overfit}, we consider models trained with the subsets of the training data, e.g., $n$ instances from the $l$-th component. $l = 1$ then indicates training on the hardest examples, while $l = K$ means using the easiest. In matrix form, we have ${\mathbf{X}} = r_l {\bm{y}} {\bm{\eta}}^T + {\mathbf{Q}}$ for the instances sampled from the $l$-th component, where the rows of noise matrix ${\mathbf{Q}}$ are sampled from ${\mathcal{N}}(\mathbf{0}, {\mathbf{I}})$.
Although the max-margin direction in Equation (\ref{eq:max_margin}), where the parameters converge based on Theorem~\ref{thm:converge}, does not have an analytical expression, the results in~\cite{wang2020benign} indicate that, in the over-parameterization regime and when the training data is sampled from a GMM, the max-margin direction is the min-norm interpolation of the data with high probability. Since the latter has an analytical form given by ${\mathbf{X}}^T ({\mathbf{X}} {\mathbf{X}}^T)^{-1}{\bm{y}}$, we can then calculate the exact generalization performance of the trained model as stated in the following theorem.
\begin{theorem} \label{thm:main}
If a logistic regression model is adversarially trained on $n$ separable training instances sampled from the $l$-th component of the GMM model described in (\ref{eq:gmm}). If $\frac{m}{n\log n}$ is sufficiently large\footnote{Specifically, $m$ and $n$ need to satisfy $m > 10 n \log n + n -1$ and $m > C n r_l \sqrt{\log 2 n}\|{\bm{\eta}}\|$. The constant $C$ is derived in the proof of Theorem 1 in~\cite{wang2020benign}.}, then with probability $1 - O(\frac{1}{n})$, the expected adversarial test error ${\mathcal{R}}$ under the adversarial budget ${\mathcal{S}}^{(2)}(\epsilon)$, which is a function of $r_l$ and $\epsilon$, on the whole GMM model described in (\ref{eq:gmm}) is given by \begin{equation} \begin{aligned} {\mathcal{R}}(r_l, \epsilon) = \sum_{k = 1}^K p_k \Phi\left( r_k g(r_l) - \epsilon \right),\ g(r_l) = (C_1 - \frac{1}{C_2 r_l^2 + o(r_l^2)})^{\frac{1}{2}},\ C_1, C_2 \geq 0. \end{aligned} \label{eq:r} \end{equation} $C_1$, $C_2$ are independent of $\epsilon$ and $r_l$. The function $\Phi$ is defined as $\Phi(x) = {\mathbb{P}}(Z > x),\ Z \sim {\mathcal{N}}(0, 1)$. \end{theorem}
We defer the proof of Theorem~\ref{thm:main} to Appendix~\ref{sec:proof_main}, in which we calculate the \textit{exact} expression of ${\mathcal{R}}(r_l, \epsilon)$, $C_1$, $C_2$, and show that $C_1$, $C_2$ are positive numbers almost surely. Since $C_1$ and $C_2$ are independent of $r_l$, and $\Phi(x)$ is a monotonically decreasing function, we conclude that the robust test error ${\mathcal{R}}(r_l, \epsilon)$ becomes smaller when $r_l$ increases. That is, when the training instances become easier, the corresponding generalization error under the adversarial attack becomes smaller.
Theorem~\ref{thm:main} holds for all $\epsilon$ only if the training data is separable under the corresponding adversarial budget. The following corollary shows that the difference in the robust test error between models trained with easy instances and the ones with hard ones increases when $\epsilon$ becomes larger. \begin{corollary} \label{coro:epsilon} Under the conditions of Theorem~\ref{thm:main} and the definition of ${\mathcal{R}}$ in Equation (\ref{eq:r}), if $\epsilon_1 < \epsilon_2$, then we have $\forall\ 0 \leq i < j \leq K, {\mathcal{R}}(r_i, \epsilon_1) - {\mathcal{R}}(r_j, \epsilon_1) < {\mathcal{R}}(r_i, \epsilon_2) - {\mathcal{R}}(r_j, \epsilon_2)$. \end{corollary} The proof is in Appendix~\ref{sec:proof_coro}. This indicates that, compared with training on clean inputs, i.e., $\epsilon = 0$, the generalization performance of adversarial training with $\epsilon>0$ is more sensitive to the difficulty of the training instances. This is consistent with our empirical observations in Figure~\ref{fig:overfit_main}.
\subsection{General Nonlinear Models} \label{sec:thm_nonlinear}
In this section, we study the binary classification problem using a general nonlinear model. We consider a model with $b$ parameters, i.e., ${\bm{w}} \in {\mathbb{R}}^b$. Without loss of generality, we assume the output of the function $f_{\bm{w}}$ to lie in $[-1, +1]$. Furthermore, we assume isoperimetry of the data distribution:
\begin{assumption} \label{asp:iso}
The data distribution $\mu$ is a composition of $K$ $c$-isoperimetric distributions on ${\mathbb{R}}^m$, each of which has a positive conditional variance. That is, $\mu = \sum_{k = 1}^K \alpha_k \mu_k$, where $\alpha_k > 0$ and $\sum_{k = 1}^K \alpha_k = 1$. We define $\sigma^2_k = \mathbb{E}_{\mu_k}[Var[y|{\bm{x}}]]$, and without loss of generality assume that $\sigma_1 \geq \sigma_2 \geq ... \geq \sigma_K > 0$. Furthermore, given any $L$-Lipschitz function $f_{\bm{w}}$, i.e., $\forall {\bm{x}}_1, {\bm{x}}_2, \|f_{\bm{w}}({\bm{x}}_1) - f_{\bm{w}}({\bm{x}}_2)\| \leq L\|{\bm{x}}_1 - {\bm{x}}_2\|$, we have \begin{equation} \begin{aligned}
\forall k \in \{1, 2,..., K\}\ {\mathbb{P}}({\bm{x}} \sim \mu_k, \|f_{\bm{w}}({\bm{x}}) - \mathbb{E}_{\mu_k}(f_{\bm{w}})\| \geq t) \leq 2 e^{-\frac{mt^2}{2cL^2}}\;. \end{aligned} \end{equation} \end{assumption}
This is a benign assumption; the data distribution is a mixture of $K$ components and each of them contains samples from a sub-Gaussian distribution. These components correspond to training instances of different difficulty levels measured by the conditional variance. We then study the property of the model $f_{\bm{w}}$ under adversarial attacks.
\begin{definition} \label{def:h} Given the dataset $\{{\bm{x}}_i, y_i\}_{i = 1}^n$, the model $f_{\bm{w}}$, the adversarial budget ${\mathcal{S}}^{(p)}(\epsilon)$ and a positive constant $C$, we define the function $h(C, \epsilon)$ as \begin{equation} \begin{aligned} h(C, \epsilon) &= \min_{{\bm{w}} \in {\mathcal{T}}(C, \epsilon)} \min_i h_{i, {\bm{w}}}(\epsilon)\ \ s.t.\ {\mathcal{T}}(C, \epsilon) = \left\{{\bm{w}} \vert \frac{1}{n} \sum_{i = 1}^n (f_{\bm{w}}({\bm{x}}_i') - y_i)^2 \leq C\right\}\;, \\
\mathrm{where}\ h_{i, {\bm{w}}}(\epsilon) &= \max \zeta,\ s.t.\ [f_{\bm{w}}({\bm{x}}_i) - \zeta, f_{\bm{w}}({\bm{x}}_i) + \zeta] \subset \left\{f_{\bm{w}}({\bm{x}}_i + \Delta) | \Delta \in {\mathcal{S}}^{(p)}(\epsilon)\right\}. \end{aligned} \end{equation} Here, ${\bm{x}}'_i$ is the adversarial example of ${\bm{x}}$. We omit the superscript $(p)$ for notation simplicity. \end{definition}
\begin{lemma} \label{lem:c} $\forall C, \epsilon_1 < \epsilon_2$, $h(C, \epsilon_1) \leq h(C, \epsilon_2)$; $\forall \epsilon, C_1 < C_2$, $h(C_1, \epsilon) \geq h(C_2, \epsilon)$. \end{lemma}
By definition, $h_{i, {\bm{w}}}(\epsilon) \geq 0$ depicts the bandwidth $\zeta$ of the model's output range in the domain of the adversarial budget on a training instance. $h(C, \epsilon)$ is the minimum bandwidth among the models whose mean squared error on the adversarial training set is smaller than $C$. Based on the definitions of ${\mathcal{T}}$ and $h_{i, {\bm{w}}}$, and for a fixed value of $C$, we have $\forall \epsilon_1 < \epsilon_2$, $h_{i, {\bm{w}}}(\epsilon_1) \leq h_{i, {\bm{w}}}(\epsilon_2)$ and ${\mathcal{T}}(C, \epsilon_2) \subset {\mathcal{T}}(C, \epsilon_1)$. As a result, $\forall \epsilon_1 < \epsilon_2$, $h(C, \epsilon_1) \leq h(C, \epsilon_2)$. In addition, since $\forall C_1 < C_2$, ${\mathcal{T}}(C_1, \epsilon) \subset {\mathcal{T}}(C_2, \epsilon)$ for a fixed value of $\epsilon$, we have $\forall C_1 < C_2$, $h(C_1, \epsilon) \geq h(C_2, \epsilon)$.
That is to say, $h(C, \epsilon)$ is a monotonically non-decreasing function on $\epsilon$ and a monotonically non-increasing function on $C$. In practice, when $f_{\bm{w}}$ represents a deep neural network, $h(C, \epsilon)$ increases with $\epsilon$ almost surely, because the attack algorithm usually generates adversarial examples at the boundary of the adversarial budget.
We then state our main theorem below.
\begin{theorem} \label{thm:nonlinear}
Given $n$ training pairs $\{{\bm{x}}_i, y_i\}_{i = 1}^n$ sampled from the $l$-th component $\mu_l$ of the distribution in Assumption~\ref{asp:iso}, the parametric model $f_{\bm{w}}$, the adversarial budget ${\mathcal{S}}^{(p)}(\epsilon)$ and the corresponding function $h$ defined in Definition~\ref{def:h}, we assume that the model $f_{\bm{w}}$ is in the function space ${\mathcal{F}} = \{f_{\bm{w}}, {\bm{w}} \in {\mathcal{W}}\}$ with ${\mathcal{W}} \subset {\mathbb{R}}^b$ having a finite diameter $diam({\mathcal{W}}) \leq W$ and, $\forall {\bm{w}}_1, {\bm{w}}_2 \in {\mathcal{W}}, \|f_{{\bm{w}}_1} - f_{{\bm{w}}_2}\|_\infty \leq J\|{\bm{w}}_1 - {\bm{w}}_2\|_\infty$. We train the model $f_{\bm{w}}$ adversarially using these $n$ data points.
Let ${\bm{x}}'$ be the adversarial example of the data point ${\bm{x}}$ and $\delta \in (0, 1)$. If we have $\frac{1}{n} \sum_{i = 1}^n (f_{\bm{w}}({\bm{x}}'_i) - y_i)^2 = C$ and $\gamma \mathrel{\mathop:}= \sigma^2_l + h^2(C, \epsilon) - C \geq 0$, then with probability at least $1 - \delta$, the Lipschitz constant of $f_{\bm{w}}$ is lower bounded as \begin{equation} \begin{aligned} Lip(f_{\bm{w}}) \geq \frac{\gamma}{2^7}\sqrt{\frac{nm}{c\left(b\log(4WJ\gamma^{-1}) - \log(\delta/2 - 2e^{-2^{-11}n\gamma^2})\right)}}\;, \end{aligned} \label{eq:lip_bound} \end{equation}
where $Lip(f_{\bm{w}})$ is the Lipschitz constant of $f_{\bm{w}}$: $\forall {\bm{x}}_1, {\bm{x}}_2$, $\|f_{\bm{w}}({\bm{x}}_1) - f_{\bm{w}}({\bm{x}}_2)\| \leq Lip(f_{\bm{w}})\|{\bm{x}}_1 - {\bm{x}}_2\|$. \end{theorem}
The proof is deferred to Appendix~\ref{sec:nonlinear_proof}. Theorem~\ref{thm:nonlinear} extends the results in~\cite{bubeck2021universal} to the case of adversarial training. Note that modern deep neural network models typically have millions of parameters, so $b \gg \max\{c, m, n\}$. In this case, we can approximate the lower bound (\ref{eq:lip_bound}) by $Lip(f_{\bm{w}}) \gtrsim \frac{\gamma}{2^7}\sqrt{\frac{nm}{bc\log(4WJ\gamma^{-1})}}$, and the right hand side increases with $\gamma$. Since $\gamma \mathrel{\mathop:}= \sigma^2_l + h^2(C, \epsilon) - C$, the lower bound increases with both $\sigma_l$ and $\epsilon$ but decreases as $C$ increases.
The Lipschitz constant is widely used to bound a model's adversarial vulnerability~\cite{ruan2018reachability, weng2018towards, weng2018evaluating}: larger Lipschitz constants indicate higher adversarial vulnerability. Recall that $\gamma$ needs to be non-negative, so $C$ is upper bounded, which means that the model is well fit to the adversarial training set and the adversarial vulnerability is approximately the generalization gap. Based on this, the adversarial vulnerability of a model increases with the size of the adversarial budget and the difficulty level of the training instances; it also increases as the training mean squared error decreases. That is, under the same adversarial budget, the adversarial vulnerability increases with the instances' difficulty, measured by $\sigma_l$ in our theorem; using the same training instances, the adversarial vulnerability increases with the adversarial budget measured by $\epsilon$. In addition, as adversarial training progresses, the mean squared error $C$ on the adversarial training instances becomes smaller, which makes the lower bound of the Lipschitz constant larger. This indicates that adversarial vulnerability becomes larger in the later phase of adversarial training.
\begin{wrapfigure}{r}{0.4\textwidth}
\centering \includegraphics[width = 0.38\textwidth]{figure/lipschitz/lip_curve.pdf} \caption{The curves of the Lipschitz upper bound when the model is adversarially trained by the easiest, the random and the hardest 10000 instances. The y-axis is log-scale.} \label{fig:lip_curve}
\end{wrapfigure}
We conduct empirical evidence to confirm the validty of Theorem~\ref{thm:nonlinear} in our settings. We use CIFAR10 as the dataset and RN18 as the network architecture. Since calculating the Lipschitz constant of a deep neural network is NP-hard~\cite{scaman2018lipschitz}, exactly calculating the Lipschitz constant~\cite{jordan2020exactly} can only be achieved for simple multi-layer perceptron (MLP) models, not for modern deep networks. Instead, we estimate the upper bound of the Lipschitz constant numerically, as in~\cite{scaman2018lipschitz}.
Table~\ref{tbl:upper_lip} demonstrates the upper bound of the Lipschitz constant of models trained in different settings. In the $l_\infty$ cases, we set $\epsilon$ to be $2/255$, $4/255$ and $8/255$; in the $l_2$ cases, we set $\epsilon$ to be $0.5$, $0.75$ and $1$. Due to the stochasticity introduced by power method, we run the algorithm in~\cite{scaman2018lipschitz} for $20$ times and report the average, we find the variance is negligible. Based on the results in Table~\ref{tbl:upper_lip}, it is clear that models adversarially trained by the hard training instances have much larger Lipschitz constant than the ones by the easy training instances in all cases.
Figure~\ref{fig:lip_curve} demonstrates the curves of the Lipschitz upper bound when the model is adversarially trained by the easiest, the random and the hardest 10000 instances. The adversarial budget is $l_\infty$ norm based and $\epsilon = 8 / 255$. We can clearly see that as the training goes, the Lipschitz upper bound increases in all cases. In addition, compared with training on easy instances, the Lipschitz upper bound of the models adversarially trained on hard instances increases much faster. These are consistent with our analysis in Theorem~\ref{thm:nonlinear}.
\begin{table}[!ht] \small \centering \begin{tabular}{p{1.6cm}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}} \Xhline{4\arrayrulewidth} \multirow{2}{*}{Training Set} & \multicolumn{3}{c}{Lipschitz in $l_\infty$ Cases ($\times 10^4$)} & \multicolumn{3}{c}{Lipschitz in $l_2$ Cases ($\times 10^4$)} \\ & $\epsilon = 2/255$ & $\epsilon = 4/255$ & $\epsilon = 8/255$ & $\epsilon = 0.50$ & $\epsilon = 0.75$ & $\epsilon = 1.00$ \\ \hline Easy10K & 5.91 & 6.06 & 14.54 & 3.34 & 3.67 & 2.91 \\ Random10K & 28.98 & 79.96 & 93.63 & 30.01 & 31.28 & 39.34 \\ Hard10K & 72.42 & 117.60 & 567.24 & 60.62 & 80.06 & 77.55 \\ \Xhline{4\arrayrulewidth} \end{tabular} \caption{Upper bound of the Lipschitz constant under different settings of $\epsilon$ and training instances.} \label{tbl:upper_lip} \end{table}
In this section, we study two previous methods that were found to reduce the overfitting problem in adversarial training. We show that they both implicitly avoid fitting hard adversarial instances, by either adaptive inputs or adaptive targets. We then study the new scenario of fine-tuning a pretrained model with additional data and demonstrate improved performance when we process easy and hard instances adaptively. The details of the experimental settings for this section are deferred to Appendix~\ref{subsec:casestudy_setting}.
\subsection{Adapting the Input: Instance-adaptive Adversarial Training} \label{sec:instancewise}
\begin{wrapfigure}{r}{0.4\textwidth}
\centering \includegraphics[width = 0.38\textwidth]{figure/casestudy/instancewise/200epoch_featuretractor_eps.pdf} \caption{We plot the instance adaptive adversarial budget size $\epsilon_i$ for the training set of CIFAR10 as a function of the instance difficulty $d({\bm{x}}_i)$. The model architecture is RN18 and $\hat{\epsilon}$ is $8 / 255$.} \label{fig:epsilon_final}
\end{wrapfigure}
Using an instance-adaptive adversarial budget has been shown to mitigate adversarial overfitting and yield a better trade-off between the clean and robust accuracy~\cite{balaji2019instance}. In instance-adaptive adversarial training (IAT), each training instance ${\bm{x}}_i$ maintains its own adversarial budget's size $\epsilon_i$ during training. In each epoch, $\epsilon_i$ increases to $\epsilon_i + \epsilon_\Delta$ if the instance is robust under this enlarged adversarial budget. By contrast, $\epsilon_i$ decreases to $\epsilon_i - \epsilon_\Delta$ if the instance is not robust under the original adversarial budget. Here, $\epsilon_{\Delta}$ is the step size of the adjustment.
We follow the experiments in~\cite{balaji2019instance}, and the learning curves in Figure~\ref{fig:casestudy_learncurve} (Appendix~\ref{sec:learncurve_casestudy}) clearly show that there is no adversarial overfitting in IAT.
In Figure~\ref{fig:epsilon_final}, we provide the final instance-wise adversarial budget size $\epsilon_i$, as a function of the corresponding instance's difficulty $d({\bm{x}}_i)$ defined under the PGD attack. It is clear that $d({\bm{x}}_i)$ is strongly correlated with $\epsilon_i$: the correlation between them is $0.844$. This means that IAT adaptively uses weaker attacks for hard instances and stronger attacks for easy ones. In essence, by adaptively adjusting the attacks and thus the input to the model, IAT avoids fitting the hard input-target pairs in PGD adversarial training and mitigates adversarial overfitting.
Furthermore, we conduct gradient analysis for all groups $\{{\mathcal{G}}_i\}_{i = 0}^9$ of training instances. By studying the gradient of the loss function w.r.t. the model parameters among instances in different groups, we find that the cosine similarity between the gradients of easy and hard instances is much higher in IAT than in vanilla adversarial training. This indicates that the gradients of hard instances are less noisy and more informative in IAT. More details and discussions are provided in Appendix~\ref{sec:grad_analysis}.
\subsection{Adapting the Target: Self-Adaptive Training} \label{sec:sat}
\begin{wrapfigure}{r}{0.4\textwidth}
\centering \includegraphics[width = 0.38\textwidth]{figure/casestudy/movetarget/weight.pdf} \caption{Average weights of different groups in the training set of CIFAR10 during training. During the warmup period (first $90$ epochs), the weight for every training instance is $1$. The model architecture is RN18.} \label{fig:casestudy_movetarget_weight}
\end{wrapfigure}
Self-adaptive training (SAT)~\cite{huang2020self} solves the adversarial overfitting issue by adapting the target. By contrast with common practice consisting of using a fixed target, usually the ground-truth, SAT adapts the target of each instance to the model's output. Specifically, after a warm-up period, the target ${\bm{t}}_i$ for an instance ${\bm{x}}_i$ is initialized as a one-hot vector by its ground-truth label ${\bm{y}}_i$ and updated in an iterative manner after each epoch as ${\bm{t}}_i \leftarrow \rho {\bm{t}}_i + (1 - \rho) {\bm{o}}_i$. Here, $\rho$ is a predefined momentum factor and ${\bm{o}}_i$ is the output probability of the current model on the corresponding clean instance. SAT uses the loss of TRADES~\cite{zhang2019theoretically} but replaces the ground-truth label $y$ with the adaptive target ${\bm{t}}_i$: ${\mathcal{L}}_{SAT}({\bm{x}}_i) = {\mathcal{L}}({\bm{x}}_i, {\bm{t}}_i) + \lambda \max_{\Delta_i \in {\mathcal{S}}(\epsilon)} KL({\bm{o}}_i || {\bm{o}}'_i)$, where $KL$ refers to the Kullback–Leibler divergence and $\lambda$ is the weight for the regularizer. Furthermore, SAT uses a weighted average to calculate the loss of a mini-batch; the weight assigned to each instance ${\bm{x}}_i$ is proportional to the maximum element of its target ${\bm{t}}_i$. The loss is then normalized to ensure that all instances' weights sum up to $1$. By using weighted averaging, the instances with confident predictions are strengthened, whereas the ambiguous instances are downplayed.
We follow the settings in~\cite{huang2020self}, and the learning curves in Figure~\ref{fig:casestudy_learncurve} (Appendix~\ref{sec:learncurve_casestudy}) confirm that SAT mitigates adversarial overfitting. Figure~\ref{fig:casestudy_movetarget_weight} depicts the average weights assigned to the training instances in the groups ${\mathcal{G}}_0$, ${\mathcal{G}}_3$, ${\mathcal{G}}_6$ and ${\mathcal{G}}_9$ during training. We can clearly see that the hard instances always have smaller weights than the easy ones, which means that the hard instances are downplayed and assigned smaller weights to update the model parameters. Compared with using the fixed ground-truth as targets, the adaptive targets in SAT are easier to fit for the difficult instances in adversarial training, because the targets are based on the model's outputs. In Appendix~\ref{sec:label_sat}, we show the training accuracy under the ground-truth label $y$ and the adaptive target ${\bm{t}}$. For easy instances, $y$ and ${\bm{t}}$ are mostly consistent, so the training accuracy in both cases is very high. By contrast, the training accuracy of hard instances under the adaptive target ${\bm{t}}$ is much higher than the one under the ground-truth $y$. Therefore, by replacing $y$ with ${\bm{t}}$, SAT avoids fitting hard input-target pairs.
As for IAT, we conduct gradient analysis on SAT and draw the same conclusion: the parameter gradients of the easy and hard instances have higher similarity in SAT than in vanilla adversarial training. More details and discussions are provided in Appendix~\ref{sec:grad_analysis}.
We observed adversarial overfitting to occur when the learning rate is small. Inspired by this finding, we study a new but realistic scenario: fine-tuning a pretrained model with additional data. While additional training data was shown to improve the performance of adversarial training~\cite{carmon2019unlabeled}, here we demonstrate that letting the model adaptively fit the easy and hard instances further improves the performance.
We conduct experiments on both CIFAR10 and SVHN, using WRN34 and RN18 models, respectively.
For CIFAR10, we use the same additional data as the one in~\cite{carmon2019unlabeled}; for SVHN, we use its extra held-out set. Our pretrained models are trained in the same way as in~\cite{rice2020overfitting}. When constructing a mini-batch, half of the training instances are taken from the additional data and the other half are randomly sampled from the original training set. We tune the learning rate and find that fixing it to $10^{-3}$ to be the best choice for all settings.
The additional experimental details are deferred to Appendix~\ref{subsec:casestudy_setting}.
We fine-tune the model for either 5 epochs or only 1 epoch, which means that each additional data instance is used either 5 times or only once. This is because we observed the performance of vanilla adversarial training to start decaying after $5$ epochs. As such, methods such as IAT and SAT from the previous sections are not applicable here, because they need a sufficient number of epochs to adjust either the adversarial budget's size or the targets. Nevertheless, we can still utilize ideas from these methods to improve the performance of fine-tuning. First, we exploit the observation that the confidence of the model's predictions is generally not high for hard instances. Similar to SAT, we therefore assign different weights for different training instances within a mini-batch to calculate the loss. The weight assigned to an instance is the maximum probability of its prediction, and it is then normalized to ensure that the weights within a mini-batch sum up to $1$. Formally, if we use ${\bm{o}}_{i, c}$ to denote the probability of classifying the $i$-th instance in the mini-batch as class $c$, then the weight assigned to this instance to calculate the loss is $w_i = \frac{\max_c {\bm{o}}_{i, c}}{\sum_{j} \max_c {\bm{o}}_{j, c}}$.
In addition to re-weighting, we can also add a KL regularization term measuring the KL divergence between the output probability of the clean instance and of the adversarial instance. The KL term encourages the adversarial outputs to be close to the clean ones. In other words, the clean outputs serve as adaptive targets. For hard instances, the clean and adversarial inputs are usually both misclassified.
Therefore, the clean outputs of these instances constitute simpler targets compared with the ground-truth labels. Note that our KL regularization term differs from the one in TRADES~\cite{zhang2019theoretically}, because the adversarial examples here are generated in the same way as vanilla adversarial training, whereas in TRADES they are generated to maximize the KL term. Ultimately, the loss objective of a mini-batch $\{{\bm{x}}_i\}_{i = 1}^B$ used for fine-tuning is expressed as
\begin{equation} \begin{aligned}
{\mathcal{L}}_{FT}(\{{\bm{x}}_i\}_{i = 1}^B) = \sum_{i = 1}^B w_i \left[ {\mathcal{L}}_\theta({\bm{x}}'_i) + \lambda KL({\bm{o}}_i || {\bm{o}}'_i) \right]\;, \end{aligned} \label{eq:ft} \end{equation} where $w_i$ is the adaptive weight when we use re-weighting, or $1 / B$ otherwise. $\lambda$ is $6$ when using the regularization term and $0$ otherwise. ${\mathcal{L}}_{FT}$ differs from ${\mathcal{L}}_{SAT}$ in Section~\ref{sec:sat}, because the first term of ${\mathcal{L}}_{FT}$ acts on the adversarial input. We observed this to yield better performance.
\begin{table*} \centering
\begin{tabular}{p{1.1cm}<{\centering}p{2.1cm}p{1.1cm}<{\centering}p{1.1cm}<{\centering}p{1.2cm}<{\centering}p{1.2cm}<{\centering}p{1.2cm}<{\centering}p{1.2cm}<{\centering}} \Xhline{4\arrayrulewidth} \multirow{2}{*}{Duration} & \multirow{2}{*}{Method} & \multirow{2}{*}{PGD10} & \multirow{2}{*}{PGD200} & APGD & APGD & \multirow{2}{*}{Square5K} & \multirow{2}{*}{\textbf{Overall}} \\ & & & & CE & DLR & & \\ \Xhline{4\arrayrulewidth} \multicolumn{8}{c}{\textbf{WRN34 on CIFAR10, $\epsilon = 8 / 255$}} \\ \hline \multicolumn{2}{l}{No Fine Tuning} & 56.27 & 55.60 & 53.91 & 54.01 & 59.57 & 52.01 \\ \hdashline \multirow{4}{*}{1 Epoch} &
Vanilla AT & 58.66 & 58.08 & 56.73 & 55.38 & 61.01 & 54.11 \\
& RW & 58.29 & 57.75 & 56.52 & 56.20 & 61.64 & 54.69 \\
& KL & 60.66 & 60.42 & 59.68 & 55.40 & 58.92 & \textbf{54.73} \\
& RW + KL & 58.87 & 58.54 & 57.88 & 55.46 & 58.26 & 54.69 \\ \hdashline \multirow{4}{*}{5 Epochs} &
Vanilla AT & 59.72 & 58.93 & 57.13 & 57.34 & 63.60 & 55.49 \\
& RW & 59.72 & 59.08 & 57.52 & 58.19 & 63.79 & 56.41 \\
& KL & 65.49 & 65.25 & 63.61 & 57.07 & 61.27 & 56.55 \\
& RW + KL & 62.46 & 62.16 & 61.12 & 57.74 & 61.06 & \textbf{56.99} \\ \Xhline{4\arrayrulewidth} \multicolumn{8}{c}{\textbf{RN18 on SVHN, $\epsilon = 0.02$}} \\ \hline \multicolumn{2}{l}{No Fine Tuning} & 71.88 & 70.79 & 68.45 & 69.39 & 71.12 & 67.77 \\ \hdashline \multirow{4}{*}{1 Epoch} &
Vanilla AT & 74.98 & 73.83 & 71.70 & 72.35 & 74.31 & 70.81 \\
& RW & 74.29 & 73.51 & 71.33 & 72.33 & 74.11 & 70.83 \\
& KL & 78.32 & 77.45 & 75.51 & 73.12 & 75.01 & 72.29 \\
& RW + KL & 77.50 & 76.70 & 75.26 & 73.40 & 74.73 & \textbf{72.53} \\ \hdashline \multirow{4}{*}{5 Epoch} &
Vanilla AT & 76.48 & 75.12 & 72.80 & 73.70 & 75.88 & 72.18 \\
& RW & 76.23 & 75.10 & 73.19 & 74.03 & 75.92 & 72.72 \\
& KL & 77.99 & 77.20 & 74.87 & 74.19 & 75.19 & 73.17 \\
& RW + KL & 77.05 & 76.38 & 74.40 & 74.56 & 75.72 & \textbf{73.35} \\ \Xhline{4\arrayrulewidth} \end{tabular} \caption{Robust test accuracy of fine-tuned WRN34 models on CIFAR10 and RN18 models on SVHN. All numbers are percentages. \textit{Vanilla AT} represents vanilla adversarial training in~\cite{madry2017towards}. \textit{RW} means that we use the re-weighting instances in the mini-batch. \textit{KL} means that the loss function uses the KL regularization term. } \label{tbl:ft}
\end{table*}
We provide the results on CIFAR10 and SVHN in Table~\ref{tbl:ft}.
in addition to vanilla adversarial training (vanilla AT), we evaluate models that were fine-tuned using either re-weighting, or KL regularization, or both. To avoid gradient masking and to comprehensively evaluate the robustness of the fine-tuned models, we report the robust accuracy using $5$ different types of attackers on the test set: the 10-iteration and 200-iteration PGD (PGD10, PGD200), the cross-entropy loss based AutoPGD (APGD-CE), the difference-of-logit-ratio based AutoPGD (APGD-DLR) introduced in~\cite{croce2020reliable} and the 5000-iteration Square Attack (Square5K)~\cite{andriushchenko2020square}, which is a strong black-box attack strategy. Finally, we report the overall robust accuracy of the ensemble of these $5$ attacks: an instance is considered robust only if it is robust against all of these $5$ attacks. Altogether, these results show that using re-weighting and KL regularization in fine-tuning improves the performance, in both the $1$-epoch case and the $5$-epoch one. This demonstrates that avoiding fitting hard adversarial examples helps to improve the generalization performance in adversarial fine-tuning.
\section{Case Study and Discussion} \label{sec:casestudy}
Our empirical observation and theoretical analysis indicate fitting hard adversarial leads to adversarial overfitting. In this section, we study existing methods and show the ones successfully mitigating adversarial overfitting all implicitly avoid fitting hard adversarial instances, which provides an explanation for their success. For methods encouraging fitting hard adversarial instances, they are shown failed to obtain truely robust models. Besides standard adversarial training, we also study the cases of fast adversarial training and adversarial fine-tuning with additional training data. The results indicate our findings are valid for different scenarios to obtain robust models. The detailed experimental settings for this section are in Appendix~\ref{subsec:mitigate_setting}.
\subsection{Standard Adversarial Training} \label{subsec:standard}
Existing methods mitigating adversarial overfitting can be generally divided into two categories: one is to use adaptive inputs, such as~\cite{balaji2019instance}; the other is to use adaptive targets, such as~\cite{chen2021robust, huang2020self}. Both categories aim to prevent the model from fitting hard input-target pairs.
We use instancewise adversarial training (IAT) and self-adaptive training (SAT) as examples of these two categories. Experiment details and results are provided in Appendix~\ref{subsec:app_revisit}. For IAT, we show high correlation between instance difficulty and its adversarial budget for training. Especially, we find hard instances are assigned smaller adversarial budgets for training, which indicates IAT prevents the model from fitting hard adversarial instances. For SAT, we show the accuracy of instances of different difficulty levels on the groundtruth and the adaptive targets. We find the hard instances have much higher accuracy on their adaptive targets compared with the ground truth, while such difference is much smaller for easy instances. Our results indicate SAT uses adaptive targets which are much easier to fit to avoid directly fitting hard adversarial instances.
\cite{zhang2021geometryaware} uses a geometric-aware reweighting scheme to assign different weights to different training instances. On the contrary, they assign larger weights to training instances which PGD breaks in fewer iterations. This means, they assign larger weights to hard adversarial instances, opposite to what our analysis indicates. However, their methods are later shown vulnerable against adaptive attacks~\cite{hitaj2021evaluating}, which refutes the claims in~\cite{zhang2021geometryaware} and thus indicates our claims.
\subsection{Fast Adversarial Training} \label{subsec:fast}
Adversarial training in~\cite{madry2017towards} introduces a significant computational overhead. To accelerate the algorithm, we utilize transferable adversarial examples (ATTA in~\cite{zheng2020efficient}), which stores the adversarial perturbation for each training instance as an initial point for the next epoch. Under this setting, one-step PGD is enough to generate good enough adversarial examples. We show that adaptively utilizing the easy and hard training instances not only mitigates adversarial overfitting, but also significantly improves the performance of the final model.
First, we introduce a reweighting scheme. In contrast to~\cite{zhang2021geometryaware}, we assign lower weights to hard instances when calculate the training objective. Specifically, each training instance is assigned a weight proportional to the adversarial output probability of the true label, which is closely related to our proposed difficulty function. The computational overhead of this reweighting scheme is negligible.
In addition to reweighting, we also use adaptive targets similar to SAT~\ref{huang2020self} to improve the performance. For each training instance $({\bm{x}}, y)$, we maintain an adaptive moving average target $\widetilde{{\bm{t}}}$. $\widetilde{{\bm{t}}}$ is updated in an exponential average manner in each epoch $\widetilde{{\bm{t}}} \leftarrow \rho\widetilde{{\bm{t}}} + (1 - \rho) {\bm{o}}'$ where $\rho$ is the momentum factor. Different from SAT~\ref{huang2020self}, we use the adversarial output ${\bm{o}}'$ instead of the clean output ${\bm{o}}$ to avoid an increase in computational complexity. The final adaptive target we use is ${\bm{t}} = \beta \mathbf{1}_y + (1 - \beta)\widetilde{{\bm{t}}}$ and thus the loss objective is ${\mathcal{L}}_{\bm{w}}({\bm{x}}', {\bm{t}})$. The factor $\beta$ controls how ``adaptive'' our target is: $\beta = 0$ yields a fully adaptive moving average target $\widetilde{{\bm{t}}}$ and $\beta = 1$ yields a one-hot target $\mathbf{1}_y$. We provide the pseudocode as Algorithm~\ref{alg:fast} in Appendix~\ref{subsec:mitigate_setting}.
We run experiments on CIFAR10 using WRN34 models under the $l_\infty$ adversarial budget of size $\epsilon = 8/255$, the standard setting where most fast adversarial training algorithms are benchmarked~\cite{croce2020robustbench}. We evaluate the model's robust accuracy on the test set by AutoAttack~\cite{croce2020reliable}, the popular and reliable attack for evaluation. The results are provided in Table~\ref{tbl:ft}, where the results of the baseline methods are taken from RobustBench~\cite{croce2020robustbench}. We also report the number of epochs and the number of forward and backward passes in a mini-batch update of each method. The product of these two values indicates the training complexity.
We can clearly see that both reweighting and adaptive targets improve the performance on top of ATTA~\cite{zheng2020efficient}. Note that our method based on adaptive targets achieve the best performance while needing only $1/4$ of the training time of~\cite{chen2021efficient}, the strongest baseline. \cite{wong2020fast} is the only baseline consuming less training time than ours, but its performance is much worse than ours; it suffers from catastrophic overfitting when using a WRN34 model. In Appendix~\ref{subsec:app_fastadv}, we provide the learning curves of our methods under different settings and show that both reweighting and adaptive targets mitigate adversarial overfitting. We also conduct an ablation study on the value of $\beta$ and find that a decrease in $\beta$ decreases the generalization gap. This indicates that the more adaptive the targets, the smaller the generalization gap.
\begin{table}[!ht] \centering \begin{tabular}{p{4.1cm}p{1.2cm}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}} \Xhline{4\arrayrulewidth} Method & Model & Epochs & Complexity & AA \\ \Xhline{4\arrayrulewidth} Shafahi et al. (2019)~\cite{shafahi2019adversarial} & WRN34 & 200 & 2 & 41.17 \\ Wong et al. (2020)~\cite{wong2020fast} & RN18 & 15 & 4 & 43.21 \\ Zheng et al. (2020)~\cite{zheng2020efficient} & WRN34 & 38 & 4 & 44.48 \\ Zhang et al. (2019)~\cite{zhang2019you} & WRN34 & 105 & 3 & 44.83 \\ Chen et al. (2021)~\cite{chen2021efficient} & WRN34 & 100 & 7 & 51.12 \\ \hdashline Reweighting & WRN34 & 38 & 4 & 46.15 \\ Adaptive Target & WRN34 & 38 & 4 & 51.17 \\ \Xhline{4\arrayrulewidth} \end{tabular} \caption{Comparison between different accelerated adversarial training methods in robust test accuracy against AutoAttack (AA). The baseline results are from RobustBench. \textit{Complexity} shows the number of forward passes and backward passes in one mini-batch update.} \label{tbl:fast} \end{table}
\begin{wraptable}{r}{.4\textwidth} \begin{tabular}{p{1.5cm}<{\centering}p{2.0cm}p{1.2cm}<{\centering}} \Xhline{4\arrayrulewidth} Duration & Method & AA \\ \Xhline{4\arrayrulewidth} \multicolumn{3}{c}{\textbf{WRN34 on CIFAR10, $\epsilon = 8 /255$}} \\ \hline \multicolumn{2}{l}{No Fine Tuning} & 52.01 \\ \hdashline \multirow{2}{*}{1 Epoch} &
Vanilla AT & 54.11 \\ & Ours & 54.69 \\ \hdashline \multirow{2}{*}{5 Epoch} &
Vanilla AT & 55.49 \\ & Ours & 56.99 \\ \Xhline{4\arrayrulewidth} \multicolumn{3}{c}{\textbf{RN18 on SVHN, $\epsilon = 0.02$}} \\ \hline \multicolumn{2}{l}{No Fine Tuning} & 67.77 \\ \hdashline \multirow{2}{*}{1 Epoch} &
Vanilla AT & 70.81 \\ & Ours & 72.53 \\ \hdashline \multirow{2}{*}{5 Epoch} &
Vanilla AT & 72.18 \\ & Ours & 73.35 \\ \Xhline{4\arrayrulewidth} \end{tabular} \caption{Robust accuracy of fine-tuned models against AutoAttack(AA).} \label{tbl:ft} \end{wraptable}
\subsection{Adversarial Finetuning with Additional Data} \label{subsec:finetune}
We observe that adversarial overfitting occurs in the small learning rate regime in Section~\ref{sec:overfit}. To further study this, we propose to fine-tune an adversarially pretrained model using additional training data, because we also use small learning rate to fine-tune a model. While additional training data was shown to be beneficial in~\cite{alayrac2019labels,carmon2019unlabeled}, we demonstrate that letting the model adaptively fit the easy and hard instances can further improve the performance.
We conduct experiments on both CIFAR10 and SVHN, using WRN34 and RN18 models, respectively. The experimental settings are the same as~\cite{carmon2019unlabeled} except the learning rate. We tune the learning rate and find that fixing it to $10^{-3}$ is the best choice. The model is fine-tuned for either $1$ epoch or $5$ epochs, which means that each additional training instance is used either $5$ times or only once. This is because we observed the performance of vanilla adversarial training to start decaying after $5$ epochs. As such, methods requiring many epochs such as~\cite{balaji2019instance} and~\cite{huang2020self} are not applicable here.
Our first technique, reweighting, is the same as in Section~\ref{subsec:fast}. In addition to reweighting, we can also add a KL regularization term measuring the KL divergence between the output probability of the clean instance and of the adversarial instance. The KL term encourages the adversarial output to be close to the clean one. In other words, the clean output probability serves as the adaptive target. For hard instances, the clean and adversarial inputs are usually both misclassified. Therefore, the clean outputs of these instances constitute simpler targets compared with the ground-truth labels. Ultimately, the loss objective of a mini-batch $\{{\bm{x}}_i\}_{i = 1}^B$ used for fine-tuning is expressed as ${\mathcal{L}}_{FT}(\{{\bm{x}}_i\}_{i = 1}^B) = \sum_{i = 1}^B w_i \left[ {\mathcal{L}}_{\bm{w}}({\bm{x}}'_i) + \lambda KL({\bm{o}}_i || {\bm{o}}'_i) \right]$ where $w_i$ is the adaptive weight when we use re-weighting, or $1 / B$ otherwise. $\lambda$ is $6$ when using the regularization term and $0$ otherwise.
We use both reweighting and KL regularization to fine-tune the model. Our results are shown in Table~\ref{tbl:ft}, where the robust test accuracy is also evaluated by AutoAttack. It is clear that our methods can improve the performance under all settings. We also conduct an ablation study in Appendix~\ref{subsec:app_finetune}. All these results show that avoiding fitting hard adversarial examples helps to improve the generalization performance in adversarial fine-tuning with additional training data.
\section{Conclusion}
We have investigated \textit{adversarial overfitting} from the perspective of the easy and hard training instances. Based on a quantitative metric to measure the instance difficulty, we have shown that a model's generalization performance under adversarial attacks degrades during the later phase of training as the model fits the hard adversarial instances. We have conducted theoretical analyses on both linear and nonlinear models. On an over-parameterized logistic regression model, we have shown that training on harder adversarial instances leads to poorer generalization performance and the gap increases with the size of the adversarial budget. On general nonlinear models, we have proven that the lower bound of a well-trained model's Lipschitz constant increases with the difficulty of the training instances. Finally, our case study on existing methods show that the ones successfully mitigating adversarial overfitting implicitly avoid fitting hard adversarial instances while the others fail to achieve true robustness. Our findings can also be applied in fast adversarial training and adversarial fine-tuning.
\begin{enumerate}
\item For all authors... \begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{\bf We summarize our contributions in the Introduction part.}
\item Did you describe the limitations of your work?
\answerYes{\bf Current theoretical analysis is based on linear models and separable data.
There is a gap between theory and practice.
We mention this limitation in the beginning of Section~\ref{sec:thm}.
We also mention that more quantitative analysis of training instance's impact on model's generalization performance and theoretical analysis on nonlinear models will be our further focus in the end of this paper.}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{\bf This work is research-oriented, its societal impacts depend on the its downstream applications. For the work itself, we don't see any negative societal impacts in the foreseeable future.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{} \end{enumerate}
\item If you are including theoretical results... \begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{\bf Please check the assumptions in Theorem~\ref{thm:converge} and~\ref{thm:main}.}
\item Did you include complete proofs of all theoretical results?
\answerYes{\bf See Section~\ref{sec:proof_converge} ,~\ref{sec:proof_main} and~\ref{sec:proof_coro} in Appendix.} \end{enumerate}
\item If you ran experiments... \begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{\bf See Section~\ref{sec:app_exp_settings} in Appendix.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{\bf We run the experiments in multiple times and report the average. The variance is small, so we did not report it explicitly in the table. For example, the variance of experimental results in Table 1 is smaller than $0.012$.}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{\bf See Section~\ref{sec:app_exp_settings_general} in Appendix.} \end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{\bf We mention the URL in the footnotes of the existing codes or data. We also cite the corresponding papers.}
\item Did you mention the license of the assets?
\answerYes{\bf All the assets are either on a github public repository or a public website. They are free for research purposes. We mention the specific license of each dataset or code in the footnotes of Appendix~\ref{sec:app_exp_settings}.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{\bf We provide our code in the supplemental material.}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{} \end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{} \end{enumerate}
\end{enumerate}
Ï
\begin{appendices}
\section{Notation} \label{sec:notation}
\begin{table}[!ht] \centering
\begin{tabular}{p{1.2cm}|p{3.6cm}|p{8cm}} \Xhline{4\arrayrulewidth} $b$ & Section~\ref{sec:thm_nonlinear} & The number of parameters in a general nonlinear model. \\ $c$ & Assumption~\ref{asp:iso}, Section~\ref{sec:thm_nonlinear} & The coefficient in isoperimetry. \\ $C$ & Section~\ref{sec:thm_nonlinear} & The mean squared error on the adversarial training set. \\ $d$ & Equation~\ref{eq:difficulty}, Section~\ref{sec:hardeasy} & The function introduced by the proposed difficulty metric. \\ ${\mathcal{D}}$ & Section~\ref{sec:hardeasy} & The data set. \\ $f_{\bm{w}}$ & Section~\ref{sec:intro} & The model parameterized by ${\bm{w}}$. \\ ${\mathcal{F}}$ & Theorem~\ref{thm:nonlinear}, Section~\ref{sec:thm_nonlinear} & The function space of the model. \\ ${\mathcal{G}}$ & Section~\ref{subsec:hardoverfit} & Groups of the training set divided by instance difficulty. \\ $h$ & Definition~\ref{def:h}, Section~\ref{sec:thm_nonlinear} & The bandwidth of the model's output range. \\ $J$ & Theorem~\ref{thm:nonlinear}, Section~\ref{sec:thm_nonlinear} & The Lipschitz constant of $f_{\bm{w}}$ w.r.t ${\bm{w}}$. \\ $K$ & Section~\ref{sec:thm} & The number of components in the data distribution. \\ $l$ & Section~\ref{sec:thm} & The component index where the training data is sampled. \\ $L$ & Assumption~\ref{asp:iso}, Section~\ref{sec:thm_nonlinear} & The Lipschitz constant of $f_{\bm{w}}$ w.r.t the input. \\ ${\mathcal{L}}$ & Section~\ref{sec:intro} & The loss function. \\ $m$ & Section~\ref{sec:thm} & Dimension of the input data. \\ $n$ & Section~\ref{sec:thm} & The number of training instances. \\ ${\bm{o}}$, ${\bm{o}}'$ & Section~\ref{sec:casestudy} & Model's output of the clean and the adversarial input. \\ $p$ & Section~\ref{sec:intro} & Shape of the adversarial budget. \\ $r$ & Equation~\ref{eq:gmm}, Section~\ref{sec:linear} & The coefficient in the GMM model. \\ ${\mathcal{R}}$ & Theorem~\ref{thm:main}, Section~\ref{sec:linear} & The robust test error. \\ ${\bm{t}}$, $\widetilde{{\bm{t}}}$ & Section~\ref{subsec:fast} & The adaptive target and the moving average target. \\ ${\bm{w}}$ & Section~\ref{sec:thm} & Model parameters. \\ $W$ & Theorem~\ref{thm:nonlinear}, Section~\ref{sec:thm_nonlinear} & The diameter upper bound of the parameter space. \\ ${\mathcal{W}}$ & Theorem~\ref{thm:nonlinear}, Section~\ref{sec:thm_nonlinear} & The space of model parameters. \\ ${\bm{x}}$, ${\bm{x}}'$, ${\mathbf{X}}$ & Section~\ref{sec:intro} \& Section~\ref{sec:thm} & Clean input, adversarial input and its matrix form. \\ $y$, ${\bm{y}}$ & Section~\ref{sec:intro} \& Section~\ref{sec:thm} & Label and its vector form. \\ $\alpha$ & Algorithm~\ref{alg:fast} & The step size of the adversarial attacks. \\ $\beta$ & Section~\ref{subsec:fast} & The coefficient controlling how adaptive the target is. \\ $\gamma$ & Theorem~\ref{thm:nonlinear}, Section~\ref{sec:thm_nonlinear} & The non-negative variable introduced in Theorem~\ref{thm:nonlinear}. \\ $\delta$ & Theorem~\ref{thm:nonlinear}, Section~\ref{sec:thm_nonlinear} & The probability introduced in Theorem~\ref{thm:nonlinear}. \\ $\epsilon$ & Section~\ref{sec:intro} & The size of the adversarial budget. \\ ${\bm{\eta}}$ & Equation~\ref{eq:gmm}, Section~\ref{sec:linear} & The direction of the mean of each GMM's component. \\ $\rho$ & Section~\ref{subsec:fast} & The momentum calculating the moving average target. \\ $\mu_l$, $\mu_l$ & Assumption~\ref{asp:iso}, Section~\ref{sec:thm_nonlinear} & Data distribution and its $l$-th component. \\ $\sigma$ & Assumption~\ref{asp:iso}, Section~\ref{sec:thm_nonlinear} & The conditional variance of the data distribution. \\
\hline \Xhline{4\arrayrulewidth} \end{tabular} \caption{The notation in this paper. We provide the section of their definition or first appearance.} \label{tbl:notation} \end{table}
\section{Proofs in Theoretical Analysis}
\subsection{Proof of Theorem~\ref{thm:converge}} \label{sec:proof_converge}
Similar to~\cite{soudry2018implicit}, we can assume all instances are positive without the loss of generality, this is because we can always redefine $y_i{\bm{x}}_i$ as the input. In this regard, the loss to optimize in a logistic regression model under the adversarial budget ${\mathcal{S}}^{(2)}(\epsilon)$ is:
\begin{equation} \begin{aligned}
{\mathcal{L}}_{\bm{w}}({\mathbf{X}}) = \sum_{i = 1}^n l({\bm{w}}^T {\bm{x}}_i - \epsilon \|{\bm{w}}\|) \end{aligned} \end{equation}
Here $l(\cdot)$ is the logistic function: $l(x) = \frac{1}{1 + e^{-x}}$. We use ${\mathbf{X}} \in {\mathbb{R}}^{n \times m}$ to represent the training set as said in Section~\ref{sec:thm}, then the loss function ${\mathcal{L}}({\bm{w}})$ is $\|{\mathbf{X}}\|^2$-smooth, where $\|{\mathbf{X}}\|^2$ is the maximal singular value of ${\mathbf{X}}$. Since function ${\mathcal{L}}_{\bm{w}}$ is convex on ${\bm{w}}$, so gradient descent of step size smaller than $2\|{\mathbf{X}}\|^{-2}$ will asymptotically converge to the global infimum of the function ${\mathcal{L}}_{\bm{w}}$ on ${\bm{w}}$.
Before proving Theorem~\ref{thm:converge}, we first introduce the following lemma:
\begin{lemma} \label{lem:equiv} Consider the max-margin vector $\widehat{{\bm{w}}}$ of the vanilla case defined in Equation~(\ref{eq:max_margin}), we then introduce the max margin vector $\widehat{{\bm{w}}'}$ defined under the adversarial attack of budget ${\mathcal{S}}^{(2)}(\epsilon)$ as follows: \begin{equation} \begin{aligned}
\widehat{{\bm{w}}'} = \argmin_{{\bm{w}}} \|{\bm{w}}\| \ \ &s.t. \ \forall i \in \{1, 2, ..., n\}, \ {\bm{w}}^T {\bm{x}}_i - \epsilon \|{\bm{w}}\| \geq 1 \end{aligned} \label{eq:adv_max_margin} \end{equation}
Then we have $\widehat{{\bm{w}}'}$ is collinear with $\widehat{{\bm{w}}}$, i.e., $\frac{\widehat{{\bm{w}}'}}{\|\widehat{{\bm{w}}'}\|} = \frac{\widehat{{\bm{w}}}}{\|\widehat{{\bm{w}}}\|}$ \end{lemma}
\begin{proof}
We show that $\widehat{{\bm{w}}} = \frac{1}{1 + \epsilon \|\widehat{{\bm{w}}'}\|} \widehat{{\bm{w}}'}$ and prove it by contraction.
Let's assume $\exists {\bm{v}},\ s.t.\ \|{\bm{v}}\| < \frac{\|\widehat{{\bm{w}}'}\|}{1 + \epsilon \|\widehat{{\bm{w}}'}\|} \mathrm{and} \ \forall i \in \{1, 2, ..., n\}, \ {\bm{v}}^T {\bm{x}}_i \geq 1$, then we can consider ${\bm{v}}' = (1 + \|\widehat{{\bm{w}}'}\|){\bm{v}}$. The $l_2$ norm of ${\bm{v}}'$ is smaller than that of $\widehat{{\bm{w}}'}$, and we have \begin{equation} \begin{aligned}
\forall i \in \{1, 2, ..., n\}, {\bm{v}}'^T{\bm{x}}_i - \epsilon \|{\bm{v}}'\| = (1 + \epsilon\|\widehat{{\bm{w}}'}\|){\bm{v}}^T{\bm{x}}_i - \epsilon\|{\bm{v}}'\| > (1 + \epsilon \|\widehat{{\bm{w}}'}\|) - \epsilon \|\widehat{{\bm{w}}'}\| = 1 \end{aligned} \label{eq:contraction} \end{equation}
Inequality~\ref{eq:contraction} shows we can construct a vector ${\bm{v}}'$ whose $l_2$ norm is smaller than $\widehat{{\bm{w}}'}$ and satisfying the condition~(\ref{eq:adv_max_margin}), this contracts with the optimality of $\widehat{{\bm{w}}'}$. Therefore, there is no solution of condition~(\ref{eq:max_margin}) whose norm is smaller than $\frac{\|\widehat{{\bm{w}}'}\|}{1 + \epsilon \|\widehat{{\bm{w}}'}\|}$.
On the other hand, $\frac{1}{1 + \epsilon \|\widehat{{\bm{w}}'}\|} \widehat{{\bm{w}}'}$ satisfies the condition~(\ref{eq:max_margin}) and its $l_2$ norm is $\frac{\|\widehat{{\bm{w}}'}\|}{1 + \epsilon \|\widehat{{\bm{w}}'}\|}$. As a result, we have $\widehat{{\bm{w}}} = \frac{1}{1 + \epsilon \|\widehat{{\bm{w}}'}\|} \widehat{{\bm{w}}'}$. That means $\widehat{{\bm{w}}}$ and $\widehat{{\bm{w}}'}$ are collinear. \end{proof}
With Lemma~\ref{lem:equiv}, Theorem~\ref{thm:converge} is more straightforward, whose proof is shown below. Regarding the convergence analysis of the logistic regression model in non-adversarial cases, we encourage the readers to find more details in~\cite{ji2019implicit, soudry2018implicit}.
\begin{proof}
Theorem 1 in~\cite{ji2019implicit} and Theorem 3 in~\cite{soudry2018implicit} proves the convergence of the direction of the logistic regression parameters in different cases. In this regard, we can let ${\bm{w}}_\infty = \lim_{u \to \infty} \frac{{\bm{w}}(u)}{\|{\bm{w}}(u)\|}$. That is to say, for sufficiently large $u$, the direction of the parameter ${\bm{w}}(u)$ can be considered fixed. As a result, the adversarial perturbations of each data instance ${\bm{x}}_i$ is fixed, i.e., $\epsilon {\bm{w}}_\infty$.
We can then apply the conclusion of Theorem 3 in~\cite{soudry2018implicit} here, the only difference is the data points are $\{{\bm{x}}_i - \epsilon {\bm{w}}_\infty\}_{i = 1}^n$. Therefore, the parameter ${\bm{w}}(u)$ will converge to the $l_2$ max margin of the dataset $\{{\bm{x}}_i -\epsilon {\bm{w}}_\infty\}_{i = 1}^n$. When $t \to \infty$, we have ${\bm{w}}(u)^T ({\bm{x}}_i - \epsilon {\bm{w}}_\infty) = {\bm{w}}(u)^T{\bm{x}}_i - \epsilon \|{\bm{w}}(u)\|$. This is exactly the adversarial max margin condition in (\ref{eq:adv_max_margin}). Based on Lemma~\ref{lem:equiv}, we have $\lim_{u \to \infty} \frac{{\bm{w}}(u)}{\|{\bm{w}}(u)\|} = \frac{\widehat{{\bm{w}}'}}{\|\widehat{{\bm{w}}'}\|} = \frac{\widehat{{\bm{w}}}}{\|\widehat{{\bm{w}}}\|}$ \end{proof}
\subsection{Proof of Theorem~\ref{thm:main}} \label{sec:proof_main}
Given the parameter ${\bm{w}}$ of the logistic regression model, we can first calculate the robust error for the $k$-th component of the GMM model defined in~(\ref{eq:gmm}).
\begin{lemma} \label{lemma:acc} The 0-1 classification error of a linear classifier ${\bm{w}}$ under the adversarial attack of the budget ${\mathcal{S}}^{(2)}(\epsilon)$ for the $k$-th component of the GMM model defined in~(\ref{eq:gmm}) is: \begin{equation} \begin{aligned}
\widehat{{\mathcal{R}}}_k(\epsilon) = \Phi(\frac{r_k {\bm{w}}^T {\bm{\eta}}}{\|{\bm{w}}\|} - \epsilon) \end{aligned} \end{equation} where $\Phi(x) = {\mathbb{P}}(Z > x), Z \sim {\mathcal{N}}(0, 1)$. \end{lemma}
\begin{proof}
For a random drawn data instance $({\bm{x}}, y)$, the adversarial perturbation is $-y \epsilon \frac{{\bm{w}}}{\|{\bm{w}}\|}$. Let's decompose ${\bm{x}}$ as $r_k y{\bm{\eta}} + {\bm{z}}$, where ${\bm{z}} \sim {\mathcal{N}}(0, {\mathbf{I}})$. Then, we have \begin{equation} \begin{aligned}
\widehat{{\mathcal{R}}}_k(\epsilon) &= {\mathbb{P}}(y{\bm{w}}^T({\bm{x}} - y\epsilon\frac{{\bm{w}}}{\|{\bm{w}}\|}) < 0) = {\mathbb{P}}(y{\bm{w}}^T(r_k y{\bm{\eta}} + {\bm{z}} - y\epsilon\frac{{\bm{w}}}{\|{\bm{w}}\|}) < 0) \\
&= {\mathbb{P}}(-y{\bm{w}}^T{\bm{z}} > r_k {\bm{w}}^T{\bm{\eta}} - \epsilon\|{\bm{w}}\|) \end{aligned} \end{equation}
Since ${\bm{z}} \sim {\mathcal{N}}(0, {\mathbf{I}})$, we have $-y{\bm{w}}^T{\bm{z}} \sim {\mathcal{N}}(0, (-y{\bm{w}}^T)^T(-y{\bm{w}}^T)) = {\mathcal{N}}(0, {\bm{w}}^T{\bm{w}})$. Furthermore $\frac{-y{\bm{w}}^T{\bm{z}}}{\|{\bm{w}}\|} \sim {\mathcal{N}}(0, 1)$, and we can further simplify $\widehat{{\mathcal{R}}}_k(\epsilon)$ as follows:
\begin{equation} \begin{aligned}
\widehat{{\mathcal{R}}}_k(\epsilon) = {\mathbb{P}}(\frac{-y{\bm{w}}^T{\bm{z}}}{\|{\bm{w}}\|} > \frac{r_k {\bm{w}}^T{\bm{\eta}}}{\|{\bm{w}}\|} - \epsilon) = \Phi(\frac{r_k {\bm{w}}^T {\bm{\eta}}}{\|{\bm{w}}\|} - \epsilon) \end{aligned} \end{equation} \end{proof}
With Lemma~\ref{lemma:acc}, we can straightforwardly calculate the robust error for all components of the GMM model defined in~(\ref{eq:gmm}):
\begin{equation} \begin{aligned}
\widehat{{\mathcal{R}}}(\epsilon) = \sum_{k = 1}^K p_k \Phi(\frac{r_k {\bm{w}}^T {\bm{\eta}}}{\|{\bm{w}}\|} - \epsilon) \end{aligned} \label{eq:testacc} \end{equation}
On the other hand, Theorem~\ref{thm:converge} indicates the parameter ${\bm{w}}$ will converge to the $l_2$ max margin. However, for arbitrary training set, we do not have the closed form of ${\bm{w}}$, which is a barrier for the further analysis. Nevertheless, results from~\cite{wang2020benign} indicates in the over-parameterization regime, the parameter ${\bm{w}}$ will converge to min-norm interpolation of the data with high probability.
\begin{lemma}{(Directly from Theorem 1 in~\cite{wang2020benign})} \label{lemma:min_norm}
Assume $n$ training instances drawn from the $l$-th mode of the described distribution in~(\ref{eq:gmm}) and each of them is a $m$-dimensional vector. If $\frac{m}{n\log n}$ is sufficiently large\footnote{Specifically, $m$ and $n$ need to satisfy $m > 10 n \log n + n -1$ and $m > C n r_l \sqrt{\log 2 n}\|{\bm{\eta}}\|$. The constant $C$ is derived in the proof of Theorem 1 in~\cite{wang2020benign}.}, then the $l_2$ max margin vector in Equation (\ref{eq:max_margin}) will be the same as the solution of the min-norm interpolation described below with probability at least $(1 - O(\frac{1}{n}))$. \begin{equation} \begin{aligned}
\bar{{\bm{w}}} = \argmin_{{\bm{w}}} \|{\bm{w}}\| \ \ s.t. \ \forall i \in \{1, 2, ..., n\}, \ y_i = {\bm{w}}^T {\bm{x}}_i \end{aligned} \label{eq:min_norm} \end{equation} \end{lemma}
Since the min-norm interpolation has a closed solution $\bar{{\bm{w}}} = {\mathbf{X}}^T ({\mathbf{X}} {\mathbf{X}}^T)^{-1}{\bm{y}}$, Lemma~\ref{lemma:min_norm} will greatly facilitate the calculation of ${\mathbb{R}}({\bm{w}})$ in Theorem~\ref{thm:main}. To simplify the notation, we first define the following variables.
\begin{equation} \begin{aligned} {\mathbf{U}} = {\mathbf{Q}}{\mathbf{Q}}^T,\ {\bm{d}} = {\mathbf{Q}}{\bm{\eta}},\ s = {\bm{y}}^T{\mathbf{U}}^{-1}{\bm{y}},\ t = {\bm{d}}{\mathbf{U}}^{-1}{\bm{d}},\ v = {\bm{y}}^T{\mathbf{U}}^{-1}{\bm{d}} \end{aligned} \label{eq:def} \end{equation}
The proof of Theorem~\ref{thm:main} is then presented below.
\begin{proof}
Based on (\ref{eq:testacc}), the key is to simplify the term $\frac{{\bm{w}}^T {\bm{\eta}}}{\|{\bm{w}}\|}$, let's denote it by $A$, then we have: \begin{equation} \begin{aligned} A^2 = \frac{{\bm{\eta}}^T {\bm{w}} {\bm{w}}^T {\bm{\eta}}}{{\bm{w}}^T {\bm{w}}} = \frac{({\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\mathbf{X}}{\bm{\eta}})^2}{{\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\bm{y}}} \end{aligned} \label{eq:r2} \end{equation}
The key challenge here is to calculate the term $({\mathbf{X}}{\mathbf{X}}^T)^{-1}$ where ${\mathbf{X}} = r_l {\bm{y}}{\bm{\eta}}^T + Q$. Here we utilize Lemma 3 of~\cite{wang2020benign} and Woodbury identity~\cite{horn2012matrix}, we have:
\begin{equation} \begin{aligned}
{\bm{y}}^T ({\mathbf{X}}{\mathbf{X}})^{-1} = {\bm{y}}^T{\mathbf{U}}^{-1} - \frac{(r_l^2 s\|{\bm{\eta}}\|^2 + r_l^2v^2 + r_l v -r_l^2 st){\bm{y}}^T +r_l s{\bm{d}}^T}{r_l^2s(\|{\bm{\eta}}\|^2 - t) + (r_l v + 1)^2}{\mathbf{U}}^{-1} \end{aligned} \label{eq:inverse} \end{equation}
Here, $s$, $t$, $v$, ${\mathbf{U}}$ and ${\bm{d}}$ are defined in Equation (\ref{eq:def}). The scalar divisor comes from the matrix inverse calculation. Base of Equation (\ref{eq:inverse}), we can then calculate ${\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\bm{y}}$ and ${\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\mathbf{X}}{\bm{\eta}}$.
\begin{equation} \begin{aligned}
{\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\bm{y}} &= s - \frac{(r_l^2 s \|{\bm{\eta}}\|^2 + r_l^2 v^2 + r_l v - r_l^2st)s + r_l sv}{r_l^2 s(\|{\bm{\eta}}\|^2 - t) + (r_l v + 1)^2} \\
&= \frac{s}{r_l^2 s(\|{\bm{\eta}}\|^2 - t) + (r_l v + 1)^2} \end{aligned} \label{eq:yy} \end{equation}
\begin{equation} \begin{aligned} {\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\mathbf{X}}{\bm{\eta}} &= {\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}(r_l{\bm{y}}{\bm{\eta}}^T + Q){\bm{\eta}} \\
&= r_l \|{\bm{\eta}}\|^2{\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\bm{y}} + {\bm{y}}^T({\mathbf{X}}{\mathbf{X}}^T)^{-1}{\bm{d}} \\
&= \frac{r_l s(\|{\bm{\eta}}\|^2 - t) + r_l v^2 + v}{r_l^2 s(\|{\bm{\eta}}\|^2 - t) + (r_l v + 1)^2} \end{aligned} \label{eq:yd} \end{equation}
Plug Equation (\ref{eq:yy}) and (\ref{eq:yd}) into (\ref{eq:r2}), we have:
\begin{equation} \begin{aligned}
A^2 &= \frac{\left( r_l s(\|{\bm{\eta}}\|^2 - t) + r_l v^2 + v \right)^2}{s\left( r_l^2 s(\|{\bm{\eta}}\|^2 - t) + (r_l v + 1)^2 \right)} \\
&= \frac{s(\|{\bm{\eta}}\|^2 - t) + v^2}{s} - \frac{\|{\bm{\eta}}\|^2 - t}{r_l^2 s(\|{\bm{\eta}}\|^2 - t) + (r_l v + 1)^2} \\
&= \frac{s(\|{\bm{\eta}}\|^2 - t) + v^2}{s} - \frac{1}{\left( \frac{s(\|{\bm{\eta}}\|^2 - t) + v^2}{\|{\bm{\eta}}\|^2 - t} \right)r_l^2 + \frac{2v}{\|{\bm{\eta}}\|^2 - t}r_l + \frac{1}{\|{\bm{\eta}}\|^2 - t}} \end{aligned} \label{eq:final} \end{equation}
Plug (\ref{eq:final}) into (\ref{eq:testacc}), we then obtain the robust error on all components of the GMM defined in (\ref{eq:gmm}):
\begin{equation} \centering \begin{aligned} {\mathcal{R}}(r_l, \epsilon) = \sum_{k = 1}^K p_k \Phi\left( r_k g(r_l) - \epsilon \right),\ g(r_l) = (C_1 - \frac{1}{C_2 r_l^2 + C_3})^{\frac{1}{2}} \\
C_1 = \frac{s(\|{\bm{\eta}}\|^2 - t) + v^2}{s},\ C_2 = \frac{s(\|{\bm{\eta}}\|^2 - t) + v^2}{\|{\bm{\eta}}\|^2 - t},\ C_3 = \frac{2v}{\|{\bm{\eta}}\|^2 - t}r_l + \frac{1}{\|{\bm{\eta}}\|^2 - t}. \end{aligned} \label{eq:r_final} \end{equation}
We study the sign of $C_1$ and $C_2$. Consider ${\mathbf{U}} = {\mathbf{Q}}{\mathbf{Q}}^T$ is a positive semidefinite matrix, so $s = {\bm{y}}{\mathbf{U}}^{-1}{\bm{y}}^T \geq 0$. In addition, we have $\|{\bm{\eta}}\|^2 - t = {\bm{\eta}}^T\left({\mathbf{I}} - ({\mathbf{Q}}{\mathbf{Q}}^T)^{-1} \right){\bm{\eta}}$. Since ${\mathbf{I}} - ({\mathbf{Q}}{\mathbf{Q}}^T)^{-1} = ({\mathbf{I}} - ({\mathbf{Q}}{\mathbf{Q}}^T)^{-1})^T({\mathbf{I}} - ({\mathbf{Q}}{\mathbf{Q}}^T)^{-1})$ is a positive semidefinite matrix, we can obtain ${\mathbf{I}} - ({\mathbf{Q}}{\mathbf{Q}}^T)^{-1}$ is also a positive semidefinite matrix. As a result, $C_1$ and $C_2$ are both non-negative.
\end{proof}
\subsection{Proof of Corollary~\ref{coro:epsilon}} \label{sec:proof_coro}
To prove Corollary~\ref{coro:epsilon}, we first prove the following lemma:
\begin{lemma} \label{lemma:epsilon} Under the condition of Theorem~\ref{thm:main} and ${\mathcal{R}}$ in Equation (\ref{eq:r}), $\frac{\partial {\mathcal{R}}(r_l, \epsilon)}{\partial r_l}$ is negative and monotonically decreases with $\epsilon$. \end{lemma}
\begin{proof}
Based on Equation (\ref{eq:r_final}), we have:
\begin{equation} \begin{aligned} \frac{\partial {\mathcal{R}}(r_l, \epsilon)}{\partial r_l} = \sum_{k = 1}^K p_k \Phi'(r_k g(r_l) - \epsilon) \frac{\partial g(r_l)}{\partial r_l} \end{aligned} \end{equation}
Since the training data is separable, we have $\forall k, r_k {\bm{w}}^T{\bm{\eta}} - \epsilon \|{\bm{w}}\| > 0$, which is equivalent to the following:
\begin{equation} \begin{aligned} \forall k, r_k g(r_l) - \epsilon > 0 \end{aligned} \end{equation}
First, $p_k$ is a positive number by definition. Consider function $\Phi(x)$ monotonically decrease with $x$ and is convex when $x > 0$, so $\forall k, \Phi'(r_k g(r_l) - \epsilon)$ is negative and decreases with $\epsilon$. In addition, $g(r_l)$ increases with $r_l$ and is independent on $\epsilon$, so $\frac{\partial g(r_l)}{\partial r_l}$ can be considered as a positive constant. Therefore, $\frac{\partial {\mathcal{R}}(r_l, \epsilon)}{\partial r_l}$ is negative and monotonically decreases with $\epsilon$.
\end{proof}
Now, we are ready to prove Corollary~\ref{coro:epsilon}:
\begin{proof}
We subtract the left hand side from the right hand side in the inequality of Corollary~\ref{coro:epsilon}:
\begin{equation} \begin{aligned} \left[{\mathcal{R}}(r_j, \epsilon_1) - {\mathcal{R}}(r_i, \epsilon_1)\right] - \left[{\mathcal{R}}(r_j, \epsilon_2) - {\mathcal{R}}(r_i, \epsilon_2)\right] &= \int_{r_i}^{r_j} \frac{\partial {\mathcal{R}}(r_l, \epsilon_1)}{\partial r_l} d_{r_l} - \int_{r_i}^{r_j} \frac{\partial {\mathcal{R}}(r_l, \epsilon_2)}{\partial r_l} d_{r_l} \\ &= \int_{r_i}^{r_j} \left[\frac{\partial {\mathcal{R}}(r_l, \epsilon_1)}{\partial r_l} - \frac{\partial {\mathcal{R}}(r_l, \epsilon_2)}{\partial r_l}\right] d_{r_l} \\ &> 0 \end{aligned} \label{eq:substract} \end{equation}
The last inequality is based on $r_j > r_i$, $\epsilon_2 > \epsilon_1$, Lemma~\ref{lemma:epsilon}, which indicates $\left[\frac{\partial {\mathcal{R}}(r_l, \epsilon_1)}{\partial r_l} - \frac{\partial {\mathcal{R}}(r_l, \epsilon_2)}{\partial r_l}\right]$ is always positive. We reorganize (\ref{eq:substract}) and obtain ${\mathcal{R}}(r_i, \epsilon_1) - {\mathcal{R}}(r_j, \epsilon_1) < {\mathcal{R}}(r_i, \epsilon_2) - {\mathcal{R}}(r_j, \epsilon_2)$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:nonlinear}} \label{sec:nonlinear_proof}
We start with the following lemma.
\begin{lemma} \label{lemma:nonlinear_1}
Given the assumptions of Theorem~\ref{thm:nonlinear}, we define $g({\bm{x}}) = \mathbb{E}(y|{\bm{x}})$, $z({\bm{x}}) = y - g({\bm{x}})$ and consider $\gamma = \sigma^2_l + h^2(C, \epsilon) - C$, then the following inequality holds. \begin{equation} \begin{aligned} &\forall a \in (0, 1), {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 \leq C) \\ & \leq 2 e^{-\frac{na^2\gamma^2}{8}} + {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n}\sum_{i = 1}^n f_{\bm{w}}({\bm{x}}_i)z({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma) \end{aligned} \label{eq:nonlinear1} \end{equation} \end{lemma}
\begin{proof} Given the definition of $h(C, \epsilon)$, we have: \begin{equation} \begin{aligned} (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 &= [(y_i - f_{\bm{w}}({\bm{x}}_i)) + (f_{\bm{w}}({\bm{x}}_i) - f_{\bm{w}}({\bm{x}}'_i))]^2 \\ &\geq (y_i - f_{\bm{w}}({\bm{x}}_i))^2 + (f_{\bm{w}}({\bm{x}}_i) - f_{\bm{w}}({\bm{x}}'_i))^2 \\ &\geq (y_i - f_{\bm{w}}({\bm{x}}_i))^2 + h^2(C, \epsilon) \end{aligned} \end{equation}
For the first inequality, ${\bm{x}}'_i$ is the adversarial example which tries to maximize the loss objective, $y_i \in \{-1, +1\}$ and the range of $f_{\bm{w}}$ is $[-1, +1]$, so $\langle y_i - f_{\bm{w}}({\bm{x}}_i), f_{\bm{w}}({\bm{x}}_i) - f_{\bm{w}}({\bm{x}}'_i) \rangle \geq 0$. The second inequality is based on the definition of $h^2(C, \epsilon)$ in Definition~\ref{def:h}. As a result, we can simplify the left hand side of (\ref{eq:nonlinear1}) as follows:
\begin{equation} \begin{aligned} {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 \leq C) &\leq {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}_i))^2 \leq C - h^2(C, \epsilon)) \end{aligned} \label{eq:remove_adv} \end{equation}
We consider the sequence $\{z({\bm{x}}_i)\}_{i = 1}^n$, it is i.i.d with $\mathbb{E}_{\mu_l}(z({\bm{x}})^2) = \mathbb{E}_{\mu_l}[Var(y|{\bm{x}})] = \sigma_l^2$. Since the range of the prediction is $[-1, +1]$, so $z^2({\bm{x}}) \in [0, 4]$. Then, we have the following inequality by Hoeffding's inequality~\cite{hoeffding1994probability}.
\begin{equation} \begin{aligned} \forall a \in (0, 1), {\mathbb{P}}(\frac{1}{n} \sum_{i = 1}^n z^2({\bm{x}}_i) \leq \sigma^2_l - a\gamma) \leq e^{- \frac{na^2\gamma^2}{8}} \end{aligned} \label{eq:hoeffding1} \end{equation}
Similarly, we consider the sequence $\{z({\bm{x}}_i)g({\bm{x}}_i)\}_{i = 1}^n$, the following inequality holds based on the Hoeffding's inequality and the fact $\mathbb{E}(z({\bm{x}})g({\bm{x}})) = 0$, $z({\bm{x}})g({\bm{x}}) \in [-2, +2]$.
\begin{equation} \begin{aligned} \forall a \in (0, 1), {\mathbb{P}}(\frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)g({\bm{x}}_i) \leq a\gamma) \leq e^{- \frac{na^2\gamma^2}{8}} \end{aligned} \label{eq:hoeffding2} \end{equation}
Now we study the right hand side of (\ref{eq:remove_adv}):
\begin{equation} \begin{aligned} \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}_i))^2 &= \frac{1}{n} \sum_{i = 1}^n \left( z^2({\bm{x}}_i) + (g({\bm{x}}_i) - f_{\bm{w}}({\bm{x}}_i))^2 + 2z({\bm{x}}_i)(g({\bm{x}}_i) - f_{\bm{w}}({\bm{x}}_i)) \right) \\ &\geq \frac{1}{n} \sum_{i = 1}^n \left( z^2({\bm{x}}_i) + 2z({\bm{x}}_i)g({\bm{x}}_i) - 2z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \right) \end{aligned} \end{equation}
Consider the following reasoning:
\begin{equation} \begin{aligned} \left\{ \begin{aligned} &\frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}_i))^2 \leq C - h^2(C, \epsilon) = \sigma^2_l - \gamma \\ & \frac{1}{n} \sum_{i = 1}^n z^2({\bm{x}}_i) \geq \sigma^2_l - a \gamma \\ &\frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)g({\bm{x}}_i) \geq -a \gamma \end{aligned} \right.\ \Longrightarrow \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma \end{aligned} \label{eq:reason} \end{equation}
As a result, we have:
\begin{equation} \begin{aligned} &{\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}_i) \leq C - h^2(C, \epsilon))) \\ \leq & {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z^2({\bm{x}}_i) \leq \sigma^2_l - a \gamma)
+ {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)g({\bm{x}}_i) \geq -a \gamma) +\\
&{\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma) \\
\leq & 2 e^{-\frac{na^2\gamma^2}{8}} + {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma) \end{aligned} \label{eq:reason_bound} \end{equation}
The first inequality is based on the reasoning of (\ref{eq:reason}). The second inequality is based on (\ref{eq:hoeffding1}) and (\ref{eq:hoeffding2}).
Based on the inequality (\ref{eq:remove_adv}) and (\ref{eq:reason_bound}), we conclude the proof.
\end{proof}
To further simplify the right hand side of (\ref{eq:nonlinear1}), ${\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma)$ needs to be bounded, and this is solved by the following lemma.
\begin{lemma} \label{lemma:nonlinear2} Given the assumptions of Theorem~\ref{thm:nonlinear} and the definition of $g({\bm{x}})$, $z({\bm{x}})$ in Lemma~\ref{lemma:nonlinear_1}, then the following inequality holds. \begin{equation} \begin{aligned} \forall a \in (0, 1), a_1 > 0, a_2 > 0\ \mathrm{and}\ a_1 + a_2 &= \frac{1}{2}(1 - 3a),\\
{\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma) &\leq 2 |{\mathcal{F}}| e^{- \frac{nm}{144cL^2}a^2_1 \gamma^2} + 2 e^{-\frac{n}{8}a^2_2\gamma^2} \end{aligned} \label{eq:nonlinear2} \end{equation} \end{lemma}
\begin{proof} We recall that the data points $\{{\bm{x}}_i, y_i\}_{i = 1}^n$ are sampled from the distribution $\mu_l$, which is $c$-isoperimetric. For any $L$-Lipschitz function $f$, we have: \begin{equation} \begin{aligned}
\forall t, {\mathbb{P}}[|f_{\bm{w}}({\bm{x}}) - \mathbb{E}_{\mu_l}(f_{\bm{w}})| \geq t] \leq 2 e^{-\frac{mt^2}{2cL^2}} \end{aligned} \end{equation}
Since $z({\bm{x}}) = y - g({\bm{x}}) \in [-2, +2]$, we can then bound ${\mathbb{P}}[z({\bm{x}})(f_{\bm{w}}({\bm{x}}) - \mathbb{E}_{\mu_l}(f_{\bm{w}})) \geq t]$: \begin{equation} \begin{aligned}
\forall t, {\mathbb{P}}[z({\bm{x}})(f_{\bm{w}}({\bm{x}}) - \mathbb{E}_{\mu_l}(f_{\bm{w}})) \geq t] &\leq {\mathbb{P}}[|z({\bm{x}})(f_{\bm{w}}({\bm{x}}) - \mathbb{E}_{\mu_l}(f_{\bm{w}}))| \geq t] \\
&\leq {\mathbb{P}}[|(f_{\bm{w}}({\bm{x}}) - \mathbb{E}_{\mu_l}(f_{\bm{w}}))| \geq \frac{t}{2}] \leq 2 e^{-\frac{mt^2}{8cL^2}} \end{aligned} \end{equation}
Here we utilize the proposition in~\cite{vershynin2018high, van2014probability}\footnote{Proposition 2.6.1 in \cite{vershynin2018high} and Exercise 3.1 in \cite{van2014probability}}, which claims \textit{if $\{X_i\}_{i = 1}^n$ are independent variables and all $C$-subgaussian, then $\frac{1}{\sqrt{n}}\sum_{i = 1}^n X_i$ is $18C$-subgaussian.} Therefore, we have:
\begin{equation} \begin{aligned} \forall t, {\mathbb{P}}[\frac{1}{\sqrt{n}} \sum_{i = 1}^n z({\bm{x}}_i)(f_{\bm{w}}({\bm{x}}_i) - \mathbb{E}_{\mu_l}(f_{\bm{w}})) \geq t] \leq 2 e^{-\frac{mt^2}{144cL^2}} \end{aligned} \end{equation}
Let $t = a_1\gamma\sqrt{n}$, then we have:
\begin{equation} \begin{aligned} {\mathbb{P}}[\frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)(f_{\bm{w}}({\bm{x}}_i) - \mathbb{E}_{\mu_l}(f)) \geq a_1\gamma] \leq 2 e^{-\frac{nm}{144cL^2}a^2_1\gamma^2} \end{aligned} \label{eq:p1_nonlinear2} \end{equation}
In addition, we can bound ${\mathbb{P}}[\frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)\mathbb{E}_{\mu_l}(f_{\bm{w}}) \geq a_2\gamma]$ by:
\begin{equation} \begin{aligned}
{\mathbb{P}}[\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)\mathbb{E}_{\mu_l}(f_{\bm{w}}) \geq a_2\gamma] \leq {\mathbb{P}}[\frac{1}{n} \sum_{i = 1}^n |z({\bm{x}}_i)| \geq a_2\gamma] \leq 2 e^{-\frac{n}{8}a^2_2\gamma^2} \end{aligned} \label{eq:p2_nonlinear2} \end{equation}
The first inequality is based on the fact $\mathbb{E}_{\mu_l}(f_{\bm{w}}) \in [-1, +1]$; the second inequality is based on Hoeffding's inequality.
Now, we are ready to bound the probability ${\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma)$.
\begin{equation} \begin{aligned} &{\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)f_{\bm{w}}({\bm{x}}_i) \geq \frac{1}{2}(1 - 3a)\gamma) \\ \leq &{\mathbb{P}}[\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)(f_{\bm{w}}({\bm{x}}_i) - \mathbb{E}_{\mu_l}(f)) \geq a_1\gamma] + {\mathbb{P}}[\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n z({\bm{x}}_i)\mathbb{E}_{\mu_l}(f_{\bm{w}}) \geq a_2\gamma] \\
\leq &2 |{\mathcal{F}}| e^{-\frac{nm}{144cL^2}a^2_1\gamma^2} + 2 e^{-\frac{n}{8}a^2_2\gamma^2} \end{aligned} \end{equation} The first inequality is based on the fact $a_1 + a_2 = \frac{1}{2}(1 - 3a)$; the second inequality is based on the Boole's inequality~\cite{boole1847mathematical}, inequality (\ref{eq:p1_nonlinear2}) and (\ref{eq:p2_nonlinear2}).
\end{proof}
To simplify the constant notation, we let $a = \frac{1}{8}$, $a_1 = \frac{3}{16}$ and $a_2 = \frac{1}{8}$. We plug this into the inequality (\ref{eq:nonlinear1}) and (\ref{eq:nonlinear2}), then:
\begin{equation} \begin{aligned}
{\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 \leq C) \leq 4 e^{-\frac{n\gamma^2}{2^9}} + 2|{\mathcal{F}}| e^{-\frac{nm\gamma^2}{2^{12}cL^2}} \end{aligned} \label{eq:lemma_summary} \end{equation}
Now we turn to the proof of Theorem~\ref{thm:nonlinear}.
\begin{proof}
We let ${\mathcal{F}}_L = \{f_{\bm{w}}| {\bm{w}} \in {\mathcal{W}}, Lip(f_{\bm{w}}) \leq L\}$, ${\mathcal{F}}_\gamma = \{f_{\bm{w}} | {\bm{w}} \in {\mathcal{W}}, {\bm{w}} = \frac{\gamma}{4 J} \odot {\bm{z}}, {\bm{z}} \in {\mathbb{Z}}^b\}$ and ${\mathcal{F}}_{\gamma, L} = {\mathcal{F}}_\gamma \cap {\mathcal{F}}_L$. Correspondingly, we let ${\mathcal{W}}_L = \{{\bm{w}}| {\bm{w}} \in {\mathcal{W}}, Lip(f_{\bm{w}}) \leq L\}$, ${\mathcal{W}}_\gamma = \{{\bm{w}} | {\bm{w}} \in {\mathcal{W}}, {\bm{w}} = \frac{\gamma}{4 J} \odot {\bm{z}}, {\bm{z}} \in {\mathbb{Z}}^b\}$ and ${\mathcal{W}}_{\gamma, L} = {\mathcal{W}}_\gamma \cap {\mathcal{W}}_L$. Because the diameter of ${\mathcal{W}}$ is $W$, we have $|{\mathcal{F}}_{\gamma, L}| \leq |{\mathcal{F}}_\gamma| \leq \left(\frac{4WJ}{\gamma}\right)^b$. Here, $\odot$ means the element-wise multiplication.
Note that the inequality (\ref{eq:lemma_summary}) is valid for any values of $C$ as long as it satisfies $\gamma \geq 0$. Based on this, we apply the substitution $\left\{\begin{aligned}C &\leftarrow C + \frac{1}{2}\gamma \\ \gamma &\leftarrow \frac{1}{2}\gamma \end{aligned}\right.$, then:
\begin{equation} \begin{aligned}
{\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}_{\gamma, L}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 \leq C + \frac{1}{2}\gamma) &\leq 4 e^{-\frac{n\gamma^2}{2^{11}}} + 2|{\mathcal{F}}| e^{-\frac{nm\gamma^2}{2^{14}cL^2}} \\ &\leq 4 e^{-\frac{n\gamma^2}{2^{11}}} + 2 e^{b\log(\frac{4WJ}{\gamma})-\frac{nm\gamma^2}{2^{14}cL^2}} \end{aligned} \end{equation}
Based on the definition of ${\mathcal{W}}_{\gamma, L}$, we can conclude that $\forall {\bm{w}}_1 \in {\mathcal{W}}_L, \exists {\bm{w}}_2 \in {\mathcal{W}}_{\gamma, L}\ s.t. \|{\bm{w}}_1 - {\bm{w}}_2\|_\infty \leq \frac{\gamma}{8J}$. Therefore, $\forall f_{{\bm{w}}_1} \in {\mathcal{F}}_L, \exists f_{{\bm{w}}_2} \in {\mathcal{F}}_{\gamma, L} s.t. \|f_{{\bm{w}}_1} - f_{{\bm{w}}_2}\|_\infty \leq \frac{\gamma}{8}$. Let choose such $f_{{\bm{w}}_2} \in {\mathcal{F}}_{\gamma, L}$ given an arbitrary $f_{{\bm{w}}_1} \in {\mathcal{F}}_L$, then:
\begin{equation} \begin{aligned} (y - f_{{\bm{w}}_1}({\bm{x}}))^2 &= (y - f_{{\bm{w}}_2}({\bm{x}}))^2 + (2y - f_{{\bm{w}}_1}({\bm{x}}) - f_{{\bm{w}}_2}({\bm{x}}))(f_{{\bm{w}}_2}({\bm{x}}) - f_{{\bm{w}}_1}({\bm{x}})) \\
&\geq (y - f_{{\bm{w}}_2}({\bm{x}}))^2 - \frac{\gamma}{8}|(2y - f_{{\bm{w}}_1}({\bm{x}}) - f_{{\bm{w}}_2}({\bm{x}}))| \\ &\geq (y - f_{{\bm{w}}_2}({\bm{x}}))^2 - \frac{\gamma}{2} \end{aligned} \label{eq:bound_net} \end{equation}
The first inequality in (\ref{eq:bound_net}) is based on H\"older's inequality; the second inequality is based on $y \in \{-1, +1\}$ and the range of $\forall f_{\bm{w}} \in {\mathcal{F}}$ is $[-1, +1]$.
We combine (\ref{eq:lemma_summary}) with (\ref{eq:bound_net}), then:
\begin{equation} \begin{aligned} {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}_{L}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 \leq C) &\leq {\mathbb{P}}(\exists f_{\bm{w}} \in {\mathcal{F}}_{\gamma, L}: \frac{1}{n} \sum_{i = 1}^n (y_i - f_{\bm{w}}({\bm{x}}'_i))^2 \leq C + \frac{1}{2}\gamma) \\ &\leq 4 e^{-\frac{n\gamma^2}{2^{11}}} + 2 e^{b\log(\frac{4WJ}{\gamma})-\frac{nm\gamma^2}{2^{14}cL^2}} \end{aligned} \label{eq:conclude} \end{equation}
Note that ${\mathcal{F}}_{L}$ is the set of functions in ${\mathcal{F}}$ whose Lipschitz constant is no larger than $L$. We set the right hand side of (\ref{eq:conclude}) to be $\delta$ and then get $L = \frac{\gamma}{2^7}\sqrt{\frac{nm}{c\left(b\log(4WJ\gamma^{-1}) - \log(\delta/2 - 2e^{-2^{-11}n\gamma^2})\right)}}$. This concludes the proof.
\end{proof}
\section{Experimental Settings} \label{sec:app_exp_settings}
\subsection{General Settings} \label{sec:app_exp_settings_general}
The ResNet-18 (RN18) architecture is same as the one in~\cite{wong2020fast}; the WideResNet-34 (WRN34) architecture is same as the one in~\cite{madry2017towards}. Unless specified, the $l_\infty$ adversarial budget used for CIFAR10 dataset~\cite{krizhevsky2009learning}~\footnote{Data available for download on \href{https://www.cs.toronto.edu/~kriz/cifar.html}{https://www.cs.toronto.edu/~kriz/cifar.html}. MIT license. Free to use.} is $8 / 255$ and for SVHN dataset~\cite{netzer2011reading} \footnote{Data available for download on \href{http://ufldl.stanford.edu/housenumbers/}{http://ufldl.stanford.edu/housenumbers/}. Free for non-commercial use.} is $0.02$. In PGD adversarial training, the step size is $2 / 255$ for CIFAR10 and $0.005$ for SVHN; PGD is run for $10$ iterations for both datasets. For adversarial attacks using a different adversarial budget, the step size is always $1/4$ of the adversarial budget's size, and we always run it for $10$ iterations. To comprehensively and reliably evaluate the robustness of the model, we use AutoAttack~\cite{croce2020reliable}, which is an ensemble of $4$ different attacks: AutoPGD on cross entropy, AutoPGD on difference of logits ratio, fast adaptive boundary (FAB) attack~\cite{croce2020minimally} and square attack~\cite{andriushchenko2020square}. Unless specified, we use stochastic gradient descent (SGD) with a momentum to optimize the model parameters, we also use weight decay whose factor is $0.0005$. Unless specified, the momentum factor is $0.9$, the learning rate starts with $0.1$ and is divided by $10$ in the $1 / 2$ and $3 / 4$ of the whole training duration. The size of the mini-batch is always $128$.
We run the experiments on a machine with 4 NVIDIA TITAN XP GPUs. It takes about $6$ hours to adversarially train a RN18 model for $200$ epochs, and a whole day to adversarially train a WRN34 model for $200$ epochs.
\subsection{Settings of Experiments in Section~\ref{sec:casestudy}} \label{subsec:mitigate_setting}
\begin{algorithm} \begin{algorithmic} \STATE \textbf{Input:} training data ${\mathcal{D}}$, model $f$, batch size $B$, PGD step size $\alpha$, adversarial budget ${\mathcal{S}}^{(p)}(\epsilon)$, coefficient $\rho$, $\beta$. \FOR {Sample a mini-batch $\{{\bm{x}}_i, y_i\}_{i = 1}^B \sim {\mathcal{D}}$}
\STATE $\forall i$, obtain the initial perturbation $\Delta_i$ as in~\cite{zheng2020efficient}.
\STATE $\forall i$, one step PGD update: $\Delta_i \leftarrow \Pi_{{\mathcal{S}}^{(p)}(\epsilon)}\left[\Delta_i + \alpha sign(\triangledown_{\Delta_i} {\mathcal{L}}_\theta({\bm{x}}_i + \Delta_i, y_i)\right])$.
\STATE $\forall i$, update the cached adversarial perturbation $\Delta_i$ as in~\cite{zheng2020efficient}.
\IF {use reweight}
\STATE $\forall i$, weight $w_i = softmax[f({\bm{x}}_i + \Delta_i)]_{y_i}$
\ELSE
\STATE $\forall i$, weight $w_i = 1$
\ENDIF
\STATE $\forall i$, query the adaptive target $\tilde{{\bm{t}}}_i$ and update: $\tilde{{\bm{t}}}_i \leftarrow \rho \tilde{{\bm{t}}}_i + (1 - \rho) softmax[f({\bm{x}}_i + \Delta_i)]$.
\STATE $\forall i$, the final adaptive target ${\bm{t}}_i = \beta \mathbf{1}_{y_i} + (1 - \beta) \tilde{{\bm{t}}_i}$
\STATE Calculate the loss $\frac{1}{\sum_i^B w_i} \sum_i^B w_i {\mathcal{L}}_\theta({\bm{x}}_i + \Delta_i, {\bm{t}}_i)$ and update the parameters. \ENDFOR \end{algorithmic} \caption{One epoch of the accelerated adversarial training we use in Section~\ref{subsec:fast}.} \label{alg:fast} \end{algorithm}
\textbf{Fast Adversarial Training} Our experiment on fast adversarial training is on CIFAR10 and $\epsilon = 8 / 255$. The pseudocode of our method is demonstrated as Algorithm~\ref{alg:fast}. We use ATTA~\cite{zheng2020efficient} to initialize the perturbation of each training instance. The step size $\alpha$ of the perturbation update is $4 / 255$, same as~\cite{zheng2020efficient}. The average coefficient $\rho$ and $\beta$ is $0.9$ and $0.1$ unless explicitly stated. Our learning rate scheduler also follows~\cite{zheng2020efficient}: we train the model for $38$ epochs, the learning rate is $0.1$ on the first $30$ epochs, it decays to $0.01$ in the next $6$ epochs and further decays to $0.001$ in the last $2$ epochs. When we use adaptive targets, the first $5$ epochs are the warmup period in which we use fixed targets. Since the goal here is to accelerate adversarial training, we do not use a validation set to do model selection as in~\cite{rice2020overfitting}. We use the standard data augmentation on CIFAR10: random crop and random horizontal flip.
\textbf{Adversarial Finetuning with Additional Data} For CIFAR10, we use 500000 images from 80 Million Tiny Images dataset~\cite{torralba200880} with pseudo labels in~\cite{carmon2019unlabeled} \footnote{Data available for download on \href{https://github.com/yguooo/semisup-adv}{https://github.com/yguooo/semisup-adv}. MIT license. Free to use.}. For SVHN, we use the extra held-out set provided by SVHN itself, which contains 531131 somewhat less difficult samples. When we construct a mini-batch, half of its instances are sampled from the original training set and the other half are sampled from the additional data. The learning rate in the fine-tuning phase is always $10^{-3}$ and is always fixed. Since we only fine-tune the model for only $1$ or $5$ epochs, we do not use a validation set for model selection.
\section{Additional Experiments and Discussion} \label{sec:app_exp}
\subsection{Properties of the Difficulty Metric} \label{subsec:d_function}
To study the factors affecting the difficulty function defined in~(\ref{eq:difficulty}), let us denote by $d_1$, $d_2$ the difficulty functions obtained under two different training settings, such as different network architectures or adversarial attack strategies. We then define the difficulty distance (\textit{D-distance}) between two such functions in the following equation. In this regard, the expected D-distance between two random difficulty functions is $0.375$. \begin{equation} \begin{aligned}
D(d_1, d_2) = \mathbb{E}_{{\bm{x}} \sim U({\mathcal{D}})} |d_1({\bm{x}}) - d_2({\bm{x}})|\;. \end{aligned} \label{eq:d_distance} \end{equation}
We then study the properties of the difficulty metric of Equation (\ref{eq:difficulty}) by performing experiments on the CIFAR10 and CIFAR10-C~\cite{hendrycks2018benchmarking} dataset, varying factors of interest. In particular, we first study the influence of the network by using either a RN18 model, trained for either 100 or 200 epochs (RN18-100 or RN18-200), or a WRN34 model trained for 200 epochs (WRN34). To generate adversarial attacks, we make use of PGD with an adversarial budget based on the $l_\infty$ norm with $\epsilon = 8 / 255$. This corresponds to the settings used in other works~\cite{hendrycks2018benchmarking, madry2017towards}. The other hyper-parameters follow the general settings in Appendix~\ref{sec:app_exp_settings} In the left part of Table~\ref{tbl:cmp}, we report the D-distance for all pairs of settings. Each result is averaged over $4$ runs, and the variances are all below $0.012$. The D-distances in all scenarios are all very small and close to $0$, indicating the architecture and the training duration have little influence on instance difficulty based on our definition.
\begin{table}[!htb] \begin{minipage}{.45\linewidth} \centering \begin{tabular}{lccc} \Xhline{4\arrayrulewidth} $d_1 \backslash d_2$ & RN18-100 & RN18-200 & WRN34 \\ \hline RN18-100 & $0.0189$ & $0.0232$ & $0.0355$ \\ RN18-200 & $0.0232$ & $0.0159$ & $0.0299$ \\ WRN34 & $0.0355$ & $0.0299$ & $0.0178$ \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{.45\linewidth} \centering \begin{tabular}{lccc} \Xhline{4\arrayrulewidth} $d_1 \backslash d_2$ & Clean & FGSM & PGD \\ \hline Clean & $0.0189$ & $0.0607$ & $0.1713$ \\ FGSM & $0.0607$ & $0.0843$ & $0.1677$ \\ PGD & $0.1713$ & $0.1677$ & $0.0857$ \\ \Xhline{4\arrayrulewidth} \end{tabular} \end{minipage} \caption{D-distances between difficulty functions in different settings, including different model architectures and training duration (left table), and different types of perturbations (right table).} \label{tbl:cmp} \end{table}
We then perform experiments by varying the attack strategy using a RN18 network. As shown by the D-distances reported in the right portion of Table~\ref{tbl:cmp}, the discrepancy between values obtained with clean, FGSM-perturbed and PGD-perturbed inputs is much larger, thus indicating that our difficulty function correctly reflects the influence of an attack on an instance. In addition, Table~\ref{tbl:cmp_common_corr} demonstrates the D-distance between the difficulty functions based on clean instances, FGSM-perturbed instance, PGD-perturbed instances and different common corruptions from CIFAR10-C~\cite{hendrycks2018benchmarking}\footnote{Data available for download on \href{https://github.com/hendrycks/robustness}{https://github.com/hendrycks/robustness}. Apache License 2.0. Free to use.}. Note that~\cite{hendrycks2018benchmarking} only provides corrupted instances on the test set, so we train models on the clean training set and test model on corrupted test set in these cases. We use RN18 architecture and train it for 100 epochs in all cases, results are reported on the test set. Compared with the results in the left half of Table~\ref{tbl:cmp}, the D-distance is much larger here. This indicates the difficulty function depends on the perturbation type applied to the input, including the common corruptions.
The results in Table~\ref{tbl:cmp} and~\ref{tbl:cmp_common_corr} demonstrate that our difficulty metric mainly depends on the data and on the perturbation type; not the model architecture or the training duration.
\begin{table}[ht] \small \centering \begin{tabular}{p{1.2cm}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.65cm}<{\centering}p{1.65cm}<{\centering}} \Xhline{4\arrayrulewidth} $d_1 \backslash d_2$ & brightness & contrast & defocus & elastic & fog & gaussian\_blur \\ \hline Clean & $0.1279$ & $0.3219$ & $0.2646$ & $0.2115$ & $0.2324$ & $0.3069$ \\ FGSM & $0.1303$ & $0.3128$ & $0.2642$ & $0.2098$ & $0.2289$ & $0.3064$ \\ PGD & $0.1873$ & $0.3082$ & $0.2616$ & $0.2319$ & $0.2414$ & $0.2959$ \\ \Xhline{4\arrayrulewidth} \\ \Xhline{4\arrayrulewidth} $d_1 \backslash d_2$ & glass\_blur & jpeg & motion\_blur & pixelate & gaussian\_noise & impulse\_noise \\ \hline Clean & $0.2809$ & $0.1838$ & $0.2520$ & $0.2365$ & $0.2999$ & $0.2869$ \\ FGSM & $0.2760$ & $0.1853$ & $0.2520$ & $0.2417$ & $0.2918$ & $0.2807$ \\ PGD & $0.2825$ & $0.2026$ & $0.2605$ & $0.2551$ & $0.2980$ & $0.2866$ \\ \Xhline{4\arrayrulewidth} \\ \Xhline{4\arrayrulewidth} $d_1 \backslash d_2$ & saturate & shot\_noise & snow & spatter & zoom\_blur & speckle\_noise \\ \hline Clean & $0.1335$ & $0.2832$ & $0.2033$ & $0.1930$ & $0.2654$ & $0.2829$ \\ FGSM & $0.1329$ & $0.2754$ & $0.2003$ & $0.1946$ & $0.2657$ & $0.2759$ \\ PGD & $0.1932$ & $0.2841$ & $0.2148$ & $0.2297$ & $0.2711$ & $0.2901$ \\ \Xhline{4\arrayrulewidth} \end{tabular} \caption{D-distances between difficulty functions of vanilla / FGSM / PGD training and training based on 18 different corruptions on CIFAR10-C. We run each experiment for $4$ times and report the average value.} \label{tbl:cmp_common_corr} \end{table}
In the definition of our difficulty metric in Equation (\ref{eq:difficulty}), the difficulty of one instance is based on its average loss values during the training procedure. It is intuitive, because the values of the loss objective represents the cost that model needs to fit the corresponding data point. The bigger this cost is, the more difficulty this instance will be. To make the metric stable and prevent the metric from being sensitive to the stochasticity in the training dynamics, we use the average value of the loss objective for each instance to define its difficulty. In addition to the average loss objectives, we can also use the average 0-1 error to define the difficulty function. In Figure~\ref{fig:acc_loss_compare}, we plot the relationship between the difficulty metric based on the average loss values and the one based on the average 0-1 error for instances in the CIFAR10 training set when we train a RN18-100 model and a WRN34 model. We can see a strong correlation between them for both models. The correlation of the difficulty measured by two metrics for the same instance is $0.9466$ in the RN18-100 case and $0.9545$ in the WRN34 case. The high correlation indicates we can use either metric to measure the difficulty. Since the loss objective values are continuous and finer-grained, we choose it as the basis of the difficulty function we use in this paper.
\begin{figure}
\caption{The relationship between the difficulty function based on the average loss values and the one based on the average 0-1 errors. The left figure is based on the RN18-200 model; the right figure is based on the WRN34 model. The correlation between these two metrics are $0.9466$ (left) and $0.9545$ (right), respectively.}
\label{fig:acc_loss_compare}
\end{figure}
\subsection{Examples of Easy and Hard Instances of CIFAR10 and SVHN} \label{sec:example}
Figure~\ref{fig:example_cifar10} and~\ref{fig:example_svhn} demonstrate the easiest and the hardest examples of each class from CIFAR10 and SVHN, respectively. The difficulty of these instances is calculated based on PGD attacks. We can see most easy examples are visually consistent, while most hard examples are ambiguous or even incorrectly labeled.
\subsection{Training on a Subset} \label{sec:app_exp_subset}
\textbf{Different Optimization Method on Hard Instances} We find the failure of PGD adversarial training on the hardest 10000 instances of CIFAR10 training set does not arise from the optimizers. In Figure~\ref{fig:overfit_hardoptim}, we use SGD with different initial learning rates (``SGD, lr=1e-2'' and ``SGD, lr=1e-3'') and the adaptive optimizer such as Adam~\cite{kingma2014adam} (``Adam, lr=1e-4''). The learning rate of SGD optimizer decrease to its $1/10$ in the 100th and 150th epoch, while the learning rate of Adam optimizer is fixed during training. Although optimizers like Adam can make the model fit the training subset better, none of these methods can make the robust test accuracy significantly above the chance of random guesses, i.e., $10\%$.
\textbf{Longer Training Duration} In Figures~\ref{fig:hard10k_600epoch} and~\ref{fig:full_600epoch}, we conduct adversarial training for a longer duration until the loss on the hard training instances converges. Specifically, the model is trained for $600$ epochs, with a learning rate is initialized to $0.1$ and divided by 10 after every $100$ epochs. In Figure~\ref{fig:hard10k_600epoch}, we adversarially train a RN18 model on the hardest $10000$ training instances. Our conclusions from Section~\ref{subsec:subset} still hold: Adversarial training on the hard instances leads to much more severe overfitting, greatly widening the generalization gap. In Figure~\ref{fig:full_600epoch}, we adversarially train a RN18 model on the whole training set and calculate the average loss on the groups ${\mathcal{G}}_0$, ${\mathcal{G}}_3$, ${\mathcal{G}}_6$ and ${\mathcal{G}}_9$. Similarly to training for $200$ epochs, the model first fits the easy training instances and then the hard ones. This can be seen by the fact that the average loss of ${\mathcal{G}}_9$ decreases much faster in the beginning and quickly saturates. In other words, the harder the group, the later we see a significant decrease in its average loss value. This observation is also consistent with our findings in Section~\ref{subsec:hardoverfit}.
\textbf{Results on SVHN dataset} Figure~\ref{fig:overfit_svhn} demonstrates the learning curves of PGD adversarial training based on a subset of the easiest, the random and the hardest instances of SVHN dataset. We let the size of each subset be $20000$, because the training set of SVHN is larger than that of CIFAR10. The model architecture is RN18 in these cases. We have the same observations here: training on the hardest subset yields trivial performance, training on the random subset has significant generalization decay in the late phase of training while training on the easiest subset does not.
\textbf{Different Values of $\epsilon$ and $l_2$-based Adversarial Budget} Figure~\ref{fig:overfit_adv_budget} and Figure~\ref{fig:overfit_l2_adv_budget} demonstrate the learning curves of RN18 models under different adversarial budgets on CIFAR10, in both $l_\infty$ and $l_2$ cases. In $l_\infty$ cases, the adversarial budgets are $2 / 255$, $4 / 255$ and $6 / 255$; in $l_2$ cases, the adversarial budgets are $0.5$, $0.75$ and $1$. With the increase in the size of the adversarial budget, we can see a clear transition from the vanilla training: more and more severe generalization decay when training on the random or the hardest subset.
\textbf{Training with Different Amount of Data} In Figure~\ref{fig:easy_compare}, we compare the learning curves of PGD adversarial training on increasing more training data, with the easiest ones coming first. If we do model selection on a validation set as in~\cite{rice2020overfitting}, the selected models are still better on both CIFAR10 and SVHN when they are trained with more data, although the final models in these cases are not necessarily better. The results indicate the hard instances are still useful to improve the model's performance, but we need to utilize them in a different way.
\begin{figure}
\caption{Learning curves of training on PGD-perturbed inputs against different sizes of $l_\infty$ norm based adversarial budgets using the easiest, the random and the hardest 10000 training instances. The instance difficulty is determined by the corresponding adversarial budget and is thus different under different adversarial budgets. The dashed lines are robust training error on the selected training set, the solid lines are robust test error on the entire test set.}
\label{fig:overfit_adv_budget}
\end{figure}
\begin{minipage}{\linewidth} \centering \begin{minipage}{0.45\linewidth} \begin{figure}
\caption{Learning curves of PGD adversarially trained models on the hardest $10000$ instances in the CIFAR10 training set by different optimizers. The dashed lines are robust training error on the selected training instances, the solid lines are robust test error on the entire test set.}
\label{fig:overfit_hardoptim}
\end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.45\linewidth} \begin{figure}
\caption{Learning curves obtained by training using the easiest, the random and the hardest 20000 instances of the SVHN training set. The training error (dashed lines) is the robust error on the selected instances, and the robust test error (solid lines) is always the error on the entire test set.}
\label{fig:overfit_svhn}
\end{figure} \end{minipage} \end{minipage}
\begin{minipage}{\linewidth} \centering \begin{minipage}{0.45\linewidth} \begin{figure}
\caption{Learning curves of PGD adversarial training on the hardest $10000$ training instances. The model is trained for 600 epochs. The training error (dashed lines) is the robust error on the selected instances, and the robust test error (solid lines) is always the error on the entire test set.}
\label{fig:hard10k_600epoch}
\end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.45\linewidth} \begin{figure}\label{fig:full_600epoch}
\end{figure} \end{minipage} \end{minipage}
\begin{figure}
\caption{Learning curves of training on PGD-perturbed inputs against different size of $l_2$ norm based adversarial budgets using the easiest, the random and the hardest 10000 training instances. The instance difficulty is determined by the corresponding adversarial budget and is thus different under different adversarial budgets. The dashed lines are robust training error on the selected training set, the solid lines are robust test error on the entire test set.}
\label{fig:overfit_l2_adv_budget}
\end{figure}
\begin{figure}
\caption{Learning curves of PGD adversarial training using increasing more training data in CIFAR10 and SVHN. The dashed lines represent the robust training error on the selected training instances; the solid lines represent the robust test error on the entire test set. Left: we use the easiest 10000, 20000, 30000, 40000 and the whole training set of CIFAR10. Right: we use the easiest 20000, 30000, 40000 and the whole training set of SVHN.}
\label{fig:easy_compare}
\end{figure}
\subsection{Revisiting Existing Methods Mitigating Adversarial Overfitting} \label{subsec:app_revisit}
Existing methods mitigating adversarial overfitting can be generally divided into two categories: one is to use adaptive inputs, such as~\cite{balaji2019instance}; the other is to use adaptive targets, such as~\cite{chen2021robust, huang2020self}. Both categories aim to prevent the model from fitting hard input-target pairs. In this section, we pick one example from each category for investigation. We provide the learning curves of the methods we study in Figure~\ref{fig:case_learncurve}. We use the same hyper-parameters as in these methods' original paper, except for the training duration and learning rate scheduler, which follow our settings. These methods clearly mitigate adversarial overfitting: The robust test error does not increase much in the late phase of training, and the generalization gap is much smaller that that of PGD adversarial training.
\begin{figure}
\caption{Learning curves of PGD adversarial training (PGD AT), instance-adaptive training (IAT) and self-adaptive training (SAT). Dashed lines and solid lines represent the robust training error and the robust test error, respectively.}
\label{fig:case_learncurve}
\end{figure}
\textbf{Instance-Adaptive Training} Using an instance-adaptive adversarial budget has been shown to mitigate adversarial overfitting and yield a better trade-off between the clean and robust accuracy~\cite{balaji2019instance}. In instance-adaptive adversarial training (IAT), each training instance ${\bm{x}}_i$ maintains its own adversarial budget's size $\epsilon_i$ during training. In each epoch, $\epsilon_i$ increases to $\epsilon_i + \epsilon_\Delta$ if the instance is robust under this enlarged adversarial budget. By contrast, $\epsilon_i$ decreases to $\epsilon_i - \epsilon_\Delta$ if the instance is not robust under the original adversarial budget. Here, $\epsilon_{\Delta}$ is the step size of the adjustment.
We use the same settings as in~\cite{balaji2019instance} except that we use the same number of training epochs and learning rate scheduling as the one in other experiments for fair comparison. Specially, we set the value of $\epsilon$ and $\epsilon_{\Delta}$ to be $8 / 255$ and $1.9 / 255$, respectively, same as in~\cite{balaji2019instance}. The first $5$ epochs are warmup, when we use vanilla adversarial training~\cite{madry2017towards}.
Figure~\ref{fig:epsilon_final} demonstrate the relationship between the instancewise adversarial budget $\epsilon_i$ and the corresponding instance's difficulty $d({\bm{x}}_i)$. It is obvious that they are highly correlated: the correlation is $0.844$. Therefore, instance-adaptive training adaptively uses smaller adversarial budgets for hard training instances, which prevents the model from fitting hard input-target pairs.
\begin{figure}
\caption{Average weights of different groups in the training set of CIFAR10 during training. During the warmup period (first $90$ epochs), the weight for every training instance is $1$. The model architecture is RN18.}
\label{fig:epsilon_final}
\label{fig:casestudy_movetarget_weight}
\end{figure}
\textbf{Self-Adaptive Training} Self-adaptive training (SAT)~\cite{huang2020self} solves the adversarial overfitting issue by adapting the target. By contrast with common practice consisting of using a fixed target, usually the ground-truth, SAT adapts the target of each instance to the model's output. Specifically, after a warm-up period, the target ${\bm{t}}_i$ for an instance ${\bm{x}}_i$ is initialized as a one-hot vector by its ground-truth label $y_i$ and updated in an iterative manner after each epoch as ${\bm{t}}_i \leftarrow \rho {\bm{t}}_i + (1 - \rho) {\bm{o}}_i$. Here, $\rho$ is a predefined momentum factor and ${\bm{o}}_i$ is the output probability of the current model on the corresponding clean instance. SAT uses the loss of TRADES~\cite{zhang2019theoretically} but replaces the ground-truth label $y$ with the adaptive target ${\bm{t}}_i$: ${\mathcal{L}}_{SAT}({\bm{x}}_i) = {\mathcal{L}}({\bm{x}}_i, {\bm{t}}_i) + \lambda \max_{\Delta_i \in {\mathcal{S}}(\epsilon)} KL({\bm{o}}_i || {\bm{o}}'_i)$, where $KL$ refers to the Kullback–Leibler divergence and $\lambda$ is the weight for the regularizer. Furthermore, SAT uses a weighted average to calculate the loss of a mini-batch; the weight assigned to each instance ${\bm{x}}_i$ is proportional to the maximum element of its target ${\bm{t}}_i$ but normalized to ensure that all instances' weights sum up to $1$. By weighted averaging, the instances with confident predictions are strengthened, whereas the ambiguous instances are downplayed.
Similarly, we use the same settings as in~\cite{huang2020self} except we use the same number of training epochs and learning rate scheduling: we train the model for $200$ epochs and the first $90$ epochs are the warmup period.
Figure~\ref{fig:casestudy_movetarget_weight} demonstrates the average weight assigned to instances belonging to the group ${\mathcal{G}}_0$, ${\mathcal{G}}_3$, ${\mathcal{G}}_6$ and ${\mathcal{G}}_9$. It is clear that the hard instances are assigned much lower weights than the easy instances. For example, the weight assigned to ${\mathcal{G}}_0$, the easiest $10\%$ training instances, is close to $1$, while the weight assigned to ${\mathcal{G}}_9$, the hardest $10\%$ training instances, is only around $0.4$.
Furthermore, Figure~\ref{fig:casestudy_movetarget_acc} shows the accuracy of the group ${\mathcal{G}}_0$, ${\mathcal{G}}_3$, ${\mathcal{G}}_6$ and ${\mathcal{G}}_9$ during training using the original ground-truth label $\mathbf{1}_y$ and the adaptive target ${\bm{t}}$, respectively. For the adaptive target, one adversarial instance ${\bm{x}}'$ is considered to be correctly classified if and only if $\argmax_i f_{\bm{w}}({\bm{x}}')_i = \argmax_i {\bm{t}}_i$. For easy instances, $\mathbf{1}_y$ is mostly close to ${\bm{t}}$, so accuracy in both cases is high and the gap between them is small. For hard instances, $\mathbf{1}_y$ is usually not consistent with ${\bm{t}}$, while accuracy under the adaptive target ${\bm{t}}$ is much higher than the group-truth label $y$. This indicates self-adaptive training makes adaptive target easier to fit for the originally hard instances.
\begin{figure}
\caption{Robust training accuracy during training when we use the original groundtruth label (left) or use the adaptive target calculated during training (right).}
\label{fig:casestudy_movetarget_acc}
\end{figure}
\subsection{Extra Results and Discussion on Fast Adversarial Training} \label{subsec:app_fastadv}
We also conduct ablation study in the context of fast adversarial training. In Figure~\ref{fig:beta}, we change the value of $\beta$ in our algorithm (pseudocode in Algorithm~\ref{alg:fast}) and plot the learning curves. Lower the value of $\beta$ is, more weights assigned to the adaptive part of the
target: $\beta = 0$ means we directly utilize the moving average target as the final target, $\beta = 1$ means we use the one-hot groundtruth label. Figure~\ref{fig:beta} clearly shows us the generalization gap decreases with the decrease in $\beta$. That is to say, the adaptive target can indeed improve the generalization performance.
Figure~\ref{fig:rw} compare the learning curves of ATTA~\cite{zheng2020efficient} with and without reweighting. The first $5$ epochs are the warmup period. The results confirm that the reweighting scheme can prevent adversarial overfitting and decrease the generalization gap.
\begin{figure}
\caption{The learning curves with different values of $\beta$. The solid curve and the dashed curve represent the robust test error and the robust training error, respectively.}
\label{fig:beta}
\caption{The learning curves of ATTA with and without reweighting. The solid curve and the dashed curve represent the robust test error and the robust training error, respectively.}
\label{fig:rw}
\end{figure}
To confirm that the algorithm we use is consistent with our theoretical and empirical analysis in Section~\ref{sec:overfit} and~\ref{sec:thm}, we study the relationship between the instance difficulty and the weight assigned to them when using reweighting, as well as the soft target when using adaptive targets. Since the evaluation of model robustness is based on the PGD attack, the difficulty value here is based on the PGD perturbation. In Figure~\ref{fig:per-rw}, we demonstrate the relationship between the difficulty value and the average assigned weight for each instance when using reweighting. We calculate the correlation between these two values on the training set, it is $0.8900$. This indicates we indeed assign smaller weights for hard training instances and assign bigger weights for easy training instances. In Figure~\ref{fig:per-ada}, we show the relationship between the difficulty value and the average value of the true label's probability in the soft target when we use the adaptive targets. Similarly, we calculate the correlation between these two values on the training set, it is $0.9604$. This indicates the adaptive target is similar to the ground-truth one-hot target for the easy training instances, while the adaptive target is very different from the ground-truth one-hot target for the hard training instances. This means, adaptive targets prevent the model from fitting hard training instances while encourage the model to fit the easy training instances.
\begin{figure}
\caption{The relationship between the difficulty value and the weight assigned to each instances when using reweighting. We use the average weight across epochs. The correlation between them is $0.8900$.}
\label{fig:per-rw}
\caption{The relationship between the difficulty value and the average value of the true label's probability when using the adaptive targets. The correlation between them is $0.9604$.}
\label{fig:per-ada}
\end{figure}
\subsection{Extra Results and Discussion on Adversarial Finetuning} \label{subsec:app_finetune}
We conduct ablation study and the results are demonstrated in Table~\ref{tbl:ablation_finetune}. It is clear that both reweighting and the KL regularization term benefit the performance of the finetuned model.
\begin{table}[!ht] \centering \begin{tabular}{p{1.1cm}<{\centering}p{2.1cm}p{1.5cm}<{\centering}:p{1.1cm}<{\centering}p{2.1cm}p{1.5cm}<{\centering}} \Xhline{4\arrayrulewidth} Duration & Method & AutoAttack & Duration & Method & AutoAttack \\ \Xhline{4\arrayrulewidth} \multicolumn{3}{c:}{\textbf{WRN34 on CIFAR10, $\epsilon = 8 /255$}} & \multicolumn{3}{c}{\textbf{RN18 on SVHN, $\epsilon = 0.02$}} \\ \hline \multicolumn{2}{l}{No Fine Tuning} & 52.01 & \multicolumn{2}{l}{No Fine Tuning} & 67.77 \\ \hdashline \multirow{4}{*}{1 Epoch} & Vanilla AT & 54.11 & \multirow{4}{*}{1 Epoch} & Vanilla AT & 70.81 \\
& RW & 54.69 & & RW & 70.83 \\
& KL & 54.73 & & KL & 72.29 \\
& RW + KL & 54.69 & & RW + KL & 72.53 \\ \hdashline \multirow{4}{*}{5 Epoch} & Vanilla AT & 55.49 & \multirow{4}{*}{5 Epoch} & Vanilla AT & 72.18 \\
& RW & 56.41 & & RW & 72.72 \\
& KL & 56.55 & & KL & 73.17 \\
& RW + KL & 56.99 & & RW + KL & 73.35 \\ \Xhline{4\arrayrulewidth} \end{tabular} \caption{Ablation study on the influence of reweighting (RW) and the KL regularization term (KL) in the performance of adversarial finetuning with additional data.} \label{tbl:ablation_finetune} \end{table}
\begin{figure}
\caption{Easy and hard examples in each category of CIFAR10 dataset. In each subfigure, odd columns present the original images, and even columns present the PGD-perturbed images. Above each image, we provide the normalized difficulty defined in Equation (\ref{eq:difficulty}) as well as the labels: true labels for the original images and the predicted labels for the perturbed images.}
\label{fig:example_cifar10}
\end{figure}
\begin{figure}
\caption{Easy and hard examples in each category of SVHN dataset. In each subfigure, odd columns present the original images, and even columns present the PGD-perturbed images. Above each image, we provide the normalized difficulty defined in Equation (\ref{eq:difficulty}) as well as the labels: true labels for the original images and the predicted labels for the perturbed images.}
\label{fig:example_svhn}
\end{figure}
\end{appendices}
\end{document} | arXiv |
Edwin Bidwell Wilson
Edwin Bidwell Wilson (April 25, 1879 – December 28, 1964) was an American mathematician, statistician, physicist and general polymath.[1] He was the sole protégé of Yale University physicist Josiah Willard Gibbs and was mentor to MIT economist Paul Samuelson.[2] Wilson had a distinguished academic career at Yale and MIT, followed by a long and distinguished period of service as a civilian employee of the US Navy in the Office of Naval Research. In his latter role, he was awarded the Distinguished Civilian Service Award, the highest honorary award available to a civilian employee of the US Navy. Wilson made broad contributions to mathematics, statistics and aeronautics, and is well-known for producing a number of widely used textbooks. He is perhaps best known for his derivation of the eponymously named Wilson score interval, which is a confidence interval used widely in statistics.
Edwin Bidwell Wilson
Born(1879-04-25)April 25, 1879
Hartford, Connecticut
DiedDecember 28, 1964(1964-12-28) (aged 85)
Brookline, Massachusetts
NationalityAmerican
Alma materYale University
Harvard College
Known forWilson score interval
AwardsDistinguished Civilian Service Award (US Navy, 1960)
Superior Civilian Service Award (US Navy, 1964)
Lewis Award (American Philosophical Society, 1963)
Scientific career
FieldsMathematics
Statistics
Aeronautics
InstitutionsMassachusetts Institute of Technology
Doctoral advisorJosiah Willard Gibbs
Doctoral studentsJane Worcester
Life
Edwin Bidwell Wilson was born in Hartford, Connecticut, to Edwin Horace Wilson (a teacher and superintendent of schools in Middletown, Connecticut) and Jane Amelia (Bidwell) Wilson.[3] He had two sisters and two brothers; he and his siblings all went on to achieve high levels of education and professional success.[4] Although born in Hartford, Wilson grew up in Middleton, and for a period he attended a private school that had been set up by his father, where he was substantially younger than the other students.[5] Wilson performed at a high level academically from a young age. He recounts that (according to his mother) he taught himself arithmetic at the age of four using his mother's sixty-inch tape measure; he learned multiplication by folding a tape measure into equal-length increments and then counting the number of folded parts.[6] At the age of fifteen, Wilson sat and passed the entrance examination for Yale University (his father's alma mater), but his father would not allow him to attend at this age, as he considered him too young; he waited until he was sixteen and then attended Harvard College, being admitted on the basis of his entrance examination at Yale.[7]
Wilson attended Harvard College as an undergraduate, receiving his AB summa cum laude in 1899. He then attended Yale University for his PhD, graduating in 1901.[8] He also studied mathematics from 1902-1903 in Paris, primarily at the École normale supérieure, before returning to teach at Yale.[9] At Yale, Wilson worked under the supervision of Josiah Willard Gibbs and compiled an important textbook on vector analysis from Gibbs' lecture notes. Gibbs died when Wilson had just turned twenty-four, but he exerted a strong influence on Wilson through his early supervision and through Wilson's experience compiling Gibbs' notes. Wilson became an Assistant Professor of Mathematics at Yale in 1906, then Associate Professor of Mathematics at Massachusetts Institute of Technology (MIT) in 1907, then Professor of Mathematics in 1911, then Head of the Department of Physics in 1917, and then Professor of Vital Statistics at the Harvard School of Public Health in 1922.[10]
During World War I, Wilson gave a course in aeronautical engineering to US Army and Navy officers at MIT.[11] Wilson retired from academic work in 1945 and worked as a consultant at the Office of Naval Research until his death in 1964.[12] For his service to the US Navy during and after the war, Wilson was awarded the Superior Civilian Service Award in 1960 and the Distinguished Civilian Service Award in 1964.[13] The latter award is the highest honorary award available for a civilian employee of the US Navy. Wilson was also awarded an honorary LLD degree form Wesleyan University in 1955.[14] Wilson had a broad range of interests and skills, and he served in a number of distinguished roles in national academies for the arts and sciences. He was a member of the National Academy of Sciences and served as its Vice-President from 1949-1953; he was a Fellow of the Royal Statistical Society and the American Statistical Association, serving as President of the latter in 1929;[15] he was a member of the American Academy of Arts and Sciences, serving as President from 1927-1931; and he was a member of the American Philosophical Society.[16] Wilson won the John Frederick Lewis Award from the American Philosophical Society in 1963.[17]
Wilson married Ethel Sentner on 5 July 1911 and they had two daughters, Doris and Enid.[18] Doris Wilson graduated from McGill University in 1946 and became an analytical chemist working at the Peter Bent Brigham Hospital and Harvard School of Public Health.[19] Enid Wilson graduated from Brown University and Simmons College Library School and worked as a cataloguer at the University of Rhode Island and Boston University, also serving as a secretary in the Wellesley Historical Society.[20] Ethel Wilson (Edwin's wife) died in 1957 and he died seven years later on 28 December 1964. His daughters survived him by almost fifty years, and both died within a month of each other in 2014. Wilson and his wife and daughters are buried at Mount Auburn Cemetery in Cambridge, Massachusetts.[21]
Academic works and legacy
Wilson published scholarly papers on a wide range of topics in mathematics, statistics, physics, and economics, but most of his work was in mathematics. During his career, Wilson wrote three widely used textbooks. At the age of twenty-two he compiled the textbook Vector Analysis based on the lectures of his doctoral advisor Josiah Willard Gibbs, as Gibbs was at the time busy preparing his book on thermodynamics.[22] This textbook was widely used by mathematicians and physicists and had a lasting effect on notation in the field.[23] Wilson gave a plenary address at the International Congress of Mathematicians in 1904 in Heidelberg[24] where he summarised some further unpublished work by Gibbs (which he later published).
Later in his career, Wilson published the textbook Advanced Calculus based on his own lecture materials, and the textbook Aeronautics based on his lectures to US Army and Navy students in the first world war, as well as special tutoring sessions held with his students at MIT.[25] In Wilson (1927) harvtxt error: no target: CITEREFWilson1927 (help) he introduced the Wilson score interval, a binomial proportion confidence interval, and also derived the "plus four rule", which uses a pseudocount of two (add two to both your count of successes and failures, so four total) for estimating the probability of a Bernoulli variable with a confidence interval of two standard deviations in each direction (approximately 95% coverage).[26]
Wilson published a substantial number of papers on geometry, statistics, biostatistics, and other areas. He also conducted a number of reviews of scientific theories and works, and he was known to be critical of aspects of the works of Hilbert and Einstein.[27] In 1904 Wilson published a review of Bertrand Russell's works The Principles of Mathematics and An Essay on the Foundations of Geometry where he highlighted the strong role of Peano in shaping the foundations of mathematics. At the time Peano's works were not well-known in the US and so this review helped to establish interest in Peano's work.
Selected works
• 1901: Vector Analysis: A Text-book for the Use of Students of Mathematics & Physics, Founded Upon the Lectures of J. W. Gibbs.
• 1904: The Foundations of Mathematics. Bulletin of the American Mathematical Society 11(2), pp. 74–93. (A review of The Principles of Mathematics and An Essay on the Foundations of Geometry by Bertrand Russell.)
• 1912: Advanced Calculus. (Link from Internet Archive.)
• 1912: (with Gilbert N. Lewis) The Space-Time Manifold of Relativity. The Non-Euclidean Geometry of Mechanics and Electromagnetics Proceedings of the American Academy of Arts and Sciences 48(11), pp. 389-507.[28]
• 1920: Aeronautics: A Class Text. John Wiley & Sons. (Link from Internet Archive.)
• 1927: Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association 22(158), pp. 209-212.
See also
• Wilson score interval
Notes
1. "Obituary: Edwin B. Wilson". Physics Today. 18 (6): 88. June 1965. doi:10.1063/1.3047526.
2. How I Became an Economist by Paul A. Samuelson, 1970 Laureate in Economics, 5 September 2003
3. Hunsaker and MacLane (1973), p. 285
4. Hunsaker and MacLane (1973), p. 286
5. Lindsay and King (1964)
6. Lindsay and King (1964)
7. Lindsay and King (1964)
8. Hunsaker and MacLane (1973)
9. Hunsaker and MacLane (1973), pp. 285-286
10. Hunsaker and MacLane (1973)
11. Hunsaker and MacLane (1973), p. 285
12. Hunsaker and MacLane (1973), p. 286
13. Hunsaker and MacLane (1973), p. 286
14. Hunsaker and MacLane (1973), p. 287
15. List of ASA Fellows, retrieved 2016-07-16.
16. O'Connor and Robertson (2000)
17. American Philosophical Society. "John Frederick Lewis Award". Retrieved 19 August 2021.
18. Hunsaker and MacLane (1973), p. 286
19. George F. Doherty & Sons Funeral Homes (2014) (7 May 2014). "Obituary, Doris Wilson". Retrieved 19 August 2021.
20. George F. Doherty & Sons Funeral Home (2014). "Obituary, Enid Wilson". 14 April 2014. Retrieved 19 August 2021.
21. BillionGraves (2014). "Grave site information - Edwin Bidwell Wilson". Retrieved 19 August 2021.
22. E.B. Wilson (1902) Vector Analysis: A Text-book for the Use of Students of Mathematics and Physics, based upon the lectures of Willard Gibbs
23. Hunsaker and MacLane (1973), p. 288
24. "Products in Additive Fields von E. B. Wilson aus New Haven". Verhandlungen des dritten Internationalen Mathematiker-Kongress, Heidelberg, 1904. ICM proceedings. Leipzig: Teubner. 1905.
25. Hunsaker and MacLane (1973), p. 289-294
26. Moore, David; et al. (6 January 2017). Introduction to the Practice of Statistics. Macmillan Learning. p. 478. ISBN 9781319013387.
27. Hunsaker and MacLane (1973), p. 290-291
28. J. B. Shaw (1913) The Wilson-Lewis Algebra of Four-dimensional Space, Bulletin of the Quaternion Society via HathiTrust
References
• Jerome Hunsaker and Saunders MacLane (1973) Edwin Bidwell Wilson, Biographical Memoirs, pp. 283–320, National Academy of Sciences of USA.
• R. Bruce Lindsay and W. J. King (1964) Oral Histories, Edwin Wilson. Niels Bohr Library & Archives, American Institute of Physics. [Accessed 19 August 2021]
• O'Connor, John J.; Robertson, Edmund F., "Edwin Bidwell Wilson", MacTutor History of Mathematics Archive, University of St Andrews
• Edwin Bidwell Wilson at the Mathematics Genealogy Project
External links
• Edwin Bidwell Wilson correspondence, 1940-1945 (inclusive), 1942-1945 (bulk). H MS c364. Harvard Medical Library, Francis A. Countway Library of Medicine, Boston, Mass.
• 1907 copy of Vector Analysis
Authority control
International
• FAST
• ISNI
• VIAF
National
• Norway
• Spain
• Catalonia
• Germany
• Israel
• United States
• Japan
• Australia
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
• Trove
Other
• SNAC
• IdRef
| Wikipedia |
\begin{definition}[Definition:Feigenbaum Constants/Second]
The '''secondFeigenbaum constant''' $\alpha$ is the ratio between the width of a tine and the width of one of its two subtines (except the tine closest to the fold).
A negative sign is applied to $\alpha$ when the ratio between the lower subtine and the width of the tine is measured.
Its approximate value is given by:
:$\alpha \approx 2 \cdotp 50290 \, 78750 \, 95892 \, 82228 \, 39028 \, 73218 \, 21578 \cdots$
{{OEIS|A006891}}
\end{definition} | ProofWiki |
\begin{definition}[Definition:Prime Exponent Function]
Let $n \in \N$ be a natural number.
Let the prime decomposition of $n$ be given as:
:$\ds n = \prod_{j \mathop = 1}^k \paren {\map p j}^{a_j}$
where $\map p j$ is the prime enumeration function.
Then the exponent $a_j$ of $\map p j$ in $n$ is denoted $\paren n_j$.
If $\map p j$ does not divide $n$, then $\paren n_j = 0$.
We also define:
:$\forall n \in \N: \paren n_0 = 0$
:$\forall j \in \N: \paren 0_j = 0$
:$\forall j \in \N: \paren 1_j = 0$
Category:Definitions/Mathematical Logic
\end{definition} | ProofWiki |
On the number line shown, $AE=6$, $AB=1\frac{2}{3}$, $BC=1\frac{1}{4}$, and $DE=1\frac{1}{12}$. What is $CD$?
[asy]
unitsize(1cm);
draw((0,0)--(8,0),Arrows);
pair A,B,C,D,E;
A=(1,0); B=(8/3,0); C=(47/12,0);
D=(71/12,0); E=(7,0);
dot(A); dot(B); dot(C); dot(D); dot(E);
label("$A$",A,S);
label("$B$",B,S);
label("$C$",C,S);
label("$D$",D,S);
label("$E$",E,S);
[/asy]
Since $AB= 1\frac23$ and $BC= 1\frac14$, we have \[AC = AB+ BC = 1\frac23+1\frac14 = \frac53 + \frac54 = \frac{20}{12} + \frac{15}{12} = \frac{35}{12}.\]We have $AC + CD + DE = AE = 6$, so \[CD = AE - AC - DE = 6 - \frac{35}{12} - \frac{13}{12}=6-\frac{48}{12} = \boxed{2}.\] | Math Dataset |
pp. A44-A51
•https://doi.org/10.1364/JOSAA.444745
Off-axis optical scanning holography [Invited]
Yaping Zhang, Yongwei Yao, Jingyuan Zhang, Jung-Ping Liu, and Ting-Chung Poon
Yaping Zhang,1,5 Yongwei Yao,1 Jingyuan Zhang,1 Jung-Ping Liu,2,3,6 and Ting-Chung Poon2,4
1Yunnan Provincial Key Laboratory of Modern Information Optics, Kunming University of Science and Technology, Kunming, Yunnan 650500, China
2Department of Photonics, Feng Chia University, 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan
3Digital Optics Center, Feng Chia University, 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan
4Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Virginia 24061, USA
5e-mail: [email protected]
6e-mail: [email protected]
Yaping Zhang https://orcid.org/0000-0002-3377-3830
Jung-Ping Liu https://orcid.org/0000-0002-9195-4736
Y Zhang
Y Yao
J Zhang
J Liu
T Poon
Yaping Zhang, Yongwei Yao, Jingyuan Zhang, Jung-Ping Liu, and Ting-Chung Poon, "Off-axis optical scanning holography [Invited]," J. Opt. Soc. Am. A 39, A44-A51 (2022)
Spatial coherence analysis for optical scanning holography
Jung-Ping Liu
Appl. Opt. 54(1) A59-A66 (2015)
Is multiplexed off-axis holography for quantitative phase imaging more spatial bandwidth-efficient...
Gili Dardikman, et al.
J. Opt. Soc. Am. A 36(2) A1-A11 (2019)
Gaussian beam analysis of optical scanning holography
Bradley D. Duncan, et al.
J. Opt. Soc. Am. A 9(2) 229-236 (1992)
Compressive holography
Holographic recording
Phase retrieval
Point spread function
Original Manuscript: October 1, 2021
Revised Manuscript: November 20, 2021
Manuscript Accepted: November 26, 2021
GENERAL OPTICAL SCANNING THEORY FOR HOLOGRAPHIC IMAGING
HOLOGRAPHIC RECORDING AND RECONSTRUCTION
Coherent Holographic Recording and Reconstruction
SIMULATIONS RESULTS
Optical scanning holography (OSH) involves the principles of optical scanning and heterodyning. The use of heterodyning leads to phase-preserving, which is the basic idea of holography. While heterodyning has numerous advantages, it requires complicated and expensive electronic processing. We investigate an off-axis approach to OSH, thereby eliminating the use of heterodyning for phase retrieval. We develop optical scanning theory for holographic imaging and show that by properly designing the scanning beam, we can perform coherent and incoherent holographic recording. Simulation results are provided to verify the proposed idea.
© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
Optical scanning holography (OSH) is a single-pixel digital holographic recording technology that is highly sophisticated with many facets and applications [1,2]. Some unique applications have been demonstrated already by OSH, such as fluorescence 3D microscopy [3–5], PSF-engineered holography [6], cryptography [7,8], compressive holography [9], 3D pattern recognition [10], preprocessing and sectioning in holography [11,12], and most recently, the direct digital recording of a cylindrical hologram [13]. OSH is based on two principles: optical scanning and heterodyning. Optical scanning processors can provide flexibility as well as accuracy as they continue to offer practical applications compared to parallel coherent optical processors. Indeed, one can find an excellent example in scanning confocal microscopes [14]. Heterodyning can handle phase-sensitive detection, implying a phase-preserving procedure found in holography [15]. Instead of heterodyning, homodyning also has been employed in OSH [16]. Since the standard optical system of OSH employs an interferometric approach for either heterodyning or homodyning, recent efforts have been proposed to partially solve the environmentally sensitive aspect of the use of an interferometer during data acquisition [17,18]. In addition, to reduce the complexity of the optical system, electronic scanning instead of physical optical scanning has been investigated [19,20]. In this paper, we investigate the use of an off-axis approach to replace the use of heterodyning or homodyning for phase retrieval. Although heterodyning can reduce the noise to the shot noise limit in optical detection, it tends to complicate the electronic processing and increase the cost for the overall system. In Section 2, we present a general optical scanning theory for holographic imaging. We show that for scanning imaging, a mask in front of a photodetector will determine the coherence of the optical system. In Section 3, we show that by choosing an off-axis plane wave and an on-axis spherical wave as a scanning beam, we achieve holographic recording coherently and incoherently, depending on the size of the mask in front of the photodetector. We also show reconstruction mathematics. In Section 4, we show some simulation results to verify our proposed ideas, and finally in the last section, we offer some conclusions.
2. GENERAL OPTICAL SCANNING THEORY FOR HOLOGRAPHIC IMAGING
In standard OSH, a holographic information of a 3D object is acquired by single 2D scanning of the object with a combination of a plane wave at temporal frequency ${\omega _0}$ and a spherical wave at frequency ${\omega _0} + {\Omega}$, as illustrated in Fig. 1(a), where we assume a slide of the 3D object represented by the transparency function $t({x,y;z})$, which is complex in general. The interference of the plane wave and the spherical wave gives a Fresnel zone plate pattern at the object transparency. However, due to the temporal difference of the two waves, we actually have a dynamic FZP, which is termed the time-dependent FZP [21]. Circular fringes within the FZP move with a speed that is proportional to the temporal frequency difference ${\Omega}$. Photodetection of the two waves exiting the transparency at a photodetector performs heterodyning and gives a baseband current ${i_{{\rm base}}}$ and a heterodyne current ${i_{{\rm het}}}$ at frequency ${\Omega}$ [21]. The heterodyne frequency is the carrier that carries the holographic information of the scanned object. Now, in Fig. 1(b), we show our proposed off-axis approach. The plane wave and the spherical wave are now not aligned along the same direction. The off-axis plane wave at an offset angle ${\theta}$ causes the overall scan beam to become a static off-axis FZP as the temporal frequency of the plane wave and the spherical wave are the same. However, the static FZP is riding on a spatial carrier due to the offset angle, which would convert to a temporal carrier in the current upon scanning. The temporal frequency is ${f_t} = v{f_s}$, where $v$ is the scan speed of the scanning beam, and ${f_s}$ is the frequency of the spatial carrier. This off-axis approach to OSH is reminiscent of traditional holography in that the larger the offset plane wave angle, the more the separation of the twin image and the focused reconstructed image. However, in off-axis OSH, recording is done electronically and the spatial carrier is converted to temporal carrier through scanning. This is one of the main advantages of using scanning holography because a high-resolution recording medium is not required. Next, we will present a general optical scanning theory for holographic imaging and then, specifically as an example, we will discuss a situation when the scanning beam consists of a spherical wave and an off-axis plane wave, which leads to holographic recording of the scanned 3D object.
Fig. 1. Scanning patterns on object: (a) time-dependent FZP, and (b) static off-axis FZP with spatial carrier.
Fig. 2. 2D scanning system with a scanning beam consisting of an off-axis plane wave and an on-axis spherical wave.
Figure 2 illustrates the 2D scanning system. Scanning can be performed either by moving the optical beams over the object or by moving the object situated on an $x-y$ scanning platform with a fixed optical beam. In Fig. 2, we show a plane wave and a spherical wave that are used to 2D scan an object transparency $t({x,y;z})$, located ${z_0} + z$ away from the point source that generates the spherical wave. Lens L is a Fourier transform lens, and the mask $m({x,y})$, located at the Fourier plane of the lens and just in front of the photodetector PD, will control the size of the active area of the photodetector. The photodetector gives a scanned output in the form of a current. In general, let us assume the complex field $z + {z_0}$ away from $t({x,y;z})$ is given by $a({x,y})$. Then the scanning beam on $t({x,y;z})$ is $b({x,y;\;z + {z_0}} )$, which is the interference of the off-axis plane wave and
$$a({x,y} )\;*\;h\!\left({x,y;z + {z_0}} \right)$$
with $h({x,y;z}) = \frac{{j{k_0}}}{{2\pi {z}}}{e^{- j{k_0}z}}{e^{\frac{{- j{k_0}({{x^2} + {y^2}})}}{{2{z}}}}}$ being the spatial impulse response in Fourier optics [15,21]. The complex field after the object slide is given by $b({x^\prime - x,y^\prime - y;\;z + {z_0}}) \times t({x^\prime ,y^\prime ;z})$. The term $b({x^\prime - x,y^\prime - y;\;z + {z_0}})$ simply means that the optical beam is scanning over $t({x^\prime ,y^\prime ;z})$ according to $x = x(t) = vt$ and $y = y(t) = vt,$ where $v$ is the scanning velocity of the optical beam. The complex field after the object slide is then Fourier transformed onto the mask $m({x,y})$ placed in front of photodetector PD, giving the complex field in the plane of the mask as
$$\begin{split}&\left[{{e^{- j\frac{{{k_0}z}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}} \mathop{\iint}\nolimits_{- \infty}^\infty b\left({x^\prime - x,y^\prime - y;\;z + {z_0}} \right) }\right.\\&\quad\left.{t\left({x^\prime ,y^\prime ;z} \right){e^{\frac{{j{k_0}}}{f}\left({{x_m}x^\prime + {y_m}y^\prime} \right)}}{\rm d}x^\prime {\rm d}y{\rm ^\prime}} \right],\end{split}$$
where ${x_m}$ and ${y_m}$ are the coordinates in the plane of the mask. This field is caused by a single object slide. For a 3D object, we must integrate the field over the thickness $z$ of the 3D object, giving the total field just before the mask as
$$\begin{split}&\int {e^{- j\frac{{{k_0}z}}{{2{f^2}}}\left({{x_m^2} + {y_m^2}} \right)}} \mathop{\iint}\nolimits_{- \infty}^\infty b\left({x^\prime - x,y^\prime - y,\;z + {z_0}} \right)\\&\quad t\left({x^\prime ,y^\prime ;z} \right){e^{\frac{{j{k_0}}}{f}\left({{x_m}x^\prime + {y_m}y^\prime} \right)}}{\rm d}x^\prime {\rm d}y^{\prime}{\rm d}z.\end{split}$$
Finally, the complex field after the mask is
(1)$$\begin{split}\psi\! ({x,y;{x_m},{y_m}} )= \left[{\int {e^{- j\frac{{{k_0}z}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}}\;\mathop{\iint}\nolimits_{- \infty}^\infty b\left({x^\prime - x,y^\prime - y;\;z + {z_0}} \right) }\right.\left.{t\left({x^\prime ,y^\prime ;z} \right){e^{\frac{{j{k_0}}}{f}\left({{x_m}x^\prime + {y_m}y^\prime} \right)}}{\rm d}x^\prime {\rm d}y^{\prime}{\rm d}z} \right]m({{x_m},{y_m}} ).\end{split}$$
The photodetector responding to the intensity gives out a current $i({x,y})$ as an output by spatially integrating over the active area $D$ of the detector:
(2)$$\begin{split}i({x,y} ) &\propto {\iint _D}{\left| {\psi ({x,y;{x_m},{y_m}} )} \right|^2}{\rm d}{x_m}{\rm d}{y_m}\\& = \int \left[{\int {e^{- j\frac{{{k_0}z^\prime}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}} \mathop{\iint}\nolimits_{- \infty}^\infty b\left({x^\prime - x,y^\prime - y;\;z^\prime + {z_0}} \right) t\left({x^\prime ,y^\prime ; z^\prime} \right){e^{\frac{{j{k_0}}}{f}\left({{x_m}x^\prime + {y_m}y^\prime} \right)}}{\rm d}x^\prime {\rm d}y{\rm ^\prime}{\rm d}z^\prime} \right]\;m({{x_m},{y_m}} )\\ &\quad \times \left[{\int {e^{j\frac{{{k_0}z^{\prime \prime}}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}} \mathop{\iint}\nolimits_{- \infty}^\infty {b^*}\left({x^{\prime \prime} - x,y^{\prime \prime} - y;\;z^{\prime \prime} + {z_0}} \right) {t^*}\left({x^{\prime \prime} ,y^{\prime \prime} ;z^{\prime \prime}} \right){e^{\frac{{- j{k_0}}}{f}\left({{x_m}x^{\prime \prime} + {y_m}y^{\prime \prime}} \right)}}{\rm d}x^{\prime \prime} {\rm d}y^{{\prime \prime}}{\rm d}z^{\prime \prime}} \right]\;{m^*}({{x_m},{y_m}} ){\rm d}{x_m}{\rm d}{y_m}.\end{split}$$
By grouping all the ${x_m}$ and ${y_m}$ variables together, we can define the coherence function of the scanning system as
(3)$${\Gamma}\left({x^\prime - x^{\prime \prime} ,y^\prime - y^{\prime \prime} ;z^\prime - z^{\prime \prime}} \right)\\ = \int |m({{x_m},{y_m}} ){|^2}{e^{j\frac{{{k_0}}}{f}[{x_m}\left({x^\prime - x^{\prime \prime}} \right) + {y_m}\left({y^\prime - y^{\prime \prime}} \right)]}}{e^{- j\frac{{{k_0}\left({z^\prime - z^{\prime \prime}} \right)}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}}{\rm d}{x_m}{\rm d}{y_m}.$$
With the definition of the coherence function, Eq. (2) becomes
(4)$$\begin{split}i({x,y} ) &= \int {\Gamma}\left({x^\prime - x^{\prime \prime} ,y^\prime - y^{\prime \prime}; z^\prime - z^{\prime \prime}} \right)b\left({x^\prime - x,y^\prime - y;\;z^\prime + {z_0}} \right)t\left({x^\prime , y^\prime ; z^\prime} \right)\\ &\quad\times {b^*}\left({x^{\prime \prime} - x,y^{\prime \prime} - y; z^{\prime \prime} + {z_0}} \right){t^*}\left({x^{\prime \prime} ,y^{\prime \prime} ; z^{\prime \prime}} \right){\rm d}x^\prime {\rm d}y^\prime {\rm d}z^\prime {\rm d}x^{\prime \prime} {\rm d}y^{\prime \prime} {\rm d}z^\prime .\end{split}$$
Thus, the equation is fairly complicated for a general situation. We shall investigate two special cases.
A. Coherent Processing
For a point detector [i.e., $|m({x,y}){|^2} = \delta ({x,y})]$, Eq. (3) becomes
(5)$${\Gamma}\left({x^\prime - x^{\prime \prime} ,y^\prime - y^{\prime \prime} ;z^\prime - z^{\prime \prime}} \right)\\ = \int \delta ({{x_m},{y_m}} ){e^{j\frac{{{k_0}}}{f}[{x_m}\left({x^\prime - x^{\prime \prime}} \right) + {y_m}\left({y^\prime - y^{\prime \prime}} \right)]}}{e^{- j\frac{{{k_0}\left({z^\prime - z^{\prime \prime}} \right)}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}}{\rm d}{x_m}{\rm d}{y_m} = 1.$$
With this result, Eq. (4) becomes
$$i({x,y} ) = \int b\left({x^\prime - x,y^\prime - y;\;z^\prime + {z_0}} \right)t\left({x^\prime ,y^\prime ;z^\prime} \right) \times {b^*}(x^{\prime \prime} - x,y^{\prime \prime} - y;z^{\prime \prime} + {z_0}){t^*}(x^{\prime \prime} ,y^{\prime \prime} ;z^{\prime \prime}){\rm d}x^\prime {\rm d}y^\prime {\rm d}z^\prime {\rm d}x^{\prime \prime} {\rm d}y^{\prime \prime} {\rm d}z^{\prime \prime}.$$
Since the prime and double prime integrations can be performed separately, we rewrite the above equation to become
(6)$$\begin{split}i({x,y} ) & = \int b\left({x^\prime - x,y^\prime - y;\;z^\prime + {z_0}} \right)t\left({x^\prime ,y^\prime ;z^\prime} \right){\rm d}x^\prime {\rm d}{y^\prime}{\rm d}z^\prime \int {b^*}\left({x^{\prime \prime} - x,y^{\prime \prime} - y;\;z^{\prime \prime} + {z_0}} \right){t^*}\left({x^{\prime \prime} ,y^{\prime \prime}; z^{\prime \prime}} \right){\rm d}{x^{\prime \prime}}{\rm d}y^{\prime \prime} {\rm d}{z^{\prime \prime}}\\[-4pt] & = {\left| {\int b\left({x^\prime - x,y^\prime - y;\;z^\prime + {z_0}} \right)t\left({x^\prime ,y^\prime ;z^\prime} \right){\rm d}x^\prime {\rm d}{y^\prime}{\rm d}z^\prime} \right|^2}\\ & = {\left| {\int t({x,y;z} )*b\left({- x, - y;\;z + {z_0}} \right){\rm d}z} \right|^2},\end{split}$$
where $*$ denotes a 2D convolution involving $x$ and $y$ coordinates. This corresponds to coherent imaging with a coherent point spread function of the scanning system given by $b({- x, - y;\;z + {z_0}})$.
Fig. 3. Practical implementation of the scanning system for a 3D object.
B. Incoherent Processing
For an integrating detector (i.e., $|m({x,y}){|^2} = 1)$, Eq. (3) becomes
$$\begin{split}{\Gamma}\left({x^\prime - x^{\prime \prime} ,y^\prime - y^{\prime \prime}; z^\prime - z^{\prime \prime}} \right)= \int {e^{j\frac{{{k_0}}}{f}[{x_m}\left({x^\prime - x^{\prime \prime}} \right) + {y_m}\left({y^\prime - y^{\prime \prime}} \right)]}}{e^{- j\frac{{{k_0}\left({z^\prime - z^{\prime \prime}} \right)}}{{2{f^2}}}({{x_m}^2 + {y_m}^2} )}}{\rm d}{x_m}{\rm d}{y_m} \sim \delta \left({x^\prime - x^{\prime \prime} ,y^\prime - y^{\prime \prime}; z^\prime - z^{\prime \prime}} \right)\end{split}$$
because we may consider that the above integral takes the Fourier transform of a spherical wave of radius of curvature of ${f^2}/({z^\prime - z^{\prime \prime}})$, which can be made arbitrarily large.
(7)$$\begin{split}i({x,y} ) &= \int \delta \left({x^\prime - x^{\prime \prime} ,y^\prime - y^{\prime \prime} ;z^\prime - z^{\prime \prime}} \right)b\left({x^\prime - x,y^\prime - y;z^\prime + {z_0}} \right)t\left({x^\prime ,y^\prime ;z^\prime} \right) \\ &\quad\times {b^*}\left({x^{\prime \prime} - x,y^{\prime \prime} - y,z^{\prime \prime} + {z_0}} \right){t^*}\left({x^{\prime \prime} , y^{\prime \prime} ; z^{\prime \prime}} \right){\rm d}x^\prime {\rm d}y^\prime {\rm d}z^\prime {\rm d}x^{\prime \prime} {\rm d}y^{\prime \prime} {\rm d}z^{\prime \prime}. \\ &= \int {\left| {b\left({x^\prime - x,y^\prime - y;z^\prime + {z_0}} \right)} \right|^2}{\left| {t\left({x^\prime ,y^\prime ;z^\prime} \right)} \right|^2}{\rm d}x^\prime {\rm d}y^\prime {\rm d}z^\prime \\ &= \int {\left| {t({x,y;z} )} \right|^2}*{\left| {b\left({- x, - y;z + {z_0}} \right)} \right|^2}{\rm d}z.\end{split}$$
This performs incoherent processing as it manipulates and processes the intensity distribution of the 3D object. ${| {b({- x, - y;z + {z_0}})} |^2}$ is the intensity point spread function of the incoherent system.
We have considered extreme cases with different mask sizes. In one extreme, the mask is wide open (i.e., $|m({x,y}){|^2} = 1)$, leading to incoherent imaging; in the other extreme case, the mask is extremely small [i.e., $|m({x,y}){|^2} = \delta ({x,y})],$ leading to coherent imaging. It is not hard to envision that we have partial coherent imaging using a mask of some finite size [22,23].
3. HOLOGRAPHIC RECORDING AND RECONSTRUCTION
By manipulating the structure of the scanning beam $b({x,y;z + {z_0}})$, we would achieve a holographic recording of a 3D object being raster scanned. Figure 3 shows a practical implementation of Fig. 2, where we have shown the location of the 3D object.
${p_1}({x,y})$ and ${p_2}({x,y})$ are the pupil functions, where they are located in the front focal plane of Lens L1. For ${p_1}({x,y}) = A\delta ({x - {x_0},y})$ and ${p_2}({x,y}) = B$, the scanning beam complex profile $b({x,y;z + {z_0}})$ on the transparency can be modeled as the interference of an off-axis plane wave and an on-axis spherical wave shown in Fig. 3:
(8)$$b\left({x,y;z + {z_0}} \right) = A{e^{j{k_0} {\rm sin}\theta x}} + B\frac{{j{k_0}}}{{2\pi (z + {z_0})}}{e^{\frac{{- j{k_0}}}{{2\left({z + {z_0}} \right)}}({{x^2} + {y^2}} )}},$$
where $\sin \theta = {x_0}/f$, and we assume $A$ and $B$ are real.
A. Coherent Holographic Recording and Reconstruction
For simplicity, let us assume there is a planar object located at $z = 0$ given by $t({x,y;z}) = {t_0}({x,y})\delta (z).$ For the coherent case, from Eq. (6), we have
(9)$$\begin{split}i({x,y} ) &= {\left| {\int t({x,y;z} )*b\left({- x, - y;z + {z_0}} \right){\rm d}z} \right|^2} \\ &= {\left| {\int {t_0}({x,y} )\delta \left(z \right){\rm *}b\left({- x, - y;z + {z_0}} \right){\rm d}z} \right|^2} \\&= {\left| {{t_0}({x,y} ){*}b\left({- x, - y;{z_0}} \right)} \right|^2},\end{split}$$
where $b({x,y;{z_0}}) = A{e^{j{k_0} \sin \theta x}} + B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}})}}$ explicitly.
Writing out Eq. (9), we have a coherent hologram, ${H_{\textit{co}}}({x,y})$:
(10)$$\begin{split}i({x,y} ) &= {H_{\textit{co}}}({x,y} ) \\&= {\left| {{t_0}({x,y} )*\left[{A{e^{- j{k_0} \sin\theta x}} + B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right]} \right|^2}.\end{split}$$
Let us evaluate the first term:
$$\begin{split}&{t_0}({x,y} )*A{e^{- j{k_0} \sin\theta x}}\\& = {{\cal F}^{- 1}}\left\{{{\cal F}\left\{{{t_0}({x,y} )} \right\}{\cal F}\left\{{A{e^{- j{k_0} \sin\theta x}}} \right\}} \right\} \\ &= {{\cal F}^{- 1}}\left\{{{T_0}({{k_x},{k_y}} )4{\pi ^2}A\delta \left({{k_x} - {k_0} \sin\theta ,{k_y}} \right)} \right\} \\ &= {{\cal F}^{- 1}}\left\{{{T_0}\left({{k_0} \sin\theta ,0} \right)4{\pi ^2}A\delta \left({{k_x} - {k_0} \sin\theta ,{k_y}} \right)} \right\} \\&= D{e^{- j{k_0} \sin\theta x}},\end{split}$$
where ${\cal F}\{{{t_0}({x,y})} \} = {T_0}({{k_x},{k_y}})$ is the Fourier transform of ${t_0}({x,y})$ defined as
$${\cal F}\{{{t_0}({x,y} )} \} = {T_0}({{k_x},{k_y}} ) = \iint \nolimits_{- \infty}^\infty {t_0}({x,y} ){e^{j{k_x}x + j{k_y}y}}{\rm d}x{\rm d}y,$$
with ${k_x}$ and ${k_y}$ being the radian spatial frequencies corresponding to the coordinates $x$ and $y$, respectively. Hence, the coherent hologram from Eq. (10) becomes
(11)$$\begin{split}{H_{\textit{co}}}({x,y} ) &= {\left| {D{e^{- j{k_0} \sin\theta x}} + {t_0}({x,y} )*B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right|^2} \\ &= {\left| D \right|^2} + {\left| {{t_0}({x,y} )*B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right|^2} \\&\quad+ {D^*}{e^{j{k_0} \sin\theta x}}\left[{{t_0}({x,y} )*B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right] \\&\quad +D{e^{- j{k_0} \sin\theta x}}\left[{t_0^*({x,y} )*B\frac{{- j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right].\end{split}$$
The coherent hologram can be written to a spatial light modulator (SLM) for optical reconstruction. For example, we can illuminate the SLM with a conjugate beam (i.e., the reconstruction plane wave is conjugate to the off-axis plane wave used to scan the object). Therefore, the conjugate beam is given by $\;{e^{- j{k_0} \sin\theta x}}$. The first two terms with the hologram equation above give rise to a zeroth-order beam upon reconstruction. The third term will give rise to a virtual image reconstruction of ${t_0}({x,y})$ at a location ${z_0}$ behind the SLM (assuming transmissive) as
(12)$$\begin{split}{e^{- j{k_0} \sin\theta x}} &\times {D^*}{e^{j{k_0} \sin\theta x}}\left[{{t_0}({x,y} )*B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right]\\&*{h^*}\left({x,y;{z_0}} \right) \propto {t_0}({x,y} ).\end{split}$$
The conjugate of ${t_0}({x,y})$ is formed at $x = {z_0} \sin\theta$ as a real image reconstruction as
(13)$$\begin{split}{e^{- j{k_0} \sin\theta x}} &\times D{e^{- j{k_0} \sin\theta x}}\left[{t_0^*({x,y} )*B\frac{{- j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right]\\&*h\left({x,y;{z_0}} \right) \propto {e^{- j2{k_0} \sin\theta x}}t_0^*\left({x - 2{z_0} \sin\theta ,y} \right).\end{split}$$
The reconstruction of the coherent hologram is shown in Fig. 4. Note that the complex amplitudes of the object are reconstructed.
Fig. 4. Real and virtual image reconstruction from coherent hologram.
B. Incoherent Holographic Recording and Reconstruction
Again, we assume $t({x,y;z}) = {t_0}({x,y})\delta (z).$ For the incoherent case, Eq. (7) becomes
(14)$$i({x,y} ) = {\left| {{t_0}({x,y} )} \right|^2}*{\left| {b\left({- x, - y;{z_0}} \right)} \right|^2}.$$
From Eq. (8), we have
(15)$$\begin{split}{\left| {b\left({- x, - y;{z_0}} \right)} \right|^2} &= {\left| {A{e^{- j{k_0} \sin\theta x}} + B\frac{{j{k_0}}}{{2\pi {z_0}}}{e^{\frac{{- j{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} )}}} \right|^2} \\ &= {\left| A \right|^2} + {\left| {B\frac{{{k_0}}}{{2\pi {z_0}}}} \right|^2} + AB\frac{{{k_0}}}{{\pi {z_0}}}\\&\quad\times\sin\left[ {\frac{{{k_0}}}{{2{z_0}}}\left( {{x^2} + {y^2}} \right) - {k_0}x\sin\theta } \right]\\ &= E + F \sin\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right].\end{split}$$
Substituting Eq. (15) into Eq. (14), we have an incoherent hologram, ${H_{\rm{inco}}}({x,y})$:
(16)$$\begin{split}i({x,y} ) &= {H_{\rm{inco}}}({x,y} ) = {\left| {{t_0}({x,y} )} \right|^2}\\&\quad*\left\{{E + F \sin\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right]} \right\} \\ &= G + F{\left| {{t_0}({x,y} )} \right|^2}* \sin\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right],\end{split}$$
as ${| {{t_0}({x,y})} |^2}*E$ gives some constant $G$. Again, let us use the conjugate beam ${e^{- j{k_0} \sin\theta x}}$ as a reconstruction wave if the hologram is displayed on a SLM. The first term of the hologram equation in Eq. (16) gives a zeroth-order beam. Using $\sin\vartheta = ({{e^{{j\vartheta}}} - {e^{- j\vartheta}}})/2j$, the second term of the hologram equation gives rise to a real image:
(17)$$\begin{split}&{e^{- j{k_0} \sin\theta x}}\left\{{{{\left| {{t_0}({x,y} )} \right|}^2}*{e^{j\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right]}}} \right\}\\&*h\left({x,y;{z_0}} \right) \\ &\propto {\left| {{t_0}\left({x - 2{z_0} \sin\theta ,y} \right)} \right|^2}{e^{- j{k_0}x \sin\theta}}.\end{split}$$
The third term gives a virtual image reconstruction:
(18)$$\begin{split}&{e^{- j{k_0} \sin\theta x}}\left\{{{{\left| {{t_0}({x,y} )} \right|}^2}*{e^{- j\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right]}}} \right\}\\&*{h^*}\left({x,y;{z_0}} \right) \\&\propto {\left| {{t_0}(x )} \right|^2}{e^{- j{k_0}x \sin\theta}}.\end{split}$$
The reconstruction diagram is shown in Fig. 5. Note that the intensity distributions of the object are reconstructed.
Fig. 5. Real and virtual image reconstruction for incoherent hologram.
4. SIMULATIONS RESULTS
We present some simulations to demonstrate the coherent and incoherent processing. For coherent holographic processing, in Fig. 6(a), we show the Chinese character "Light" as a coherent object ${t_0}({x,y})$. Figure 6(b) is the intensity plot of the coherent point spread function (CPSF) given by Eq. (8) [i.e., $|b{({x,y;z + {z_0}})^2}|$]. Note that the intensity of the CPSF is of the same form of the intensity point spread function (IPSF) of the incoherent case [see Eq. (15)], a property that is identical to that obtained in heterodyne OSH [23,24]. Figures 6(c) and 6(d) are the coherent hologram given by Eq. (10) and its spectrum, respectively. Figures 7(a) and 7(b) show the real and virtual image reconstructions of the coherent hologram calculated by Eqs. (12) and (13), respectively. Note that in these reconstructions, we observe some coherent noise from the zeroth-order beam and its twin image on the focused image planes. However, this will not be a problem if the observer is far from the reconstruction plane.
Fig. 6. (a) Original Chinese character "Light" as an coherent input object, (b) intensity plot of coherent point spread function, (c) coherent hologram of (a), and (d) magnitude spectrum of coherent hologram. In (c), we deliberately make the figure larger to observe the fine fringes in the hologram.
Fig. 7. (a) Real image coherent reconstruction and (b) virtual image coherent reconstruction.
Fig. 8. (a) Original Chinese character "Light" as an incoherent input object, (b) intensity point spread function, (c) incoherent hologram of (a), and (d) magnitude spectrum of (c).
Fig. 9. (a) Real image incoherent reconstruction and (b) virtual image incoherent reconstruction.
For incoherent holographic processing, in Fig. 8(a) we show the incoherent counterpart of Fig. 6. Figure 8(a) is the Chinese character "Light" as an incoherent object ${| {{t_0}({x,y})} |^2}$. Figure 8(b) is the plot of the IPSF given by Eq. (15). Note that this is an intensity distribution. Figure 8(c) is the incoherent hologram calculated by Eq. (16) and its magnitude spectrum is plotted in Fig. 8(d). Finally in Figs. 9(a) and 9(b), we show the real and virtual image reconstruction calculated by Eqs. (17) and (18), respectively, of the incoherent hologram. Interestingly, the reconstructed images are edge-enhanced, and this can be explained as follows. By inspecting the incoherent hologram given by Eq. (16), let us concentrate on the term
$${\left| {{t_0}({x,y} )} \right|^2}* \sin\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right].$$
Its spectrum is
$$\begin{split}&{\cal F}\left\{{{{\left| {{t_0}({x,y} )} \right|}^2}} \right\}{\cal F}\left\{{\sin\left[{\frac{{{k_0}}}{{2{z_0}}}({{x^2} + {y^2}} ) - {k_0}x \sin\theta} \right]} \right\} \\[-5pt]& = {\cal F}\left\{{{{\left| {{t_0}({x,y} )} \right|}^2}} \right\}{e^{\frac{{j\left[{{{\left({{k_x} - {k_0} \sin\theta} \right)}^2} + { k}_y^2} \right]z_0}}{{2{k_0}}}}} + {\cal F}\left\{{{{\left| {{t_0}({x,y} )} \right|}^2}} \right\}\\[-5pt]&\quad{e^{\frac{{j\left[{{{\left({{k_x} + {k_0} \sin\theta} \right)}^2} + { k}_y^2} \right]z_0}}{{2{k_0}}}}}.\end{split}$$
Note that the center of ${\cal F}\{{{{| {{t_0}({x,y})} |}^2}} \}$ is at $({{k_x},{k_y}}) = ({0,0})$, while the centers of the transfer functions are at $({{k_x},{k_y}}) = ({\pm {k_0} \sin\theta ,0})$. Because only one side of ${\cal F}\{{{{| {{t_0}({x,y})} |}^2}} \}$ overlaps with the transfer function, it accomplishes single-sided filtering and the reconstructions only retrieve the high frequency of the object, thereby exhibiting the edge enhancement illustrated in Fig. 9(a) and 9(b). This aspect of incoherent holographic processing is actually an interesting and unexpected result. For holographic imaging, we should, of course, try to recover the original image. However, edge enhancement would be important if we plan to use the reconstructed image for applications such as image recognition. We also want to point out that the achievable edge enhancement is performed only perpendicular to the axis of the plane wave tilt. This would correspond to anisotropic edge extraction in image processing.
We have developed a general theory of holographic imaging by optical scanning. By suitably designing the structured scanning beam, we can perform holographic recording of a 3D object. The coherence of the optical system depends on the size of the mask in front of the photodetector. For a point detector, where the mask is a pinhole, we have coherent holographic imaging. For an integrating detector, where the mask opens widely, we have incoherent holographic imaging. It is possible to perform partial coherent holographic imaging with a mask of a finite size. We have performed computer simulations to demonstrate coherent as well as incoherent holographic imaging. For incoherent holographic imaging, we have found an interesting, yet unexpected, result: edge enhancement of the original object.
National Natural Science Foundation of China (11762009, 61865007); Yunnan Provincial Science and Technology Department (2019FA025); Yunnan Provincial Program for Foreign Talent (202105AO130015); Ministry of Science and Technology, Taiwan (109-2221-E-035-076-MY3).
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. T.-C. Poon, "Scanning holography and two-dimensional image processing by acousto-optic two-pupil synthesis," J. Opt. Soc. Am. A 2, 521–527 (1985). [CrossRef]
2. W.-C. Chieh, D. S. Dilworth, E. Liu, and E. N. Leith, "Synthetic-aperture chirp confocal imaging," Appl. Opt. 45, 501–510 (2006). [CrossRef]
3. B. W. Schilling, T.-C. Poon, G. Indebetouw, B. Storrie, K. Shinoda, Y. Suzuki, and M. H. Wu, "Three-dimensional holographic fluorescence microscopy," Opt. Lett. 22, 1506–1508 (1997). [CrossRef]
4. E. Y. Lam, X. Zhang, H. Vo, T.-C. Poon, and G. Indebetouw, "Three-dimensional microscopy and sectional image reconstruction using optical scanning holography," Appl. Opt. 48, H113–H119 (2009). [CrossRef]
5. J. Swoger, M. Martínez-Corral, J. Huisken, and E. H. K. Stelzer, "Optical scanning holography as a technique for high-resolution three-dimensional biological microscopy," J. Opt. Soc. Am. A 19, 1910–1919 (2002). [CrossRef]
6. T.-C. Poon, "Optical scanning holography–a review of recent progress," J. Opt. Soc. Korea 13, 406–415 (2009). [CrossRef]
7. T.-C. Poon, T. Kim, and K. Doh, "Optical scanning cryptography for secure wireless transmission," Appl. Opt. 42, 6496–6503 (2003). [CrossRef]
8. A. Yan, Y. Wei, Z. Hu, J. Zhang, P. W. M. Tsang, and T.-C. Poon, "Optical cryptography with biometrics for multi-depth objects," Sci. Rep. 7, 12933 (2017). [CrossRef]
9. P. W. M. Tsang, J.-P. Liu, and T.-C. Poon, "Compressive optical scanning holography," Optica 2, 476–483 (2015). [CrossRef]
10. T. Kim and T.-C. Poon, "Optical image recognition of three-dimensional objects," Appl. Opt. 38, 370–381 (1999). [CrossRef]
11. Y. Zhang, T.-C. Poon, P. W. M. Tsang, R. Wang, and L. Wang, "Review on feature extraction for 3-D incoherent image processing using optical scanning holography, IEEE Trans. Ind. Informat. 15, 6146–6154 (2019). [CrossRef]
12. Y. Zhang, R. Wang, P. Tsang, and T.-C. Poon, "Sectioning with edge extraction in optical incoherent imaging processing," OSA Contin. 3, 698–708 (2020). [CrossRef]
13. J.-P. Liu, W.-T. Chen, H.-H. Wen, and T.-C. Poon, "Recording of a curved digital hologram for orthoscopic real image reconstruction," Opt. Lett. 45, 4353–4356 (2020). [CrossRef]
14. T. Wilson and C. Sheppard, Theory and Practice of Optical Scanning Microscopy (Academic, 1984).
15. T.-C. Poon and T. Kim, Engineering Optics with MATLAB, 2nd ed. (World Scientific, 2018), p. 242.
16. J. Rosen, G. Indebetouw, and G. Brooker, "Homodyne scanning holography," Opt. Express 14, 4280–4285 (2006). [CrossRef]
17. T. Kim and T. Kim, "Coaxial scanning holography," Opt. Lett. 45, 2046–2049 (2020). [CrossRef]
18. C.-M. Tsai, H.-Y. Sie, T.-C. Poon, and J.-P. Liu, "Optical scanning holography with a polarization directed flat lens," Appl. Opt. 60, B113–B118 (2021). [CrossRef]
19. N. Yoneda, Y. Saita, and T. Nomura, "Motionless optical scanning holography," Opt. Lett. 45, 3184–3187 (2020). [CrossRef]
20. N. Yoneda, Y. Saita, and T. Nomura, "Spatially divided phase-shifting motionless optical scanning holography," OSA Contin. 3, 3523–3535 (2020). [CrossRef]
21. T.-C. Poon, Optical Scanning Holography with MATLAB (Springer, 2007), p. 102.
22. J.-P. Liu, C.-H. Guo, W.-J. Hsiao, T.-C. Poon, and P. W. M. Tsang, "Coherence experiments in single-pixel digital holography," Opt. Lett. 40, 2366–2369 (2015). [CrossRef]
23. J.-P. Liu, "Spatial coherence analysis for optical scanning holography," Appl. Opt. 54, A59–A66 (2015). [CrossRef]
24. T.-C. Poon and G. Indebetouw, "Three-dimensional point spread functions of an optical heterodyne scanning image processor," Appl. Opt. 42, 1485–1492 (2003). [CrossRef]
T.-C. Poon, "Scanning holography and two-dimensional image processing by acousto-optic two-pupil synthesis," J. Opt. Soc. Am. A 2, 521–527 (1985).
W.-C. Chieh, D. S. Dilworth, E. Liu, and E. N. Leith, "Synthetic-aperture chirp confocal imaging," Appl. Opt. 45, 501–510 (2006).
B. W. Schilling, T.-C. Poon, G. Indebetouw, B. Storrie, K. Shinoda, Y. Suzuki, and M. H. Wu, "Three-dimensional holographic fluorescence microscopy," Opt. Lett. 22, 1506–1508 (1997).
E. Y. Lam, X. Zhang, H. Vo, T.-C. Poon, and G. Indebetouw, "Three-dimensional microscopy and sectional image reconstruction using optical scanning holography," Appl. Opt. 48, H113–H119 (2009).
J. Swoger, M. Martínez-Corral, J. Huisken, and E. H. K. Stelzer, "Optical scanning holography as a technique for high-resolution three-dimensional biological microscopy," J. Opt. Soc. Am. A 19, 1910–1919 (2002).
T.-C. Poon, "Optical scanning holography–a review of recent progress," J. Opt. Soc. Korea 13, 406–415 (2009).
T.-C. Poon, T. Kim, and K. Doh, "Optical scanning cryptography for secure wireless transmission," Appl. Opt. 42, 6496–6503 (2003).
A. Yan, Y. Wei, Z. Hu, J. Zhang, P. W. M. Tsang, and T.-C. Poon, "Optical cryptography with biometrics for multi-depth objects," Sci. Rep. 7, 12933 (2017).
P. W. M. Tsang, J.-P. Liu, and T.-C. Poon, "Compressive optical scanning holography," Optica 2, 476–483 (2015).
T. Kim and T.-C. Poon, "Optical image recognition of three-dimensional objects," Appl. Opt. 38, 370–381 (1999).
Y. Zhang, T.-C. Poon, P. W. M. Tsang, R. Wang, and L. Wang, "Review on feature extraction for 3-D incoherent image processing using optical scanning holography, IEEE Trans. Ind. Informat. 15, 6146–6154 (2019).
Y. Zhang, R. Wang, P. Tsang, and T.-C. Poon, "Sectioning with edge extraction in optical incoherent imaging processing," OSA Contin. 3, 698–708 (2020).
J.-P. Liu, W.-T. Chen, H.-H. Wen, and T.-C. Poon, "Recording of a curved digital hologram for orthoscopic real image reconstruction," Opt. Lett. 45, 4353–4356 (2020).
T. Wilson and C. Sheppard, Theory and Practice of Optical Scanning Microscopy (Academic, 1984).
T.-C. Poon and T. Kim, Engineering Optics with MATLAB, 2nd ed. (World Scientific, 2018), p. 242.
J. Rosen, G. Indebetouw, and G. Brooker, "Homodyne scanning holography," Opt. Express 14, 4280–4285 (2006).
T. Kim and T. Kim, "Coaxial scanning holography," Opt. Lett. 45, 2046–2049 (2020).
C.-M. Tsai, H.-Y. Sie, T.-C. Poon, and J.-P. Liu, "Optical scanning holography with a polarization directed flat lens," Appl. Opt. 60, B113–B118 (2021).
N. Yoneda, Y. Saita, and T. Nomura, "Motionless optical scanning holography," Opt. Lett. 45, 3184–3187 (2020).
N. Yoneda, Y. Saita, and T. Nomura, "Spatially divided phase-shifting motionless optical scanning holography," OSA Contin. 3, 3523–3535 (2020).
T.-C. Poon, Optical Scanning Holography with MATLAB (Springer, 2007), p. 102.
J.-P. Liu, C.-H. Guo, W.-J. Hsiao, T.-C. Poon, and P. W. M. Tsang, "Coherence experiments in single-pixel digital holography," Opt. Lett. 40, 2366–2369 (2015).
J.-P. Liu, "Spatial coherence analysis for optical scanning holography," Appl. Opt. 54, A59–A66 (2015).
T.-C. Poon and G. Indebetouw, "Three-dimensional point spread functions of an optical heterodyne scanning image processor," Appl. Opt. 42, 1485–1492 (2003).
Brooker, G.
Chen, W.-T.
Chieh, W.-C.
Dilworth, D. S.
Doh, K.
Guo, C.-H.
Hsiao, W.-J.
Hu, Z.
Huisken, J.
Indebetouw, G.
Kim, T.
Lam, E. Y.
Leith, E. N.
Liu, E.
Liu, J.-P.
Martínez-Corral, M.
Nomura, T.
Poon, T.-C.
Rosen, J.
Saita, Y.
Schilling, B. W.
Sheppard, C.
Shinoda, K.
Sie, H.-Y.
Stelzer, E. H. K.
Storrie, B.
Suzuki, Y.
Swoger, J.
Tsai, C.-M.
Tsang, P.
Tsang, P. W. M.
Vo, H.
Wang, L.
Wang, R.
Wei, Y.
Wen, H.-H.
Wilson, T.
Wu, M. H.
Yan, A.
Yoneda, N.
Zhang, J.
Zhang, X.
Zhang, Y.
Appl. Opt. (7)
IEEE Trans. Ind. Informat. (1)
J. Opt. Soc. Am. A (2)
J. Opt. Soc. Korea (1)
Optica (1)
OSA Contin. (2)
(1) a(x,y)∗h(x,y;z+z0)
(2) [e−jk0z2f2(xm2+ym2)∬−∞∞b(x′−x,y′−y;z+z0)t(x′,y′;z)ejk0f(xmx′+ymy′)dx′dy′],
(3) ∫e−jk0z2f2(xm2+ym2)∬−∞∞b(x′−x,y′−y,z+z0)t(x′,y′;z)ejk0f(xmx′+ymy′)dx′dy′dz.
(1) ψ(x,y;xm,ym)=[∫e−jk0z2f2(xm2+ym2)∬−∞∞b(x′−x,y′−y;z+z0)t(x′,y′;z)ejk0f(xmx′+ymy′)dx′dy′dz]m(xm,ym).
(2) i(x,y)∝∬D|ψ(x,y;xm,ym)|2dxmdym=∫[∫e−jk0z′2f2(xm2+ym2)∬−∞∞b(x′−x,y′−y;z′+z0)t(x′,y′;z′)ejk0f(xmx′+ymy′)dx′dy′dz′]m(xm,ym)×[∫ejk0z′′2f2(xm2+ym2)∬−∞∞b∗(x′′−x,y′′−y;z′′+z0)t∗(x′′,y′′;z′′)e−jk0f(xmx′′+ymy′′)dx′′dy′′dz′′]m∗(xm,ym)dxmdym.
(3) Γ(x′−x′′,y′−y′′;z′−z′′)=∫|m(xm,ym)|2ejk0f[xm(x′−x′′)+ym(y′−y′′)]e−jk0(z′−z′′)2f2(xm2+ym2)dxmdym.
(4) i(x,y)=∫Γ(x′−x′′,y′−y′′;z′−z′′)b(x′−x,y′−y;z′+z0)t(x′,y′;z′)×b∗(x′′−x,y′′−y;z′′+z0)t∗(x′′,y′′;z′′)dx′dy′dz′dx′′dy′′dz′.
(5) Γ(x′−x′′,y′−y′′;z′−z′′)=∫δ(xm,ym)ejk0f[xm(x′−x′′)+ym(y′−y′′)]e−jk0(z′−z′′)2f2(xm2+ym2)dxmdym=1.
(9) i(x,y)=∫b(x′−x,y′−y;z′+z0)t(x′,y′;z′)×b∗(x′′−x,y′′−y;z′′+z0)t∗(x′′,y′′;z′′)dx′dy′dz′dx′′dy′′dz′′.
(6) i(x,y)=∫b(x′−x,y′−y;z′+z0)t(x′,y′;z′)dx′dy′dz′∫b∗(x′′−x,y′′−y;z′′+z0)t∗(x′′,y′′;z′′)dx′′dy′′dz′′=|∫b(x′−x,y′−y;z′+z0)t(x′,y′;z′)dx′dy′dz′|2=|∫t(x,y;z)∗b(−x,−y;z+z0)dz|2,
(11) Γ(x′−x′′,y′−y′′;z′−z′′)=∫ejk0f[xm(x′−x′′)+ym(y′−y′′)]e−jk0(z′−z′′)2f2(xm2+ym2)dxmdym∼δ(x′−x′′,y′−y′′;z′−z′′)
(7) i(x,y)=∫δ(x′−x′′,y′−y′′;z′−z′′)b(x′−x,y′−y;z′+z0)t(x′,y′;z′)×b∗(x′′−x,y′′−y,z′′+z0)t∗(x′′,y′′;z′′)dx′dy′dz′dx′′dy′′dz′′.=∫|b(x′−x,y′−y;z′+z0)|2|t(x′,y′;z′)|2dx′dy′dz′=∫|t(x,y;z)|2∗|b(−x,−y;z+z0)|2dz.
(8) b(x,y;z+z0)=Aejk0sinθx+Bjk02π(z+z0)e−jk02(z+z0)(x2+y2),
(9) i(x,y)=|∫t(x,y;z)∗b(−x,−y;z+z0)dz|2=|∫t0(x,y)δ(z)∗b(−x,−y;z+z0)dz|2=|t0(x,y)∗b(−x,−y;z0)|2,
(10) i(x,y)=Hco(x,y)=|t0(x,y)∗[Ae−jk0sinθx+Bjk02πz0e−jk02z0(x2+y2)]|2.
(16) t0(x,y)∗Ae−jk0sinθx=F−1{F{t0(x,y)}F{Ae−jk0sinθx}}=F−1{T0(kx,ky)4π2Aδ(kx−k0sinθ,ky)}=F−1{T0(k0sinθ,0)4π2Aδ(kx−k0sinθ,ky)}=De−jk0sinθx,
(17) F{t0(x,y)}=T0(kx,ky)=∬−∞∞t0(x,y)ejkxx+jkyydxdy,
(11) Hco(x,y)=|De−jk0sinθx+t0(x,y)∗Bjk02πz0e−jk02z0(x2+y2)|2=|D|2+|t0(x,y)∗Bjk02πz0e−jk02z0(x2+y2)|2+D∗ejk0sinθx[t0(x,y)∗Bjk02πz0e−jk02z0(x2+y2)]+De−jk0sinθx[t0∗(x,y)∗B−jk02πz0ejk02z0(x2+y2)].
(12) e−jk0sinθx×D∗ejk0sinθx[t0(x,y)∗Bjk02πz0e−jk02z0(x2+y2)]∗h∗(x,y;z0)∝t0(x,y).
(13) e−jk0sinθx×De−jk0sinθx[t0∗(x,y)∗B−jk02πz0ejk02z0(x2+y2)]∗h(x,y;z0)∝e−j2k0sinθxt0∗(x−2z0sinθ,y).
(14) i(x,y)=|t0(x,y)|2∗|b(−x,−y;z0)|2.
(15) |b(−x,−y;z0)|2=|Ae−jk0sinθx+Bjk02πz0e−jk02z0(x2+y2)|2=|A|2+|Bk02πz0|2+ABk0πz0×sin[k02z0(x2+y2)−k0xsinθ]=E+Fsin[k02z0(x2+y2)−k0xsinθ].
(16) i(x,y)=Hinco(x,y)=|t0(x,y)|2∗{E+Fsin[k02z0(x2+y2)−k0xsinθ]}=G+F|t0(x,y)|2∗sin[k02z0(x2+y2)−k0xsinθ],
(17) e−jk0sinθx{|t0(x,y)|2∗ej[k02z0(x2+y2)−k0xsinθ]}∗h(x,y;z0)∝|t0(x−2z0sinθ,y)|2e−jk0xsinθ.
(18) e−jk0sinθx{|t0(x,y)|2∗e−j[k02z0(x2+y2)−k0xsinθ]}∗h∗(x,y;z0)∝|t0(x)|2e−jk0xsinθ.
(26) |t0(x,y)|2∗sin[k02z0(x2+y2)−k0xsinθ].
(27) F{|t0(x,y)|2}F{sin[k02z0(x2+y2)−k0xsinθ]}=F{|t0(x,y)|2}ej[(kx−k0sinθ)2+ky2]z02k0+F{|t0(x,y)|2}ej[(kx+k0sinθ)2+ky2]z02k0. | CommonCrawl |
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1}
\if11 {
\title{\bf Nonnested model selection based on empirical likelihood}
\author{Jiancheng Jiang\\
Department of Mathematics and Statistics \& School of Data Science, \\ University of North Carolina at Charlotte, NC 28223, USA.\\\\
Xuejun Jiang \thanks{
Correspondence: Xuejun Jiang, Department of Statistics and Data Science, Southern University of Science and Technology, Shenzhen, 518055, China. Email: \it{[email protected]}}\hspace{.2cm}\\
Department of Statistics and Data Science,\\ Southern University of Science and Technology, Shenzhen, 518055, China.\\\\
Haofeng Wang \\
Department of Mathematics,\\ Harbin Institute of Technology, Harbin, 150001, China.}
\maketitle } \fi
\if01 {
\begin{center}
{\LARGE\bf Nonnested model selection based on \\ [0.5cm]
empirical likelihood} \end{center}
} \fi
\begin{abstract} We propose an empirical likelihood ratio test for nonparametric model selection, where the competing models may be nested, nonnested, overlapping, misspecified, or correctly specified. It compares the squared prediction errors of models based on the cross-validation and allows for heteroscedasticity of the errors of models. We develop its asymptotic distributions for comparing additive models and varying-coefficient models and extend it to test significance of variables in additive models with massive data. The method is applicable to other model comparison problems. To facilitate implementation of the test, we provide a fast calculation procedure. Simulations show that the proposed tests work well and have favorable finite sample performance over some existing approaches. The methodology is validated on an empirical application. \end{abstract}
\noindent {\it Keywords: Empirical likelihood ratio; Distributed computation; Model selection; Nonparametric smoothing; Prediction error. }
\spacingset{1.9}
\addtolength{\textheight}{.5in} \section{Introduction} \label{sec1}
In application, one often needs to decide which model works better for a given dataset among a set of misspecified models since no model is right. This motivates us to introduce a novel empirical likelihood ratio (\textsc{elr}) test to model selection. The proposed method is applicable to model selection between any two supervising learning models, which may be nested, nonnested, overlapping, misspecified, or correctly specified.
Most existing model selection methods use likelihood or information criteria, such as \textsc{aic}, \textsc{bic}, \textsc{lasso} or \textsc{scad}, etc. They are widely used in statistical theory and have made great success in practice, but cannot be directly applied to nonnested model selection. Consider, for example, selecting important genes in the non-Hodgkin's lymphoma data in Dave et al. (2004) using the famous Cox's model and the additive hazard model, based on the \textsc{lasso}. Each model may lead to a different group of important genes, but there is no general tool to judge which model is better. Since the two models are nonnested, the likelihood comparison does not make sense, and hence the \textsc{aic} and \textsc{bic} criteria cannot be used. Comparison of nonnested models also arises in time series modeling, for instance, assessment of an ARCH(7) model versus a GARCH(1,2) model.
In other situations, even if models are nested, one may have difficulty in making a decision on selecting a better model. For example, suppose there are two candidate models with \textsc{aic} values equal to 100 and 102. Then the model with an \textsc{aic} value of 100 is preferred according to this criterion. However, one cannot conclude that it is definitely better,
because one \textsc{aic} value is smaller than the other possibly due to randomness of the sample.
In other words, one does not have a clear cutoff for the difference of \textsc{aic} values to judge which model is significantly better. Therefore, there is a genuine need to develop a formal test that furnishes a critical value for nonnested model selection.
There exist many works on hypothesis testing for nonnested model selection. Cox (1961, 1962) pioneered a likelihood ratio (\textsc{lr}) test for two separate families of hypotheses and heuristically argued its asymptotic normality, which was rigorously proven by White (1982a) under regularity conditions.
In a seminal article, Vuong (1989) used the Kullback-Leibler information criterion (\textsc{klic}) to measure the closeness of a model to the true data generating process (DGP). He introduced an \textsc{lr} test for competing models, which may be nested or non-nested and correctly specified or misspecified.
However, it works only for parametric models with known distributions. Rivers and Voung (2002) extended this approach by replacing the likelihood with general lack of fit criteria, which allows for more estimation approaches, but it still works only for parametric dynamic models. Chen, Hong and Shum (2007) advanced a nonparametric \textsc{lr} test for comparing a parametric likelihood model with a parametric moment condition model, based on the \textsc{klic} criterion, which can be regarded as extensions to the \textsc{lr} test of Vuong (1989). McElroy (2016) proposed a Whittle \textsc{lr} test for nonnested model selection. This approach also employs the \textsc{klic} criterion and is designed for comparing two parametric time series models with spectral densities.
However, the above \textsc{klic}-criterion based tests have different limiting distributions, depending on whether the two models are overlapping or not, and whether one of the model is correctly specified or not. They require one to pretest which distribution to use before applying them. As a consequence, they are basically two-step test procedures, which may induce the nonuniformity phenomenon of tests (Leeb and P\"otscher, 2005) and result in size distortions (Shi, 2015; Schennach and Wilhelm, 2017). In order to deal with this problem, Shi (2015) proposed a one-step nondegenerated test for nonnested models, which is a modification of the Vuong test, and Schennach and Wilhelm (2017) suggested a reweighted \textsc{lr} test for nonnested model selection. Both of the tests achieve uniform size control, but they are tailored for parametric models with densities.
Some authors introduced nonparametric extensions to the \textsc{lr} test. Fan et al. (2001) proposed generalized likelihood ratio (\textsc{glr}) tests and showed that the Wilks type of results hold for a variety of useful models, including univariate non-parametric models, varying-coefficient models, and their extensions.
Fan and Jiang (2005) developed the \textsc{glr} test for additive models based on the local polynomial fitting
and the backfitting algorithm.
Fan et al. (2001), Fan and Huang (2005), and
Fan and Jiang (2005, 2007) showed the generality of the Wilks phenomenon and enriched the applicability of the \textsc{glr} tests. However, the \textsc{glr} tests work only for nested models, require the working models contains the DGP, and generally assume homogeneity of variance. Moreover, the asymptotic distributionbs of \textsc{glr} tests explicitly depend on the bandwidth. It remains unknown if the \textsc{glr} test can be modified for nonnested model selection. Liao and Shi (2020) proposed a nondegenerate Vuong test to comparison of nonnested nonparametric models, which employs sieve approximations for M-estimation of the models, but the test requires correcting two bias terms and estimating the complicate variance, which explicitly depends on the tuning parameter in the sieve approximation. In addition, it cannot deal with heteroscedastic errors, because their assumption 4.1(a) and the natrure of their M-estimate in eq.(3.3) assume the error has a constant variance
(see also Example~\ref{ex3}).
Last but not the least, it is worthy of mentioning that there are various metrics for model comparison within the Bayesian framework. Two popular approaches among them are Bayes factors (Lewis and Raftery, 1997) and the deviance information criterion (Spiegelhalter et al., 2002). However, these methods are designed only for comparison of parametric models.
In this paper, we propose a general nonparametric test approach to model selection.
It is known that the prediction error criterion allows one to compare any two supervised statistical learning methods (parametric or nonparametric). In practice, a statistical learning procedure with a smaller average (absolute or squared) prediction error ($APE$) is usually preferred. However, if the $APE$s are close among competing models, one does not know if the $APE$s are significantly different. Furthermore, a model with a smaller $APE$ may be caused by randomness of the sample, but not because of a better model. These problems have hovered around statisticians over decades. In an effort to solve them and to perform an accurate model selection, we will resort to the idea of Owen (1988, 1989) and propose some new \textsc{elr} tests to compare prediction errors of competing models, based on the cross-validation method. The proposed tests possess the following appealing characteristics:
{\spacingset{1.3} \begin{itemize} \item[(i)] They are nonparametric tests without requiring a specific parametric structure or likelihood. \item[(ii)] The \textsc{elr} tests allow for heteroscedasticity of the errors, and their asymptotic distributions and power do not depend on the smoothing parameters. \item[(iii)] It allows one to fast implement the tests. \item[(iv)] The tests have power to detect all $\sqrt{n}$ local alternatives. \item[(v)]
The idea is applicable to comparison between any two supervising statistical learning models, nested, non-nested, overlapping, correctly specified, or misspecified. \end{itemize} }
Because of the above features, the proposed \textsc{elr} test is robust against heteroskedasticity, a striking contrast to the \textsc{glr} tests, and it can be applied to post model inference (Tibshirani et al. 2016), for example, comparison between two post \textsc{lasso} models, nested or nonnested. The \textsc{elr} test targets at comparing forecast equivalence of nonnested models, so can it be used to measure importance of explanatory variables for forecast in big data settings where mere significance tests do not make much sense (see Section~\ref{sec:bigdata}).
The empirical likelihood has been demonstrated as a powerful nonparametric tool for interval estimates (Owen, 2001). The method has many advantages over the normal approximation-based method and the bootstrap method for constructing confidence intervals,
such as the transformation respecting, the range of parameter preserving, the Bartlett correctable property, no requirement for estimating scale and skewness, and no predetermined shape requirement (Hall and La Scala, 1990). There exists a vast literature devoted to the empirical likelihood for parametric models, but relatively less work for nonparametric models. For interval estimation and hypothesis testing based on the empirical likelihood, they include but not limited to Hall (1990),
Fan and Zhang (2004),
Chen and Keilegom (2009), etc. However, all these works formulate the \textsc{elr} with some moment constraints from the estimation equations for the parameter of interest, and no one acts for nonnested models. In the construction of the proposed \textsc{elr} tests for nonnested models, we do not use moment constraints from the estimation equations. Existing techniques for the \textsc{elr} tests work only for correctly specified models and cannot be used to derive the asymptotic distributions of the proposed \textsc{elr} tests. These endow our work with challenges and intelligence. Since the \textsc{elr} tests employ the leave-one-out cross-validation (LOOCV) to calculate the prediction errors, it is computationally expensive if one directly fits the models to the data with each observation held out. Due to the nature of global polynomial spline smoother used for competing models, we are able to introduce a fast computation procedure for implementation of the \textsc{elr} tests. This procedure requires us to fit the models to the data only once. Furthermore, it is extended to test signficance of variables in additive models with massive or distributed data, and a distributed \textsc{elr} test is developed and posesses the same performance as the ideal \textsc{elr} test in the sense that there is no limited memory constraint and full data can be run on one machine.
This article is organized as follows. In Section~\ref{secRPM} we describe the methodology. The asymptotic distributions of our \textsc{elr} statistic are established whether or not the models are nested or misspecified, from which a decision rule of model selection is proposed. The fast implementation of the test is also considered. In Section~\ref{sec:bigdata}, we develop the distributed \textsc{elr} test for massive data. In Section~\ref{sec:sim} we investigate finite sample performance of \textsc{elr} tests via simulation, and in Section~\ref{sec:real} we provide an example of \textsc{elr} test on a real dataset. Conditions and technical proofs are delegated to the Appendix.
\section{Methods}\label{secRPM}
Our main objective is to develop the \textsc{elr} theory for model selection.
To expose our idea, we consider model comparison between the additive model and the varying coefficient model. For other model comparison problems, our procedure can still applied but needs to be studied on a case-by-case basis.
\subsection{Model comparison based on prediction errors}\label{sec2}
Nonlinearity relationship exists widely in statistical theory and practice. Suppose we have a random sample $\{y_{i},\mathbf X_{i}, z_i\}_{i=1}^{n}$, where $\mathbf X_i=(x_{i,1},\ldots,x_{i,p})^\top$,
and we have found that there exists some in-sample significant evidence of ``nonlinearity" between $y_{i}$ and $\mathbf X_{i}.$ We are interested in further investigating whether the documented ``nonlinearity" is the true nonlinearity between $y_i$ and $\mathbf X_i$,
or is due to the functional coefficients in a linear regression model. To deal with this problem, we conduct model selection between the functional coefficient model (Hastie and Tibshirani, 1993; Fan and Zhang, 1999; Cai, Fan and Yao, 2000) \begin{equation} y_i=\beta_0(z_i)+\sum_{j=1}^px_{i,j}\beta_j(z_i)+u_i, \label{3.1} \end{equation} and the nonparametric additive model (Hastie and Tibshirani, 1990) \begin{equation} y_i=\alpha+\sum_{j=1}^p m_j(x_{i,j})+v_i,\label{3.2} \end{equation} in the framework that both models may be wrongly specified, where $z_i$ may be a component of $X_i$ or not, and for identifiability it is assumed that $E\{m_{j}(x_{i,j})\}=0$ . Obviously, models (\ref{3.1}) and (\ref{3.2}) are nonnested in general, but nested when $p=1$ and $z_i=x_{i,1}$. They also overlap at the region where $y_i$ and $\mathbf X_i$ are linearly related.
To get the prediction errors, we first need to estimate the unknown functions of models (\ref{3.1}) and (\ref{3.2}). Various estimation methods can be applied, such as
the kernel smoother (Opsomer and Ruppert, 1997, 1998; Mammen and Park, 2006),
the spline method (Stone, 1986; Zhou, Shen and Wolfe, 1998; Huang and Shen, 2004; Li and Liang, 2008), and even the boosting learning algorithms (Freund and Schapire, 1997; Friedman, 2001). In fact, one regards an estimation approach for a given supervising statistical model as a learning algorithm, and our \textsc{elr} test can compare any two learning algorithms that provide predictions.
Then we need a good measure for assessing the performance of models (\ref{3.1}) and (\ref{3.2}).
A natural one is the prediction error from the widely used $K$-fold cross validation (CV) (Hastie and Tibshirani, 1990), even though other measures may be used. This method randomly partitions the data into $K$ roughly equal-sized parts. For the $k$th part, one uses the other $K-1$ parts of the data for training and calculates the prediction error of each fitted model when predicting the $k$th part of the data. As in Hastie et al. (2009), we let $\theta:\{1,\ldots, n\} \to \{1,\ldots, K\}$ be an indexing function that indicates the partition to which observation $i$ is allocated by the randomization, and let $\hat\alpha^{[-k]}$, $\hat{\beta}_j^{[-k]}(\cdot)$ and $\hat{m}_j^{[-k]}(\cdot)$ be fitted functions, computed with the $k$th part of the data removed. In particular, when $K=n$, $\theta(i)=i$, which corresponds to the leave-one-out CV (LOOCV). Denote by
$\hat{\varepsilon}_{1,i}= y_i-\hat{\beta}_0^{[-\theta(i)]}(z_i)-\sum_{j=1}^px_{i,j}\hat{\beta}_j^{[-\theta(i)]}(z_{i})$ and $\hat{\varepsilon}_{2,i}=y_i-\hat{\alpha}^{[-\theta(i)]}-\sum_{j=1}^p \hat{m}_j^{[-\theta(i)]}(x_{i,j})$ the prediction errors for model (\ref{3.1}) and (\ref{3.2}), respectively. Then the average (squared) prediction errors ($APE$) are $$APE_1=n^{-1}\sum_{i=1}^n \hat{\varepsilon}_{1,i}^2\,\,\, \mbox{\rm and}\,\,\, APE_2=n^{-1}\sum_{i=1}^n \hat{\varepsilon}_{2,i}^2$$ for model (\ref{3.1}) and (\ref{3.2}), respectively. Let $\hat{\xi}_{i}=\hat{\varepsilon}_{1,i}^2-\hat{\varepsilon}_{2,i}^2$. Then the difference $$APE_1-APE_2=n^{-1}\sum_{i=1}^n\hat{\xi}_i$$
is an appropriate estimate of the difference of mean squared prediction errors $$\mu_{\xi}=E(\hat{\xi}_i)=E(\hat{\varepsilon}_{1,i}^2) -E(\hat{\varepsilon}_{2,i}^2)$$ between the two models, and it can be used to compare the performance of the two models in terms of prediction. When it is significantly different from zero, it signals that the two models are not competing. Otherwise, it is an indication of forecast equivalence.
\subsection{The ELR test}
Most existing model selection methods employ the likelihood or information criteria to measure the distance of a working model to the DGP. However, for nonparametric models, the generalized likelihood ratio method works only for nested models and has some disadvantages, and the information approach \textsc{klic} is not applicable, as discussed in Section~\ref{sec1}. We here introduce an \textsc{el} approach to evaluate probability of forecast equivalence of two competing models.
As a nonparametric method, the \textsc{el} (Owen, 2001) has become a standard approach to construct interval estimates.
To use the \textsc{el}, one must specify estimating equations for the parameters of interest, but it is not necessary to estimate the variances of the estimators of parameters. The latter property endows the \textsc{el} with ability of handling heteroscedastic and asymmetric errors. Note that the nonparametric likelihood for forecast equivalence of the competing models is characterized by $$\sup\{\prod_{i=1}^n p_i: p_i\ge 0, \sum_i p_i=1,\sum_i p_i\hat\xi_i=0\}.$$ Following the idea of (Owen, 1988, 1990; Qin and Lawless, 1994), the above likelihood can be compared with nonparametric likelihood
of a saturated model without any constraints, in which all $p_i$ are equal to $1/n$. Hence, we define the logarithm of the \textsc{elr} \begin{equation}\label{elik1} R_{n,1}=-2\log\sup\bigl\{\prod_{i=1}^{n}(np_i):\, p\in \mathcal{G}\bigr\}, \end{equation} where $\mathcal{G}= \{p:\, p_i\geqslant 0, \sum_ip_i=1, \sum_i p_i \hat\xi_i=0 \}.$ Note that $\min\hat\xi_i\le \sum_i p_i \hat\xi_i\le \max \hat\xi_i.$ If $0\notin [\min\hat\xi_i, \max \hat\xi_i]$, then $\mathcal{G}$ is empty and we set $R_{n,1}=+\infty.$
In the above construction, we do not set any moment constraints from the estimation equations for both models (\ref{3.1}) and (\ref{3.2}). This is remarkably different from the classical \textsc{elr} statistics where some moment constraints from the estimation equations are placed. Using the Lagrange multiplier technique, when $\min\hat\xi_i\le 0\le \max \hat\xi_i$, we obtain that $p_i=n^{-1}\frac{1}{1+\lambda\hat\xi_i},$
where $\lambda$ satisfies that \begin{equation}\label{eq3} f(\lambda)\equiv \sum_{i=1}^{n}\hat\xi_{i}/(1+\lambda\hat\xi_i)=0. \end{equation} Let $\hat\lambda$ be the solution of equation (\ref{eq3}). Then the logarithm of the \textsc{elr} becomes \begin{equation}\label{eq4} R_{n,1}=2\sum_{i=1}^{n}\log(1+\hat\lambda\hat\xi_{i}). \end{equation}
If $\mu_{\xi}=0$, then the two models have equivalent performance in the sense that they have same prediction error on average, that is, forecast equivalence (McElroy, 2016).
The empirical likelihood ratio $R_{n,1}$ can be used to assess which model is better in terms of $APE$. In fact, if the two models perform equivalently, $R_{n,1}$ will be like a chi-squared random variable; if one model is significantly better than the other, then $R_{n,1}$ will go to infinity (see Theorem~\ref{th1} below).
Given the competing models (\ref{3.1}) and (\ref{3.2}), one usually selects the model with a smaller $APE$, but it is still unknown if the selected model is significantly better. In other words, we need to develop a test to distinguish if $APE_1-APE_2$ is significantly different from zero.
Therefore, we consider the following hypothesis testing problem: \begin{equation}\label{hypo} H_0^{(1)}: \mu_\xi= 0\,\ \mbox{\rm against}\,\ H_a^{(1)}: \mu_\xi\neq 0.
\end{equation} The null $H_0$ means that models (\ref{3.1}) and (\ref{3.2}) perform equivalently according to prediction, and the alternative represents one model is sufficiently better than the other.
\subsection{Asymptotic distributions and the decision rule}\label{sec2.3}
The $K$-fold CV is easy to implement, but for a given sample one has to select a value of $K$. For ease of notations and for convenience of technical arguments, we only consider the LOOCV. Even though our results hold for a general $K$-fold CV, but it requires $K$ to depend on sample size $n$ and involves complicated specification of the rate of $K$ going to $\infty$ as $n\to\infty$, because theoretically the $K$-fold CV provides an asymptotic unbiased prediction only when $K\to\infty.$
For fitting models (\ref{3.1}) and (\ref{3.2}), we need a smoothing method. Different smoothers can be employed, and examples include the local linear smoother (Fan and Zhang, 1999; Fan and Jiang, 2005) and the global polynomial spline smoothing (Stone, 1986; Li and Liang, 2008; Jiang and Jiang, 2011), among other. For illustration, we consider only the global polynomial spline smoothing,
which has stable performance and allows one to fast implement in the LOOCV (see the next section).
For model~\eqref{3.2}, we estimate $\alpha$ by $\bar{y}=n^{-1}\sum_{i=1}^ny_i$ and use B-spline basis functions to approximate each $m_{j}(\cdot)$. Without loss of generality, assume $\mathbf X=(x_{1},\ldots,x_{p})^\top$ takes values in $\mathcal{W}=[0,1]^p$. For approximating function $m_{j}(\cdot)$, we need a knot sequence $\bar\phi_{j}=\{\phi_{j,k}\}_{k=0}^{q_j+1}$ such that $0=\phi_{j,0}<\phi_{j,1}<\cdots<\phi_{j,q_j+1}=1$. Denoted by $\mathcal{S}(\ell_j,\bar\phi_j)$ the space of polynomial splines of order $\ell_j$ and knot sequence $\bar\phi_{j}$.
Since $\mathcal{S}(\ell_j,\bar\phi_j)$ is a $\kappa_j$-dimensional linear space with $\kappa_j=q_j+\ell_j$, for any $m_{j}\in \mathcal{S}(\ell_j,\bar\phi_j)$, there exists a local basis $\{B_{j,k}(\cdot)\}_{k=1}^{\kappa_j}$ for $\mathcal{S}(\ell_j,\bar\phi_j)$, such that $m_{j}(x_j)=\sum_{k=1}^{\kappa_j} b_{jk}B_{j,k}(x_j)$
for $j=1,\ldots,p$ (Schumaker, 1981; Jiang and Jiang, 2011). The local basis $\{B_{j,k}(\cdot)\}_{k=1}^{\kappa_j}$ depends on the knot sequence $\bar\phi_{j}$ and order $\ell_j$.
Let $\mathbf b_{j}=(b_{j1},\ldots,b_{j\kappa_j})^\top$, $\mathbf b=(\mathbf b_1^\top,\ldots,\mathbf b_p^\top)^\top$, $\mbox{\boldmath{$\Pi$}}_j(x_j)=(B_{j,1}(x_j),\ldots,B_{j,\kappa_j}(x_j))^\top,$
and
$\mbox{\boldmath{$\Pi$}}(\mathbf X)=(\mbox{\boldmath{$\Pi$}}_1^\top(x_1),\ldots,\mbox{\boldmath{$\Pi$}}_p^\top(x_p))^\top$. For simplicity, denoted by $\mathrm{y}_i=y_i-\bar y$. For any $1\leq i\leq n$, we minimize the approximated sum of squared errors \begin{equation}\label{bbs1}
\sum_{j=1(\neq i) }^n \{\mathrm{y}_j-\mbox{\boldmath{$\Pi$}}(\mathbf X_j)^\top\mathbf b\}^2 \end{equation} over $\mathbf b$, which leads to the minimizer
\begin{equation}\label{sufb}
\hat\mathbf b^{(-i)}=\bigl\{\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}\bigr\}^{-1} \bigl\{\frac{1}{n-1}\sum_{j=1(\neq i)}^{n}\mbox{\boldmath{$\Pi$}}(\mathbf X_j)\mathrm{y}_j\bigr\}, \end{equation} where $\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}=\frac{1}{n-1}\sum_{j=1(\neq i) }^n
\mbox{\boldmath{$\Pi$}}(\mathbf X_j)\mbox{\boldmath{$\Pi$}}(\mathbf X_j)^\top.$
Let
$ \hat{m}(\mathbf X_{i})=\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\hat\mathbf b^{(-i)}. $
Then the prediction error of model~\eqref{3.2} is given by \begin{eqnarray}
\hat\varepsilon_{2,i}=\mathrm{y}_i-\hat{m}(\mathbf X_{i}).\label{APE2} \end{eqnarray}
For functional coefficient model~(\ref{3.1}), we also assume $z_i$ takes values in $[0,1]$. Similarly, there exits a local basis $\{B_{j,k}(\cdot)\}_{k=1}^{\widetilde\kappa_j}$ such that $\beta_{j}(z)=\sum_{k=1}^{\widetilde\kappa_j}B_{j,k}(z)c_{jk}$ for $j=0,1,\ldots,p$. Let $\mathbf c_{j}=(c_{j1},\ldots,c_{j\widetilde\kappa_j})^\top$, $\mathbf c=(\mathbf c_0^\top,\mathbf c_1^\top,\ldots,\mathbf c_p^\top)^\top$, $\mbox{\boldmath{$\Gamma$}}_j(z)=(B_{j,1}(z),\ldots,B_{j,\widetilde\kappa_j}(z))^\top,$
and
$\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)=(\mbox{\boldmath{$\Gamma$}}_0^\top(z),x_{1}\mbox{\boldmath{$\Gamma$}}_1^\top(z),\ldots,x_p\mbox{\boldmath{$\Gamma$}}_p^\top(z))^\top.$ For any $1\leq i\leq n$, we minimize \begin{eqnarray*}
\sum_{j=1(\neq i) }^n \{y_j-\mbox{\boldmath{$\Gamma$}}(\mathbf X_j,z_j)^\top\mathbf c\}^2 \end{eqnarray*} over $\mathbf c$ and get the minimizer
$\hat\mathbf c^{(-i)}=\bigl\{\mathbf G_{n}^{(-i)}\bigr\}^{-1} \bigl\{\frac{1}{n-1}\sum_{j=1(\neq i)}^{n-1}\mbox{\boldmath{$\Gamma$}}(\mathbf X_j,z_j)y_j\bigr\},$ where $\mathbf G_{n}^{(-i)}=\frac{1}{n-1}\sum_{j=1(\neq i) }^n
\mbox{\boldmath{$\Gamma$}}(\mathbf X_j,z_j)\mbox{\boldmath{$\Gamma$}}(\mathbf X_j,z_j)^\top$. Then the prediction error of model~(\ref{3.1}) is
\begin{eqnarray}
\hat\varepsilon_{1,i}=y_i-\mbox{\boldmath{$\Gamma$}}(\mathbf X_i,z_i)^\top\hat\mathbf c^{(-i)}. \label{APE1} \end{eqnarray}
Given
$\hat{\xi}_{i}=\hat{\varepsilon}_{1,i}^2-\hat{\varepsilon}_{2,i}^2$,
we can calculate the \textsc{elr} statistic $R_{n,1}$ in \eqref{eq4}. The following theorem describes its asymptotic null distribution.
\begin{theorem}\label{th1} {\rm Assume that conditions A1 - A3 in Appendix A hold. Under $H_0^{(1)}$, $ R_{n,1}\rightarrow \chi^2_1$ in distribution, where $\chi^2_1$ is the chi-squared distribution with one degree of freedom. }\end{theorem}
Let $\chi_{1,1-\alpha}^2$ be the $(1-\alpha)$th percentile of $\chi^2_1$. By Theorem~\ref{th1}, at significance level $\alpha$, the rejection region of the \textsc{elr} test is $W=\{R_{n,1}> \chi_{1,1-\alpha}^2\}.$ To investigate the power of the proposed test, we consider the contiguous alternative of form: \begin{equation}\label{hypoab}
H_{a,n}^{(1)}: \mu_{\xi}=a_n\sigma_{\xi} n^{-1/2}, \end{equation} where $ \mu_{\xi}=E(\hat\xi_1)$, $\sigma_{\xi}$ is the standard deviation of $\hat{\xi}_1$ and greater than zero, and $a_n$ is a sequence of real numbers such that $\lim_{n\to\infty} a_n=a$. Then the power of the \textsc{elr} test can be approximated using the following theorem.
\begin{theorem}\label{th1a} {\rm Suppose conditions A1 - A3 hold. Under $H_{a,n}^{(1)}$,
$R_{n,1}\rightarrow \chi^2_1(a^2)$ if $|a|<+\infty$, and
$P( R_{n,1}\to +\infty)\to 1$ if $|a|=+\infty$, where $\chi^2_1(a^2)$ is the noncentral chi-squared distribution with one degree of freedom and noncentral parameter $a^2$.} \end{theorem}
Theorems~\ref{th1}-\ref{th1a} have an interesting implication. We can approximate the power of the test by $$P_{H_{a,n}^{(1)}}(W)\approx P(\chi^2_1(a^2)> \chi_{1,1-\alpha}^2)
=1-\{\Phi(|a|+\sqrt{\chi_{1,1-\alpha}^2})-\Phi(|a|-\sqrt{\chi_{1,1-\alpha}^2})\},$$ where $\Phi(\cdot)$ is the distribution function of ${\mathcal N}(0,1).$
The power function is increasing in $|a|$ and shares the same formula as that of the likelihood ratio test for testing $H_0:\, \mu=0$ against $H_{1n}:\,\mu=a_n\sigma n^{-1/2}$, based on an iid sample of size $n$ from the normal population ${\mathcal N}(\mu,\sigma^2).$ This suggests that the proposed test is powerful.
From Theorems \ref{th1}-\ref{th1a}, given a significant level $\alpha$, we conduct a model selection procedure based on the following decision rule: {\spacingset{1.3} \begin{itemize} \item[(i)] If $R_{n,1}<\chi_{1,1-\alpha}^2$, then we cannot reject $H_0^{(1)}: \mu_{\xi} = 0$, and we say the two models are asymptotically equivalent. \item[(ii)] If $R_{n,1}>\chi_{1,1-\alpha}^2$, one model is sufficiently better than the other. Furthermore, \begin{itemize} \item[(a)]
if $APE_1<APE_2$, model (\ref{3.1}) is better than model (\ref{3.2}); \item[(b)] if $APE_1>APE_2$, model (\ref{3.2}) is better than model (\ref{3.1}). \end{itemize} \end{itemize} }
Our \textsc{elr} test is asymptotically chi-squared under $H_0$ that the two models are forecast equivalent, no matter if the models are nested, nonnested, overlapping, correctly specified, or misspecified. As shown in Theorem~\ref{th1a}, it has nontrivial power against all local alternatives $H_{a,n}^{(1)}$ which converge to the null at rate
$\sqrt{n}$ or faster ($|a|\le +\infty$). Unlike Vuong's type of tests, we do not pretest which distribution to use for calculating the critical value, and thus the \textsc{elr} test can uniformly control the size of test as in Shi (2015) and Schennach and Wilhelm (2017). We test the forecast equivalence against non-equivalence of the two models. If the null is rejected, we retain the model with smaller $APE$; otherwise, we believe both models provide equal forecast performance. In any cases, we make this kind of conclusions, no matter if the models are nested or not and misspecified or not, which is consistent with the framework of likelihood inference under model misspecification in White (1982b).
\subsection{Fast implementation}\label{sec2.4}
The \textsc{elr} test involves the LOOCV for calculating the prediction errors, which requires fitting the models to each subsample with one observation held out. In the following we introduce a fast algorithm for computing the prediction errors.
Define the projection matrices $\mathbf P_{A}=\mathbf D(\mathbf D^\top\mathbf D)^{-1}\mathbf D^\top$ and $\mathbf P_{V}= \mathbf E(\mathbf E^\top\mathbf E)^{-1}\mathbf E^\top,$ where $\mathbf D=(\mbox{\boldmath{$\Pi$}}(\mathbf X_1), \ldots,\mbox{\boldmath{$\Pi$}}(\mathbf X_n))^\top$ and $\mathbf E=(\mbox{\boldmath{$\Gamma$}}(\mathbf X_1,z_1),\ldots,\mbox{\boldmath{$\Gamma$}}(\mathbf X_n,z_n))^\top$. Let the residual vectors be
$\mathbf e_{1}=(e_{1,1},\ldots,e_{1,n})^\top=\mathbf Y-\mathbf P_{V}\mathbf Y$ and $\mathbf e_{2}=(e_{2,1},\ldots,e_{2,n})^\top=\mathrm{Y}-\mathbf P_{A}\mathrm{Y},$ where $\mathbf Y=(y_1,\ldots,y_n)$ and $\mathrm{Y}=(\mathrm{y}_{1},\ldots,\mathrm{y}_{n})^\top$. Then, using the classical technique in linear models, we obtain that $$ \hat{\mathbf b}^{(-i)}=\hat{\mathbf b} -(1-p_{A,i})^{-1} (\mathbf D^\top\mathbf D)^{-1}\mbox{\boldmath{$\Pi$}}(\mathbf X_i)e_{2,i}, $$ $$\hat{\mathbf c}^{(-i)}=\hat{\mathbf c} -(1-p_{V,i})^{-1} (\mathbf E^\top\mathbf E)^{-1}\mbox{\boldmath{$\Gamma$}}(\mathbf X_i,z_i)e_{1,i},$$ where
$\hat\mathbf b=(\mathbf D^\top\mathbf D)^{-1}\mathbf D^\top\mathrm{Y},$
$\hat\mathbf c=(\mathbf E^\top\mathbf E)^{-1}\mathbf E^\top\mathbf Y,$
and
$p_{A,i}$ and $p_{V,i}$ are the $i$th diagonal entries of the matrices $\mathbf P_{A}$ and $\mathbf P_{V}$, respectively. Furthermore, \begin{equation}\label{eqs1} \hat\varepsilon_{1,i}=(1-p_{V,i})^{-1}e_{1,i} \,\,\mbox{\rm and}\,\, \hat\varepsilon_{2,i}=(1-p_{A,i})^{-1}e_{2,i}. \end{equation} Hence, $\hat{\xi}_{i}=\hat{\varepsilon}_{1,i}^2-\hat{\varepsilon}_{2,i}^2$ can be calculated by fitting the models to full data only.
Since $f'(\lambda)=\sum_{i=1}^n \hat{\xi}_i^2/(1+\lambda\hat{\xi}_i)^2<0,$ $f(\lambda)$ is strictly decreasing.
Then
evaluation of the \textsc{elr} statistic $R_{n,1}$ is straightforward while solving equation \eqref{eq3} to obtain $\hat\lambda$ by the Newton-Raphson iterations with initial value $\lambda=0$.
\section{Application to big data}\label{sec:bigdata}
In most cases the sample size of big data is huge, and existing statistical inference methods face up to challenges. Consider fitting a linear model with big data, for example, the p value of a t-statistic for an individual coefficient is possibly less than $5\%$. Since no model is right, the p value goes to zero as sample size $n$ goes to $\infty$, no matter how small the coefficient is.
Even if the model is correct, the p value may be very small for a nonzero coefficient as sample size $n$ gets large enough.
However, for a very small coefficient the corresponding covariate may not be of practical importance at all. That is, statistical significance may not imply practical importance in a big data setting. Naturally, one may ask how to measure practical importance of a covariate if there is a huge sample. In other words, we need some measures to calibrate the importance of explanatory variables (or their functional forms), rather than merely assessing their statistical significance. This is expected to be a challenge in statistical analysis for massive data where the memory of one machine cannot fit all the data.
\subsection{Distributed ELR test}
As we discussed before, no model is right, and any statistical model can be misspecified in practice. Therefore, it will make much more sense to make comparison between misspecified models than concentrating on statistical significance of covariates in big data settings. This is particularly relevant to economic modeling, because it is possible that more than one economic models, some of which can be even conflicting to each other, coexist in explaining the same economic phenomenon, and the existing econometric tools cannot distinguish them from each other for various reasons.
Obviously, our \textsc{elr} test can be used for this task, and in particular it can be used to compare the two models with or without an explanatory variable.
Consider modeling a massive dataset, for example, using the additive model (\ref{3.2}). To evaluate importance of the $\ell$th variable for forecast, we compare model (\ref{3.2}) with \begin{equation} y_i=\alpha+\sum_{j=1(\neq \ell)}^p m_j(x_{i,j})+v_i, \,\,\, i=1,\ldots,n.\label{3.2a1} \end{equation} Let $\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X)=(\mbox{\boldmath{$\Pi$}}_1^\top(x_1),\ldots,\mbox{\boldmath{$\Pi$}}_{\ell-1}^\top(x_{\ell-1}),\mbox{\boldmath{$\Pi$}}_{\ell+1}^\top(x_{\ell+1}),\ldots,\mbox{\boldmath{$\Pi$}}_p^\top(x_p))^\top$ and $$\mbox{\boldmath{$\Sigma$}}^{(-i)}_{n,-\ell}=\frac{1}{n-1}\sum_{j=1(\neq i) }^n
\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_j)\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_j)^\top.$$
Then, similar to \eqref{sufb}, the spline coefficient $\mathbf b$ is estimated by
\begin{equation}\label{eqj115}
\hat\mathbf b^{(-i)}_{-\ell}=\bigl\{\mbox{\boldmath{$\Sigma$}}_{n,-\ell}^{(-i)}\bigr\}^{-1} \bigl\{\frac{1}{n-1}\sum_{j=1(\neq i) }^n\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_j)\mathrm{y}_j\bigr\}. \end{equation} Similar to \eqref{APE2}, we obtain the prediction errors from model \eqref{3.2a1}: \begin{eqnarray}\label{APE3}
\hat\varepsilon_{3,i}
=\mathrm{y}_i-\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_i)^\top\hat\mathbf b^{(-i)}_{-\ell}. \end{eqnarray}
Then the difference of squared prediction errors between model \eqref{3.2a1} and \eqref{3.2} are
given by
$\hat{\eta}_i=\hat\varepsilon^{2}_{3,i}-\hat\varepsilon^{2}_{2,i},$
which can be calculated quickly
using the same technique as in \eqref{eqs1}.
Similar to \eqref{hypo}, comparing model (\ref{3.2}) to model~\eqref{3.2a1} reduces to testing
\begin{equation}\label{hypo2} H_0^{(2)}: \mu_\eta= 0\,\ \mbox{\rm against}\,\ H_a^{(2)}: \mu_\eta\neq 0,
\end{equation}
where
$\mu_\eta=E(\hat{\eta_1}).$
Using the same argument as for $R_{n,1}$, we obtain the \textsc{elr} statistic \begin{equation}\label{eq4a1} R_{n,2}=2\sum_{i=1}^{n}\log(1+\hat\nu\hat\eta_i), \end{equation} where $\hat\nu$ satisfies that $\sum_{i=1}^{n}\hat\eta_{i}/(1+\hat\nu\hat\eta_i)=0.$ As argued for $R_{n,1}$, if $0\notin [\min\hat\eta_i, \max \hat\eta_i]$, we set $R_{n,2}=+\infty.$
Large values suggest rejection of $H_0^{(2)}.$ If $H_0^{(2)}$ is rejected, then it suggests that the $\ell$th covariate is practically important for forecasting the response. This is a variable selection problem in which both models are nested but may be misspecified. Existing approaches deal with it by assuming the larger model is correctly specified. However, our \textsc{elr} test does not require this condition.
Some challenges arise when we use $R_{n,2}$ for massive or distributed data. Practically, we need to solve the computation problem since the classical computation methods for estimating $m_j$'s
and for empirical likelihood ratio (Hall and La Scala, 1990) are computationally infeasible for massive data.
We need to develop some distributed computing methods to solve this problem. The existing divide-and-conquer method (Zhang et al., 2013; Chen and Xie, 2014; Chen et al., 2019; Battey et al., 2018) can be employed in general, but the resulting limiting distribution of the test statistic should be consistent with that of the original test with full data, or some other inference methods such as the bootstrap procedure adaptive massive data (Chen and Peng, 2018) are to be advanced. In the following we will work on these problems and provide a distributed \textsc{elr} test for nonnested model selection with massive data. Remarkably, our distributed test will perform the same as the original test.
Suppose we have a massive sample of size $n=Nm$. Then we randomly split the entire dataset $\{\mathrm{y}_i,\mathbf X_i, 1\leq i\leq n\}$ into $N$ subsamples $\mathcal D_1,\ldots,\mathcal D_N$, each of which has the same size $m=n/N$. For distributed data, the full sample consists of these subsamples installed on $N$ machines at different sites. If different machines have different subsample sizes, our procedure can be straightforwardly extended. Typically, using the divide-and-conquer algorithm one fits the models with each subsample on each machine and gets the prediction error for each subsample point on each machine, and integrates them to form the \textsc{elr} test. Since each prediction error uses information from only a subsample, the resulting \textsc{elr} test will not be as powerful as the original \textsc{elr} test with full data. Even if one calculates the prediction error with full sample information, the resulting \textsc{elr} test will not have the same finite sample performance as the original \textsc{elr} test.
Instead of fitting the models to the subsample on every machine, we calculate only some sufficient statistics from each subsample and use them to estimate the spline coefficients. Then the estimated coefficients are feedback to each slaver so that the LOOCV error can be calculated.
Specifically, let $\mathcal D_k=\{\mathrm{y}_j^{(k)},\mathbf X^{(k)}_j,j=1,\ldots,m\}$ be the subsample distributed on the $k$th machine for $1\leq k\leq N$. Then there exists a one to one mapping $\nu:\, \{1,\ldots,m\}\otimes \{1,\ldots,N\}\to \{1,\ldots,n\}$ such that \begin{equation}\label{18a} i=\nu(j,k)\,\,\ \mbox{\rm and}\,\,\
(\mathrm{y}^{(k)}_{j},\mathbf X_{j}^{(k)})=(\mathrm{y}_i,\mathbf X_i)
\,\,\ \mbox{\rm for }\,\,\ i=1,\ldots,n.
\end{equation} For example, $\nu(j,k)=j+(k-1)m$ is such a mapping.
Note that the B-spline basis vector $\mbox{\boldmath{$\Pi$}}(\mathbf X_{i})$ depends on the knot sequences $\{\bar\phi_{j}\}_{j=1}^p$ with $\bar\phi_{j}=\{\phi_{j,k}\}_{k=0}^{q_j+1}$. For each subset $\mathcal D_k$, we can compute $\{\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_{j}), j=1,\ldots,m\}$ and $\{\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_{j}), j=1,\ldots,m\}$ with some given knot sequences $\{\bar\phi_{j}\}_{j=1}^p$ independent of $k$. Choice of such knot sequences for massive or distributed data will be discussed in Section~\ref{3.2a}. It follows from \eqref{18a} that, for $\alpha=0,\ell$, \begin{eqnarray*}\label{Dbspline}
\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})=\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X_{i}), \end{eqnarray*} where, with a little abuse of notations, we denote $\mbox{\boldmath{$\Pi$}}(\mathbf X_{i})$ by $\mbox{\boldmath{$\Pi$}}_{-0}(\mathbf X^{(k)}_{j})$ for convenience. Note that the LOOCV estimators for models \eqref{3.2} and \eqref{3.2a1} involve only statistics: $$\mathcal{X}_{\alpha}^{(-i)}\equiv\sum_{s=1(\neq i)}^{n}\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X_{s})\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X_{s})^\top \,\,\ \mbox{\rm and}\,\,\ \mathfrak{F}_{\alpha}^{(-i)}\equiv\sum_{s=1(\neq i)}^{n}\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X_{s})\mathrm{y}_{s}$$ for $i=1,\ldots,n$. Let $$\mathcal{A}_{\alpha}=\sum_{k=1}^{N}A_{-\alpha}^{(k)} \,\,\ \mbox{\rm and}\,\,\
\mathcal{B}_{\alpha}=\sum_{k=1}^{N}B^{(k)}_{-\alpha}, $$ where $A_{-\alpha}^{(k)}=\sum_{s=1}^{m}\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{s})\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{s})^\top$ and $B^{(k)}_{-\alpha}=\sum_{s=1}^{m}\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{s})\mathrm{y}_{s}^{(k)}$ are sufficient statistics for the subsample on the $k$th machine. These sufficient statistics can be calculated on individual machines. Then \begin{eqnarray*} \mathcal{X}_{\alpha}^{(-i)} = \mathcal{A}_{\alpha}-\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})^\top\,\,\ \mbox{\rm and}\,\,\ \mathfrak{F}_{\alpha}^{(-i)} = \mathcal{B}_{\alpha}-\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})\mathrm{y}^{(k)}_{j}. \end{eqnarray*}
Hence,
the LOOCV estimators in \eqref{sufb} and \eqref{eqj115} are rewritten as
$$\hat\mathbf b^{(k)}_{\alpha,j}=\bigl\{ \mathcal{A}_{\alpha}-\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})^\top\bigr\}^{-1} \bigl\{\mathcal{B}_{\alpha}-\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_{j})\mathrm{y}^{(k)}_{j}\bigr\},$$ respectively for $\alpha=0,\ell$.
Then distributed prediction errors from
model~\eqref{3.2} and \eqref{3.2a1} are given by \begin{eqnarray*}\label{APE2a}
\hat\varepsilon^{(k)}_{2,j}=\mathrm{y}^{(k)}_j-\mbox{\boldmath{$\Pi$}}_{-0}(\mathbf X^{(k)}_j)^\top \hat\mathbf b_{0,j}^{(k)}\,\,\ \mbox{\rm and}\,\,\
\hat\varepsilon^{(k)}_{3,j}
= \mathrm{y}_j^{(k)}-\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_j)^\top\hat\mathbf b_{\ell,j}^{(k)} , \end{eqnarray*}
respectively. Therefore, for $\mathrm{y}_j^{(k)}$ the difference of squared prediction errors from models \eqref{3.2a1} and \eqref{3.2}
is given by
$\hat{\eta}_j^{(k)}=|\hat\varepsilon^{(k)}_{3,j}|^2-|\hat\varepsilon^{(k)}_{2,j}|^2$. Then, similar to \eqref{eq4a1}, we obtain the distributed \textsc{elr} statistic \begin{equation}\label{eq4a2} R_{n,3}=2\sum_{k=1}^{N}\sum_{j=1}^{m}\log(1+\hat\tau\hat\eta^{(k)}_j), \end{equation} where $\hat\tau$ satisfies that $\sum_{k=1}^{N}\sum_{j=1}^{m}\hat\eta^{(k)}_{j}/(1+\hat\tau\hat\eta^{(k)}_j)=0.$ Again, if $0\notin [\min\hat\eta^{(k)}_j, \max \hat\eta^{(k)}_j]$, we set $R_{n,3}=+\infty.$ The root $\hat{\tau}$ of the above equation can be found via the distributed Newton-Raphson iterations, that is, be implemented on each machine.
Since $\hat\eta_{j}^{(k)}=\hat\eta_{i}$, we have $R_{n,2}=R_{n,3}$. That is, they have the same finite-sample and asymptotic performance, and distributed test $R_{n,3}$ has the same power as ideal test $R_{n,2}$ with no memory constraint. The following theorem depicts the asymptotic null distributions of the \textsc{elr} tests.
\begin{theorem}\label{th2} {\rm Suppose conditions A2 and A4 hold. Then, under $H_0^{(2)}$,
$R_{n,3}\rightarrow \chi^2_1$ in distribution. }\end{theorem}
To study the power of test, we consider testing $H_{0}^{(2)}$ against a sequence of contiguous alternatives: \begin{equation}\label{hypo1} H^{(2)}_{a,n}: \mu_{\eta}=a_n\sigma_{\eta}n^{-1/2},
\end{equation} where $\mu_{\eta}=E(\hat\eta_{1})$, $\sigma_{\eta}$ is the standard deviation of $\hat\eta_1$ and greater than zero, and $a_n$ is the same as in \eqref{hypoab}. In the following we present the alternative distributions of the \textsc{elr} tests.
\begin{theorem}\label{th2a} {\rm Assume that conditions A2 and A4 hold. Then, under $H^{(2)}_{a,n}$,
$R_{n,3}\rightarrow \chi^2_1(a^2)$
if $|a|<+\infty$, and
$P(R_{n,3}\to +\infty)\to 1$ if $|a|=+\infty$. }\end{theorem}
From Theorems \ref{th2}-\ref{th2a}, given a significant level $\alpha$, we conduct a variable selection procedure based on the following decision rule: {\spacingset{1.3} \begin{itemize} \item[(i)] If $R_{n,3}<\chi_{1,1-\alpha}^2$, then we cannot reject $H_0^{(2)}$. According to Occam's razor, we choose model~\eqref{3.2a1} without the $\ell$th covariate. \item[(ii)] If $R_{n,3}>\chi_{1,1-\alpha}^2$, it suggests the $\ell$th covariate is practically important for forecasting the response. We choose model~\eqref{3.2} as the working model. \end{itemize} }
In the above decision rule, we choose model~\eqref{3.2} when the null is rejected. This agrees with choosing the working model close to the true. Since model~\eqref{3.2} is larger, it is closer to the true than model~\eqref{3.2a1}.
\subsection{Knot selection with massive data }\label{3.2a} There are two popular ways of deciding the knots. One is to place equally spaced knot sequence, and the other is to use the
quantile knot sequence
from the empirical distribution of the underlying variable. The 1st knot choice can be easily computed since it is independent of the data. For the 2nd knot choice, it seems that the sample quantiles of the x-variables for massive or distributed data are not easy to get, but the median-searching algorithm in Harris (2012) can be adapted to the current situation. Specifically, we consider how to get the $q$th quantile of the sample $\{x_{i,1}, i=1,\ldots, n\}$
for any $q\in (0,1)$. Let $x_{(1),1}\leq x_{(2),1}\leq \cdots \leq x_{(n),1}$ be the order statistics. Then, the $q$th sample quantile is defined by \begin{equation}\label{quantile} x_{(\lfloor h\rfloor),1}+(h-\lfloor h\rfloor)\{x_{(\lceil h\rceil),1}-x_{(\lfloor h\rfloor),1}\}, \end{equation}
where $\lfloor h\rfloor$(or $\lceil h\rceil$) denotes the nearest integer to $h\equiv (n-1)q+1$, which is less (or larger) than $h$. This is the default way of defining sample quantile in software R, and is equivalent to the Excel and Python optional ``inclusive'' methods.
According to \eqref{quantile},
it suffices to find
$x_{(l),1}$ for any $1\leq l\leq n$.
Let us split
sample $\{x_{i,1}\}_{i=1}^n$
into $N$ sets $\mathcal{F}_{k}=\{x_{j,1}^{(k)},\,j=1,\ldots,m\}$ for $1\leq k\leq N$. Then our distributed algorithm proceeds as follows: {\spacingset{1.3} \begin{itemize}
\item [(i)] Randomly select a set $\mathcal{F}_{k}$ and an element $a\in\mathcal{F}_{k}$;
\item [(ii)] Compute subset $\mathcal{C}_{k'}=\{x\in \mathcal{F}_{k'}:\, x\leq a\}$ for $k'=1,\ldots,N$
and
$\mathcal{N}=\sum_{k'=1}^{N}|\mathcal{C}_{k'}|$ with $|\mathcal{C}_{k'}|$ being the number of elements of $\mathcal{C}_{k'}$;
\item [(iii)] If $\mathcal{N}=l$, the algorithm stops and returns $a$
as $x_{(l),1}$;
\item [(iv)] If $\mathcal{N}>l$, renew $\mathcal{F}_{k'}=\mathcal{C}_{k'}$ for $k'=1,\ldots,N$ and go to step (i);
\item [(v)] If $\mathcal{N}<l$, renew $\mathcal{F}_{k'}=\mathcal{F}_{k'}/\mathcal{C}_{k'}$ for $k'=1,\ldots,N$ and $l=l-\mathcal{N}$,
and go to step (i). \end{itemize} } As mentioned in Harris (2012), the computational complexity of the above algorithm is $O\{n/N + N\log(n/N)\}$. When $N\ll \sqrt{n}$, the computational complexity is simply $O(n/N)$, which decreases as $N$ increases.
For the split-and-conquer method, it usually assumes the technical condition $N\ll \sqrt{n}$, but our method works for any $1\le N\le n$.
{\spacingset{1.5} \begin{algorithm}[htbp]\caption{ Distributed \textsc{elr} algorithm }\label{A1} \footnotesize \KwIn{$\{y_i,\mathbf X_i, 1\leq i\leq n\}$, $n$, $m$, $N$, $\tau_0=0$, $\{\bar\phi_{j}\}_{j=1}^p$, $\omega$ and $\varphi$;} \KwOut{$R_{n,3}$; }
{\bf Initialization:} Randomly partition $\{\mathrm{y}_i,\mathbf X_i, 1\leq i\leq n\}$ into $N$ subsets $\{\mathrm{y}^{(k)}_j,\mathbf X^{(k)}_j,\,j=1,\ldots,m\}$ for $1\leq k\leq N$ and distribute them on $N$ machines\; {\bf Circulation:}
\For{$k=1:N$}
{ { With the knot sequences $\{\bar\phi_{j}\}_{j=1}^p$,
compute B-spline basis
$\{\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_{s}),\, s=1,\ldots,m\}$
and
$\{\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_{s}), \,s=1,\ldots,m\}$}\;
{$A_{-0}^{(k)}=\sum_{s=1}^{m}\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_{s})\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_{s})^\top$
and $A^{(k)}_{-\ell}=\sum_{s=1}^{m}\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_{s})\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_{s})^\top$}\;
{$B^{(k)}_{-0}=\sum_{s=1}^{m}\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_{s})\mathrm{y}_{s}^{(k)}$ and
$B^{(k)}_{-\ell}=\sum_{s=1}^{m}\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_{s})\mathrm{y}_{s}^{(k)}$ }\;
}
For $\alpha=0,\ell$,
compute $\mathcal{A}_{\alpha}=\sum_{k=1}^{N}A_{-\alpha}^{(k)}$, $\mathcal{B}_{\alpha}=\sum_{k=1}^{N}B^{(k)}_{-\alpha}$,
$\hat\mathbf b_{-\alpha}=\mathcal{A}_{\alpha}^{-1}\mathcal{B}_{\alpha}$;
{\bf Circulation:} \For{$k=1:N, j=1:m$}
{
{$e_{2,j}^{(k)}=\mathrm{y}^{(k)}_j-\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_j)^\top\hat\mathbf b_{-0}$,
$e_{3,j}^{(k)}=\mathrm{y}^{(k)}_j-\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_j)^\top\hat\mathbf b_{-\ell}$
}\;
{$p_{0,j}^{(k)}=\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_j)^\top\mathcal{A}_{0}^{-1}\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_j)$,
$p_{\ell,j}^{(k)}=\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_j)^\top\mathcal{A}_{\ell}^{-1}\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_j)$
}\;
{$\hat\varepsilon^{(k)}_{2,j}=(1-p_{0,j}^{(k)})^{-1}e_{2,i}^{(k)}$,
$\hat\varepsilon^{(k)}_{3,j}=(1-p_{\ell,j}^{(k)})^{-1}e_{3,j}^{(k)}$
}\;
$\hat\eta_{j}^{(k)}=|\hat\varepsilon^{(k)}_{3,j}|^2-|\hat\varepsilon^{(k)}_{2,j}|^{2}$;
} \uIf{$\min_{j,k}\hat\eta_{j}^{(k)}\leq 0\leq \max_{j,k}\hat\eta_{j}^{(k)}$}{
{\bf Iteration:} \For{$t=1:\omega$}{
{\bf Circulation:} \For{$k=1:N$} {$D_{1k}=\sum_{j=1}^{m}\frac{\hat\eta^{(k)}_{j}}{1+\tau_{t-1}\hat\eta_j^{(k)}},$
$D_{2k}= \sum_{j=1}^{m}\frac{|\hat\eta^{(k)}_{j}|^{2}}{|1+\tau_{t-1}\hat\eta_j^{(k)}|^2}$; }
{$D_1=\sum_{k=1}^{N}D_{1k}$,
$D_2=-\sum_{k=1}^{N}D_{2k}$},
$\tau_{t}=\tau_{t-1}-D_1/D_2$\;
\If{$|\tau_{t}-\tau_{t-1}|<\varphi$}{
$\hat\tau=\tau_{t}$\;
{\bf Circulation:} \For{$k=1:N$}
{$R_{n,3}^{(k)}=2\sum_{i=1}^{m}\log(1+\hat\tau\hat\eta^{(k)}_{i})$}
$R_{n,3}=\sum_{k=1}^{N}R_{n,3}^{(k)}$; } }
} \Else{ $R_{n,3}=10^{16};$ }
{\bf Return} $R_{n,3}$. \end{algorithm} }
\subsection{A distributed ELR algorithm}
Computational details of the distributed \textsc{elr} algorithm is listed in Algorithm~\ref{A1}. Note that
the full sample estimators of coefficient vector $\mathbf b$
for models \eqref{3.2} and \eqref{3.2a1} are
$\hat\mathbf b_{-\alpha}=\mathcal{A}_{\alpha}^{-1}\mathcal{B}_{\alpha}$,
respectively for $\alpha=0,\ell$. Let $e_{2,j}^{(k)}=\mathrm{y}^{(k)}_j-\mbox{\boldmath{$\Pi$}}(\mathbf X^{(k)}_j)^\top\hat\mathbf b_{-0}$ and
$e_{3,j}^{(k)}=\mathrm{y}^{(k)}_j-\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X^{(k)}_j)^\top\hat\mathbf b_{-\ell}$.
Then, similar to \eqref{eqs1},
we have the following fast calculation formulas for the LOOCV prediction errors:
$$\hat\varepsilon^{(k)}_{2,j}=(1-p_{0,j}^{(k)})^{-1}e_{2,j}^{(k)}\ \ \,
\mbox{\rm and}\,\,\
\hat\varepsilon^{(k)}_{3,j}=(1-p_{\ell,j}^{(k)})^{-1}e_{3,j}^{(k)},$$
where
$p_{\alpha,j}^{(k)}=\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_j)^\top\mathcal{A}_{\alpha}^{-1}\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X^{(k)}_j)$
for $\alpha=0,\ell$. The above formulas have been incorporated into Algorithm~\ref{A1}. In this algorithm, we calculate the sufficient statistics $A_{-\alpha}^{(k)}$ and $B_{-\alpha}^{(k)}$ on individual machines, with which we obtain
$\mathcal{A}_{\alpha}$ and $\mathcal{B}_{\alpha}$. Then
the full sample coefficient estimator $\hat\mathbf b_{-\alpha}$ and the LOOCV prediction errors are calculated. At last, we evaluate test statistic $R_{n,3}$.
\section{Simulations}\label{sec:sim}
To investigate the size and power of our \textsc{elr} test for nonnested models,
we conduct simulations for model selection
in different situations. For each of the following examples, we run 600 simulations,
and for each simulation we generated an iid sample
from the DGP. For each simulation, the cubic B-splines were used to estimate the unknown functions in the working models, and the leave-one-out CV method was employed to calculate the $APE$. The number of knots is chosen by an adjusted $APE$ criterion. Specifically, for model~\eqref{3.2}
$\{\kappa_{j}, j=1,\ldots,p\}$
were chosen by minimizing the adjusted $APE$
$$APE_{adj}= \frac{1}{n-\kappa}\sum_{i=1}^n|\hat\varepsilon_{2,i}|^{2},$$ where $\kappa=\sum_{j=1}^p\kappa_{j}$.
For models~\eqref{3.1} and~\eqref{3.2a1}, the $APE_{adj}$ is defined similarly but with $(\hat\varepsilon_{2,i},\kappa)$ replaced by $(\hat\varepsilon_{1,i},\tilde\kappa)$ and $(\hat\varepsilon_{3,i},\kappa-\kappa_{\ell})$, respectively.
In Example \ref{ex2}, we investigate if the proposed \textsc{elr} test works for mis-specified nonnested models with heteroscedasticity. In Example \ref{ex3}, we compare our test with the \textsc{glr} test in Fan and Jiang (2005, JASA) and the uniform Vuong (\textsc{unv}) test in Liao and Shi (2020), we also study robustness of these tests.
In Example~\ref{ex4}, we consider our distributed \textsc{elr} test for massive data.
\begin{example}\label{ex2}{\rm Consider model selection between varying-coefficient model $y_{i}=\beta_0(z_{i})+\beta_1(z_{i})x_{i,1}+\beta_2(z_{i})x_{i,2}+u_{i}$ and additive model $y_{i}=\alpha+m_{1}(x_{i,1})+m_{2}(x_{i,2})+v_{i}$, with iid samples generated from \begin{eqnarray}\label{sim2} y_i&=&0.5(x_{i,1}+x_{i,2})+\theta\{x_{i,1}\exp(1+z_{i})+x_{i,2}1(z_i>0.5)+1.5\cos(\pi z_{i})\}\nonumber\\ &&+\tau\{\exp(x_{i,1})\cos( x_{i,1})+0.5\sin(x_{i,2})\}+\sin(\pi x_{i,1})\varepsilon_i, \end{eqnarray} where $(x_{i,1}, x_{i,2})$ are bivariate normally distributed with standard normal marginals and correlation coefficient $0.5$,
$z_i\sim U(0,1)$,
and $\varepsilon_i\sim N(0,1).$
The varying coefficient models and the additive model are nonnested. When $\theta=\tau=0$, both models are correctly specified; when $\theta=0$ and $\tau\neq 0$, the additive model is correctly specified; when $\theta\neq 0$ and $\tau=0$, the varying coefficient model is correctly specified; when $\theta\neq 0$ and $\tau\neq 0$, both models are misspecified. We set different values of $\theta$ and $\tau$ to evaluate the size and power of our test. Since the GDP has changing variance, it allows us to evaluate the performance of our \textsc{elr} test when the error is heteroscedastic. {\spacingset{1.3} \begin{table}[htbp] \centering\small \caption{Null rejection rates (\%) of \textsc{elr} tests at significance level 5\%(left cell) and 10\%(right cell) for Example~\ref{ex2}}\label{tab2}
\begin{tabular}{|c| c| c| c | c| c | c|} \hline n & \multicolumn{6}{c}{$(\theta,\tau)$}\vline\\ \cline{2-7}
& \multicolumn{1}{c}{(0, 0)} & \multicolumn{1}{c}{(0,0.07)} &\multicolumn{1}{c}{ (0, 0.09)} & \multicolumn{1}{c}{(0, 0.12)} & \multicolumn{1}{c}{(0, 0.15)} & \multicolumn{1}{c}{(0, 0.18)} \vline\\\hline 1000 & (4.00,9.00) & (12.3,16.5) &(27.0,34.8) & (55.7,65.8) & (83.8,86.8)& (94.0,95.8)\\ 1500 &(5.33,9.33) & (38.3,44.7) &(63.5,70.8) &(87.0,91.2) & (96.5,97.8) &(99.2,100) \\ \hline Model selection &\multicolumn{1}{c}{Both}\vline &\multicolumn{1}{c}{Additive}\vline &\multicolumn{1}{c}{Additive}\vline &\multicolumn{1}{c}{Additive} \vline&\multicolumn{1}{c}{Additive} \vline&\multicolumn{1}{c}{Additive} \vline \\ \hline Sign of DAPE & &\multicolumn{1}{c}{+}\vline &\multicolumn{1}{c}{+}\vline &\multicolumn{1}{c}{+} \vline&\multicolumn{1}{c}{+} \vline&\multicolumn{1}{c}{+} \vline \\ \hline n & \multicolumn{6}{c}{$(\theta,\tau)$}\vline\\ \cline{2-7}
& \multicolumn{1}{c}{(0, 0)} & \multicolumn{1}{c}{(0.05, 0)} &\multicolumn{1}{c}{ (0.075, 0)} & \multicolumn{1}{c}{(0.1, 0)} & \multicolumn{1}{c}{(0.125, 0)} & \multicolumn{1}{c}{(0.15, 0)} \vline\\\hline 1000 & (4.00,9.00) & (45.8,58.3) & (68.5,80.7) & (88.7,94.5)& (96.7,98.3)& (99.0,99.3)\\ 1500 &(5.33,9.33) & (57.8,71.0) &(80.2,87.7) &(96.0,97.8) & (99.5,99.7) &(100,100) \\ \hline Model selection &\multicolumn{1}{c}{Both}\vline &\multicolumn{1}{c}{Varying}\vline &\multicolumn{1}{c}{Varying}\vline &\multicolumn{1}{c}{Varying} \vline&\multicolumn{1}{c}{Varying} \vline&\multicolumn{1}{c}{Varying} \vline \\ \hline Sign of DAPE & &\multicolumn{1}{c}{$-$}\vline &\multicolumn{1}{c}{$-$}\vline &\multicolumn{1}{c}{$-$} \vline&\multicolumn{1}{c}{$-$} \vline&\multicolumn{1}{c}{$-$} \vline \\ \hline n & \multicolumn{6}{c}{$(\theta,\tau)$}\vline\\ \cline{2-7}
& \multicolumn{1}{c}{(0.05, 0.05)} & \multicolumn{1}{c}{(0.18, 0.1)} &\multicolumn{1}{c}{ (0.18,0.05)} & \multicolumn{1}{c}{(0.05, 0.18)} & \multicolumn{1}{c}{(0.1, 0.18)} & \multicolumn{1}{c}{(0.18, 0.18)} \vline\\\hline 1000 & (13.3,22.3) &(96.5,98.5) & (99.7,100) & (64.5,74.3)& (16.3,24.0) &(29.8,39.5)\\ 1500 & (8.33,14,5) & (99.7,99.8) & (100,100) & (92.5,95.3)& (36.3,46.2)&(26.5,34.8)\\ \hline Model selection &\multicolumn{1}{c}{Varying}\vline &\multicolumn{1}{c}{Varying}\vline &\multicolumn{1}{c}{Varying}\vline &\multicolumn{1}{c}{Additive} \vline&\multicolumn{1}{c}{Additive} \vline&\multicolumn{1}{c}{Varying} \vline \\ \hline Sign of DAPE & \multicolumn{1}{c}{$-$}\vline&\multicolumn{1}{c}{$-$}\vline &\multicolumn{1}{c}{$-$}\vline &\multicolumn{1}{c}{$+$} \vline&\multicolumn{1}{c}{$+$} \vline&\multicolumn{1}{c}{$-$} \vline \\ \hline \end{tabular} {DAPE - average of the differences of APEs between model~\eqref{3.1} and model \eqref{3.2} in 600 simulations } \end{table} }
For each paired values of $(\theta,\tau)$, we calculated the null rejection rates of our \textsc{elr} tests for testing problem (\ref{hypo}) at $5\%$ and $10\%$ significance levels. The simulation results are summarized in Table~\ref{tab2}. It is seen that our \textsc{elr} test uniformly controls size over different significance levels, since the reject rates are all close to the nominal size at $(\theta,\tau)=(0,0)$. When one of $\theta$ and $\tau$ goes far away from $0$ and the other is fixed at $0$, the alternative runs further away from the null, and the rejection rate becomes higher and higher, which reveals that our test gets more and more powerful. When both $\theta$ and $\tau$ are nonzero, the two models are nonnested and misspecified, and the power gets higher as the distance between $\theta$ and $\tau$ increases. This demonstrates that our \textsc{elr} test works great here. $
\diamond$ }\end{example}
{\spacingset{1.3} \begin{table}[htbp] \centering\small \caption{Null rejection rates (\%) of the \textsc{elr}, \textsc{unv},
and \textsc{glr} tests
for Example~\ref{ex3}}\label{tab3}
\begin{tabular}{|c|c|c|c | c| c | c| c | c|} \hline DGP& Test &n &\multicolumn{6}{c}{$\tau$}\vline\\ \cline{4-9} & & &\multicolumn{1}{c}{0} \vline &\multicolumn{1}{c}{0.06}\vline & \multicolumn{1}{c}{0.08}\vline & \multicolumn{1}{c}{0.1} \vline & \multicolumn{1}{c}{0.12}\vline & \multicolumn{1}{c}{0.16} \vline\\ \hline \multirow{6}{*}{normal} &\multirow{2}{*}{ELR} &1000 & 5.17 & 16.5 &39.0 & 78.3 & 93.3& 97.3\\
& &1500 &4.50 & 26.0 &64.0 &94.7 & 98.3 &99.2 \\ \cline{2-9}
&\multirow{2}{*}{UNV} &1000 & 4.17 & 23.0 &49.3 & 83.8& 93.8& 98.0\\
& &1500 &3.83 & 34.0 &71.8 &96.2 & 99.3 &99.8 \\\cline{2-9}
&\multirow{2}{*}{GLR} &1000 & 5.83 & 20.8 &45.5 & 79.5 & 90.7& 96.7\\
& &1500 &5.00 & 34.7 &73.7 &93.2 & 99.0 &99.7 \\ \hline \multirow{6}{*}{conditionally normal} &\multirow{2}{*}{ELR} &1000 & 5.17 & 45.2 & 79.0 & 96.8& 98.3& 99.2\\
& &1500 &6.50 & 71.2 &95.5 &98.5 & 99.0&99.7 \\ \cline{2-9} &\multirow{2}{*}{UNV} &1000 & 2.80 & 61.7 &88.5 & 98.2 & 99.3 & 99.5\\
& &1500 &1.33 & 84.5 &99.0 &99.8 & 100 &100 \\\cline{2-9}
&\multirow{2}{*}{GLR} &1000 & 14.8 & 62.8 &88.7 & 97.7 & 99.5& 100\\
& &1500 & 13.3 & 79.5 &98.3 & 100 & 100& 100\\ \hline \multirow{6}{*}{conditional t(6)} &\multirow{2}{*}{ELR} &1000 & 5.17 & 23.5 &65.2 & 90.3 & 96.7&98.3 \\
& &1500 &5.67 &46.8 &89.0 &98.2 &99.3 &100\\\cline{2-9} &\multirow{2}{*}{UNV} &1000 & 2.00 & 32.3 &75.3 & 94.2 & 97.8& 98.5\\
& &1500 &0.83 & 55.0 &93.0 &99.2 & 100 &100 \\\cline{2-9}
&\multirow{2}{*}{GLR} &1000 & 13.8 & 40.0 &73.5 & 95.0 & 98.2& 99.8\\
& &1500 &12.3 & 54.0 &91.2 &99.3 & 99.5 &100 \\ \hline \multirow{6}{*}{mixed normal} &\multirow{2}{*}{ELR}&1000 & 5.83 & 12.5 & 27.0 & 62.5& 83.2& 93.2\\
& &1500 &5.33 &17.2 &51.8 &88.8 &95.7 &98.3\\\cline{2-9} &\multirow{2}{*}{UNV} &1000 & 2.00 & 12.5 &34.3 & 69.7 & 85.1& 94.5\\
& &1500 &2.00 & 16.7 &53.0 &88.3 & 95.6 &98.8 \\\cline{2-9} \ &\multirow{2}{*}{GLR} &1000 & 5.67 & 15.7 &32.0 & 60.5 & 81.0& 94.0\\
& &1500 &5.17 & 20.2 &48.8 &82.7 & 94.2 &98.2 \\ \hline \end{tabular} \end{table} }
\begin{example}\label{ex3}{\rm
Let us consider model comparison between varying-coefficient model $y_{i}=\beta_0(z_{i})+\beta_1(z_{i})x_{i,1}+u_{i}$
and additive model $y_{i}=\alpha+m_{1}(x_{i,1})+m_{2}(x_{i,2})+v_{i},$ when the true DGP is $$y_i=0.5x_{i,1}+0.25x_{i,1}\cos(x_{i,1})+\tau\exp(x_{i,2})\cos( x_{i,2})+\varepsilon_{i}, $$ where $(x_{i,1}, x_{i,2})$ are the same as in Example~\ref{ex2}, and
$\varepsilon_{i}$ is $N(0,1)$ (normal), $\sin(x_{i,2})N(0,1)$ (conditionally normal), $\sin(x_{i,2})t(6)$ (conditional t(6)), and $0.95N(0,1)+0.05 N(0,3^2)$ (mixed normal), respectively. This allows us to assess robustness of our \textsc{elr} test for model comparison under a variety of noises. We set $z_i=x_{i,1}$ to make the additive model contains the varying coefficient model, so that the \textsc{glr} test can be applied. These tests are also compared to the \textsc{unv} test.
Table~\ref{tab3} reports the powers of the three tests.
It is seen that the \textsc{elr} test not only keeps the size but also is nearly most powerful and robust against the error distributions.
For the normal and mixed normal errors, our \textsc{elr} test has nearly the same power as the \textsc{glr} test. As expected, the \textsc{glr} and \textsc{unv} tests cannot keep the size when the errors are heteroscedastic.
$
\diamond$
}\end{example}
\begin{example}\label{ex4}{\rm Consider comparing models $y_{i}=\alpha+m_{1}(x_{i,1})+m_{2}(x_{i,2})+v_{i}$ and $y_{i}=\alpha+m_{2}(x_{i,2})+v_{i}$ with massive data, when the true DGP is \begin{eqnarray}\label{sim4} y_i=\tau\exp(x_{i,1})\cos(x_{i,1})+0.1x_{i,2}(1+x_{i,2})+\varepsilon_i, \end{eqnarray} where $\varepsilon_i$ is $\sin(\pi x_{i,2}){\mathcal N}(0, 1)$ and $(x_{i,1},x_{i,2})$ are the same as in Example~\ref{ex2}. Obviously, this is a model with heteroscedascity.
{\spacingset{1.3} \begin{table}[!htbp] \centering \centering\small \caption{Null rejection rates (\%) of \textsc{elr} tests
for Example~\ref{ex4}}\label{tab4}
\begin{tabular}{|c|c|c | c| c | c| c | c|} \hline N&m & \multicolumn{6}{c}{$\tau$}\vline\\ \cline{3-8}
& & \multicolumn{1}{c}{0} \vline & \multicolumn{1}{c}{0.01} \vline &\multicolumn{1}{c}{0.015}\vline & \multicolumn{1}{c}{0.02}\vline & \multicolumn{1}{c}{0.025}\vline & \multicolumn{1}{c}{0.03} \vline\\\hline \multirow{2}{*}{1}&21000 & 5.50 & 15.0 & 48.0 &90.2 & 98.3 &99.5 \\ &42000 &5.67 & 33.3 & 84.5 &99.0 &100 &100 \\ \hline \multirow{2}{*}{50}&420 & 5.50 & 15.0 & 48.0 &90.2 & 98.3 &99.5 \\ &840 &5.67 & 33.3 & 84.5 &99.0 &100 &100 \\ \hline \multirow{2}{*}{100}&210& 5.50 & 15.0 & 48.0 &90.2 & 98.3 &99.5 \\ &420 &5.67 & 33.3 & 84.5 &99.0 &100 &100 \\ \hline \multirow{2}{*}{150}&140 & 5.50 & 15.0 & 48.0 &90.2 & 98.3 &99.5 \\ &280 &5.67 & 33.3 & 84.5 &99.0 &100 &100 \\ \hline \end{tabular} \end{table} }
We set sample size $n=21,000$ and $42,000$ for comparing full sample test $R_{n,2}$ with distributed test $R_{n,3}$. Table~\ref{tab4} reports the rejection rate of $H_0^{(2)}$ at 5\% significance level. When $\tau=0$,
the null and the alternative coincide,
and the power is the type I error probability.
As shown in the table, all powers are close to the nominal level 5\% at $\tau=0$,
indicating our tests keep the size. As $\tau$ increases, the alternative moves further away from the null,
and the rejection rate of the null gets higher and higher. Furthermore,
for different numbers of machines $N=1, 50,100,150$, $R_{n,2}$ and $R_{n,3}$ have the same performance. This implies that our distributed \textsc{elr} tests can exactly recovery the results of the original \textsc{elr} test with the whole data running on one machine.
\iffalse
\hl{GLR tests can be used for this example, but we need the codes for generating the same samples.
Since the DGP is a special additive model, we use GLR to test
$H_0:\, m_1=0$ againts $H_1:\, m_1\neq 0.$ If the null is rejected, then $X_1$ is significantly useful, and we choose the alternative model; otherwise, we choose the null model. In this way,
the power of test is just the relative frequency of choosing the alternative.
The results are reported in Table 3.*** }
\fi }\end{example}
\section{A real example}\label{sec:real}
We illustrate our method by analyzing the Boston housing dataset. This dataset contains information collected by the U.S Census Service concerning housing in the area of Boston, MA. It is available at the StatLib archive ({{\url{http://lib.stat.cmu.edu/datasets /boston}}}). The dataset consists of median values of owner-occupied homes in 506 homes and several variables that might explain the variation of housing value (Harrison and Rubinfeld, 1978; Fan and Huang, 2005). Fan and Huang (2005) considered the following seven variables: CRIM (per capita crime rate by town), RM (average number of rooms per dwelling), TAX (full-value property-tax rate per $\$$10,000), NOX (nitric oxides concentration parts per 10 million), PTRATIO (pupil-teacher ratio by town), AGE (proportion of owner-occupied units built prior to 1940),
and LSTAT (lower status of the population). For simplicity, the variables CRIM, RM, $\log(\text{TAX})$, NOX, PTRATIO and AGE are denoted by $x_{1}, x_{2},\ldots, x_{6}$, respectively.
Let $y$ be the response (median value of owner-occupied homes)
and
$z=\log(\text{LSTAT})$.
The object is to study the association between $y$ and $\mathbf X=(x_{1}, x_{2},\ldots, x_{6})$, given a sample
$\{y_i,\mathbf X_i, z_i, i=1,\ldots,n\}$ with size $n=506$.
Many authors analyzed the dataset using different models.
Examples include the additive models in Opsomer and Ruppert (1998) and Fan and Jiang (2005), and the varying coefficient model in Fan and Huang (2005), among others.
However, there is no formal model comparison among them. In the following we use our \textsc{elr} test to do this work. In all cases, the significance level is taken as $5\%$.
\begin{itemize} \item[(i)] (Varying coefficient model vs additive model)
Fan and Huang (2005) considered the varying coefficent model: \begin{equation}\label{eq3.1a1a}
E(y_i|z_i,\mathbf X_i)=\beta_0(z_i)+\sum_{j=1}^6x_{i,j}\beta_j(z_i). \end{equation}
We are interested in further investigating whether the documented ``nonlinearity" is the true nonlinearity between $y_i$ and $\mathbf X_i$,
or is due to the functional coefficients in a linear regression model. Thus, we consider the following nonparametric additive model for comparison: \begin{equation}\label{eq3.2a1a}
E(y_i-\bar y|z_i,\mathbf X_i)=m_{0}(z_i)+\sum_{j=1}^6 m_j(x_{i,j}), \end{equation} with $\bar y=n^{-1}\sum_{i=1}^n y_i$, which contains the models studied in Opsomer and Ruppert (1998) and Fan and Jiang (2005). This reduces to model selection between models~\eqref{eq3.1a1a} and~\eqref{eq3.2a1a}. Based on the sample, the value of ELR statistic is $19.33$, greater than the critical vale $\chi_{1,0.95}^2=3.84$,
and the average difference of squared prediction errors between models \eqref{eq3.1a1a} and~\eqref{eq3.2a1a}
is given by $n^{-1}\sum_{i=1}^n\hat\xi_i=19.36$. Hence, according to the decision rule below Theorem~\ref{th1a}, we choose model~\eqref{eq3.2a1a}.
\item[(ii)] (Comparison between additive models)
Opsomer and Ruppert (1998) analyzed the dataset via a four dimensional additive model: \begin{eqnarray}\label{3.2aa}
E(y_i-\bar y|z_i,\mathbf X_i)=m_{0}(z_i)+m_2(x_{i,2})+m_{3}(x_{i,3})+m_{5}(x_{i,5}). \end{eqnarray}
Based on the \textsc{glr} test for the above model,
Fan and Jiang (2005) confirmed to fit the dataset with the following semiparametric model:
\begin{eqnarray}\label{3.3aa}
E(y_i-\bar y|z_i,\mathbf X_i)=a_0z_i+m_2(x_{i,2})+a_3x_{i,3}+a_5x_{i,5}. \end{eqnarray}
First, we consider model selection between models~\eqref{3.3aa} and~\eqref{3.2aa} using our \textsc{elr} test. The realized value of ELR statistic is $14.57$, which is greater than the critical value $\chi_{1,0.95}^2$, and the average difference of squared prediction errors
between models~\eqref{3.3aa} and~\eqref{3.2aa} is given by $n^{-1}\sum_{i=1}^n\hat\xi_i=2.67$. This suggests us to choose~\eqref{3.2aa}, which agrees with the \textsc{glr} test.
Next, we compare model~\eqref{3.2aa} with model~\eqref{eq3.2a1a}. The \textsc{elr} statistic is $30.53>\chi_{1,0.95}^2$. The average difference of squared prediction errors
is given by $n^{-1}\sum_{i=1}^n\hat\xi_i=4.39$. This leads to selection of model~\eqref{eq3.2a1a}. That is, at least one of $m_{1}(\cdot)$, $m_{4}(\cdot)$ and $m_{6}(\cdot)$ are not zero.
Then, we test $H_{0\ell}:\, m_{\ell}(\cdot)=0$ against
$H_{1\ell}:\, m_{\ell}(\cdot)\neq 0$
for each $\ell=1,4,6$
in model~\eqref{eq3.2a1a}, using the \textsc{elr} test.
The results are reported in Table~\ref{R3}. {\spacingset{1.3} \begin{table}[htbp] \centering\centering\small \caption{ELR testing whether a nonparamatric function is zero} \label{R3} \begin{tabular}{cccc} \hline &$m_{1}(\cdot)$&$m_{4}(\cdot)$&$m_{6}(\cdot)$\\ \hline ELR& 8.20 & 10.92& 0.82 \\ \hline Equivalent(=)&$\neq$ &$\neq$ &$=$\\ \hline \end{tabular} \end{table} } Obviously, $m_1$ and $m_{4}$ are statistically significant, but $m_6(\cdot)$ not at $5\%$ significance level, based on individual \textsc{elr} tests or the multiple \textsc{elr} test with the Bonferroni correction. This leads to the model
\begin{eqnarray}\label{3.2aaa}
E(y_i-\bar y|z_i,\mathbf X_i)=m_{0}(z_i)+m_{1}(x_{i,1})+m_2(x_{i,2})+m_{3}(x_{i,3})+m_{4}(x_{i,4})+m_{5}(x_{i,5}). \end{eqnarray}
Last, we compare model~\eqref{3.2aaa} with model~\eqref{eq3.2a1a}. The ELR statistic is $0.82<\chi_{1,0.95}^2$. Thus, models~\eqref{eq3.2a1a} and~\eqref{3.2aaa} are equivalent. Since model \eqref{3.2aaa} is simpler, it is preferable according to the Occam's razor. This selection agrees to Table~\ref{R3}.
\end{itemize}
\begin{center} {\bf\large Appendix}: Notations and Conditions \end{center}
\newtheorem{Lemma}{Lemma} \renewcommand{A.\arabic{equation}}{A.\arabic{equation}} \setcounter{equation}{0}
For ease of exposure, we introduce some notations which will be used throughout the remainder of the paper.
Let $\mbox{\boldmath{$\Sigma$}}_{A}=E\{\mbox{\boldmath{$\Pi$}}(\mathbf X)\mbox{\boldmath{$\Pi$}}(\mathbf X)^\top\}$, $\mbox{\boldmath{$\Sigma$}}_{A,-l}=E\{\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X)\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X)^\top\}$,
and $\mbox{\boldmath{$\Sigma$}}_{C}=E\{\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)^\top\}$. Put
$\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)=\mbox{\boldmath{$\Sigma$}}_{A}^{-1/2}\mbox{\boldmath{$\Pi$}}(\mathbf X_i)$,
$\widetilde\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_i)=\mbox{\boldmath{$\Sigma$}}_{A,-l}^{-1/2} \mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_i)$,
and $\widetilde\mbox{\boldmath{$\Gamma$}}(\mathbf X_i,z_i)=\mbox{\boldmath{$\Sigma$}}_{C}^{-1/2}\mbox{\boldmath{$\Gamma$}}(\mathbf X_i,z_i)$. For $\alpha=0,-\ell$, let $$\tilde\mathbf b_{-\alpha}=\arg\min_{\mathbf b}E\{\mathrm{y}-\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X)^\top\mathbf b\}^2\,\, \ \mbox{\rm and}\,\,\ \tilde\mathbf c=\arg\min_{\mathbf c}E\{y-\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)^\top\mathbf c\}^2.$$
Then, by the first order condition, the above minimizers admit closed formulas: $$\tilde\mathbf b_{-\alpha}=\{E(\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X)\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X)^\top)\}^{-1}E\{\mbox{\boldmath{$\Pi$}}_{-\alpha}(\mathbf X)\mathrm{y}\};\ \ \tilde\mathbf c=\{E(\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)^\top)\}^{-1}E\{\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)y\}. $$ The population versions of prediction errors for models \eqref{3.1}, \eqref{3.2} and \eqref{3.2a1} are $\tilde\varepsilon_{1,i}=y_i-\mbox{\boldmath{$\Gamma$}}(\mathbf X_i,z_i)^\top\tilde\mathbf c$, $\tilde\varepsilon_{2,i}=\mathrm{y}_i-\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\tilde\mathbf b_{-0}$,
and $\tilde\varepsilon_{3,i}=\mathrm{y}_i-\mbox{\boldmath{$\Pi$}}(\mathbf X_i)_{-\ell}^\top\tilde\mathbf b_{-\ell}$, respectively. It is straightforward to verify that
\begin{equation}\label{8a} \tilde\varepsilon_{1,i}=y_i-\widetilde\mbox{\boldmath{$\Gamma$}}(\mathbf X_i,z_i)^\top E\{\widetilde\mbox{\boldmath{$\Gamma$}}(\mathbf X_1,z_1)y_1\},
\end{equation} \begin{equation}\label{9a} \tilde\varepsilon_{2,i}=\mathrm{y}_i-\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top E\{\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\},
\end{equation}
\begin{equation}\label{10a} \tilde\varepsilon_{3,i}=\mathrm{y}_i-\widetilde\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_i)^\top E\{\widetilde\mbox{\boldmath{$\Pi$}}_{-\ell}(\mathbf X_1)\mathrm{y}_1\}.
\end{equation} Then we define $\tilde\xi_i=\tilde\varepsilon_{1,i}^2-\tilde\varepsilon_{2,i}^2$
and $\tilde\eta_i=\tilde\varepsilon_{3,i}^2-\tilde\varepsilon_{2,i}^2$,
which are population versions of $\hat{\xi}_i$ and $\hat{\eta}_i$,
respectively.
To establish our theoretical results, we need some technical conditions. Let $\mathcal H_r$ be a space of functions whose $d$th order derivative is H$\ddot{o}$lder continuous of order $v$. That is,
$\mathcal H_r=\{h(\cdot):\, |h^{(d)}(a')-h^{(d)}(a)|\leq C|a'-a|^v,a,a'\in[0,1]\}$, where $h^{(d)}(\cdot)$ is $d$th derivative and $r= d+v$. If $v=1$, then $h^{(d)}(\cdot)$ is Lipschitz continuous. Assume the following conditions hold:
{\spacingset{1.3} \begin{itemize}\label{C1} \item[A1] (Varying coefficient model) (i) The eigenvalues of matrix $E\{\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)^\top\}$ are bounded away from $0$ and $\infty$; (ii) Assume that $\beta_{j}\in \mathcal H_r$,
and $\widetilde\kappa_j=O( n^{1/(2r+1)})$ for some $r> 1.5$ and $0\leq j\leq p$;
(iii) Assume there exists some $\gamma>2$ such that
$E\|\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)\|_2^{2\gamma}=O(\widetilde\kappa^\gamma)$, and
$E|y|^{2\gamma}<+\infty$, where $\widetilde\kappa=\sum_{j=0}^p\widetilde\kappa_{j}$.
\item[A2] (Additive model) (i) The eigenvalues of matrix $E\{\mbox{\boldmath{$\Pi$}}(\mathbf X)\mbox{\boldmath{$\Pi$}}(\mathbf X)^\top\}$ are bounded away from $0$ and $\infty$; (ii) Assume that $m_{j}(\cdot)\in \mathcal H_r$
and $\kappa_j=O( n^{1/(2r+1)})$ for $r>1.5$ and $1\leq j\leq p$; (iii) Assume there exists some $\gamma>2$ such that
$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_2^{2\gamma}=O(\kappa^\gamma),$
and
$E|y|^{2\gamma}<+\infty$,
where $\kappa=\sum_{j=1}^p\kappa_{j}$.
\item[A3] (Varying coefficient and additive models) Assume that
$E|\tilde\varepsilon_{1,1}|^{2\gamma}=O(1), E|\tilde\varepsilon_{2,1}|^{2\gamma}=O(1)$,
and $\text{Var}(\tilde\xi_1)> c_1$ for some constants $c_1>0$ and $\gamma>2$.
\item[A4] (Additive model) Assume that
$E|\tilde\varepsilon_{2,1}|^{2\gamma}=O(1),$
$E|\tilde\varepsilon_{3,1}|^{2\gamma}=O(1)$, and $\text{Var}(\tilde\eta_1)>c_1$ for some constants $c_1>0$ and $\gamma>2$.
\end{itemize} }
The above conditions are wild.
By Lemma 7 of Tang et al. (2013),
condition A1(i) holds.
Condition A2(i) is the same as condition A.2 of Belloni et al. (2015).
Conditions A1(ii) and A2(ii) were assumed in Theorem~1 of Tang et al. (2013). For B-spline series, Newey (1997) assumed $\sup_{x_{j}}\|\mbox{\boldmath{$\Pi$}}_{j}(x_{j})\|_{2}=O(\sqrt{\kappa_{j}})$, which implies
our condition
$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_{2}^{2\gamma}=O(\kappa^{\gamma})$ in A2(iii). Notice that $\mbox{\boldmath{$\Gamma$}}_j(z)=(B_{j,1}(z),\ldots,B_{j,\widetilde\kappa_j}(z))^\top$ and $\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)=(\mbox{\boldmath{$\Gamma$}}_0^\top(z),x_{1}\mbox{\boldmath{$\Gamma$}}_1^\top(z),\ldots,x_p\mbox{\boldmath{$\Gamma$}}_p^\top(z))^\top$.
If $E(|x_j|^{2\gamma}\mid z_j=z)$ is a bounded function of $z$ and
$E\|\mbox{\boldmath{$\Gamma$}}_j(z)\|_{2}^{2\gamma}=O(\widetilde\kappa^{\gamma}),$ then the condition
$E\|\mbox{\boldmath{$\Gamma$}}(\mathbf X,z)\|_{2}^{2\gamma}=O(\widetilde\kappa^{\gamma})$ in A1(iii) holds.
Since the squared prediction errors in the \textsc{elr} test are compared, conditions A3 and A4 assume that the $\gamma$th moments of their population versions
must be bounded away from $+\infty$, i.e. $E(|\tilde\varepsilon_{j,1}|^{2\gamma})=O(1)$ for $j=1,2,3$. This can be relaxed if one compares the median of prediction errors, but it will complicate the technical proofs of theorems. Furthermore, it is assumed in condition A3 that $\text{Var}(\tilde\xi_1)> c_1$. This condition, combined with Lemma~\ref{le4}(i), ensures that $\sigma_\xi>0$. Otherwise,
there is no need to develop a test for comparson of the two competing models. Similarly, in condition A4 it is reasonable to assume $\text{Var}(\tilde\eta_1)>c_1$.
\noindent{\bf Supplementary Material}
To save space, all technical proofs of theorems are included in the online supplementary material.
{\spacingset{1.2}
}
\setcounter{page}{1}
\begin{center} {\Large\bf Supplementary material for ``Nonnested model selection based on empirical likelihood''} \end{center}
Now we give technical proofs of our theorems. To streamline our arguments, we first introduce some technical lemmas whose proofs are reported after the proofs of theorems.
\begin{Lemma}\label{le1} \rm{ Assume conditions A1 - A3 hold. Then, for $j=1,2$,
\begin{itemize} \item[(i)]
$\max_{1\leq i\leq n}|\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i}|=O_{P}(n^{\frac{1}{2\gamma}+\frac{3}{4r+2}-\frac{1}{2}})$; (ii) $n^{-1}\sum_{i=1}^{n}(\hat\varepsilon_{j,i}-\tilde\varepsilon_{j,i})^2=O_{P}(n^{\frac{2}{2r+1}-1})$. \end{itemize}
} \end{Lemma}
\begin{Lemma}\label{le2} \rm{ Assume conditions A1 - A3 hold. Then \begin{itemize} \item[(i)] $n^{-1}\sum_{i=1}^n(\hat\xi_i-\tilde\xi_i)=o_{P}(n^{-1/2})$ ; (ii) $n^{-1}\sum_{i=1}^n (\hat\xi_i^2-\tilde\xi_i^2)=o_{P}(1)$. \end{itemize} } \end{Lemma}
\begin{Lemma}\label{le4} \rm{ Assume conditions A1 - A3 hold. Then \begin{itemize} \item[(i)] $\text{\mbox{\rm Var}}(\tilde\xi_1)=\sigma_{\xi}^2+o(1)$; \item[(ii)]
$\max_{1\leq i\leq n}|\tilde\xi_i|=O_{P}(n^{1/\gamma})$, $\max_{1\leq i\leq n}|\hat\xi_i|=O_{P}(n^{1/\gamma})$,
$E|\tilde\xi_1|^{\gamma}=O(1)$,
and $n^{-1}\sum_{i=1}^n\tilde\xi_i^2=E\tilde\xi_1^2+o_{P}(1)$;
\item[(iii)]under $H^{(1)}_{a,n}$, $a_n^{-1}\sqrt{n}E\tilde\xi_1/\sigma_{\xi}\to 1$ when $|a|=+\infty$,
and $\sqrt{n}E\tilde\xi_1/\sigma_{\xi}\to a$ when $|a|<\infty$;
\item[(iv)]under $H^{(1)}_{a,n}$, if $|a|<+\infty$, then $n^{-1}\sum_{i=1}^{n}\tilde\xi_i=O_{P}(n^{-1/2})$ and $n^{-1}\sum_{i=1}^{n}\hat\xi_i=O_{P}(n^{-1/2})$. \end{itemize} } \end{Lemma}
\begin{Lemma}\label{le6} \rm{ Assume coditions A1 - A3 hold. Under $H^{(1)}_{a,n}$,
if $|a|<\infty$,
then
$\hat\lambda=O_{P}(n^{-1/2}).$ } \end{Lemma}
\textbf{Proofs of Theorems \ref{th1}-\ref{th1a}}. Since Theorem~\ref{th1} can be proven along the same line as Theorem~\ref{th1a} (with $a_n=0$), we omit the proof of Theorem~\ref{th1}.
Case (i): $|a|<\infty$.
Notice that $\hat\lambda$ solves the equation
$\sum_{i=1}^n\hat\xi_i/(1+\hat\lambda\hat\xi_{i})=0$, which can be rewritten as $$ 0 =\sum_{i=1}^n\hat\xi_i \{1-\hat\lambda\hat\xi_i+ (\hat\lambda\hat\xi_i)^2/(1+\hat\lambda\hat\xi_i)\}. $$ Then
\begin{eqnarray}
\hat\lambda&=&\Bigl(\sum_{i=1}\hat\xi_i^2\Bigr)^{-1}\sum_{i=1}^n\hat\xi_i \{1+(\hat\lambda\hat\xi_i)^2/(1+\hat\lambda\hat\xi_i)\}\nonumber\\ &=&\Bigl(\sum_{i=1}\hat\xi_i^2\Bigr)^{-1}\sum_{i=1}^n\hat\xi_i+ \Bigl(\sum_{i=1}\hat\xi_i^2\Bigr)^{-1}\sum_{i=1}^n\hat\lambda^2\hat\xi_i^3/(1+\hat\lambda\hat\xi_i).\label{th5} \end{eqnarray} Applying Taylor's expansion to $\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i)$ leads to \begin{eqnarray}
\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i) &=&\sum_{i=1}^n \Big\{\hat\lambda\hat\xi_i-(\hat\lambda\hat\xi_i)^2/2+\frac{(\hat\lambda\hat\xi_i)^3} {3(1+c_{i}\hat\lambda\hat\xi_i)^3}\Big\}\nonumber\\ &=&\hat\lambda\sum_{i=1}^n\hat\xi_i-\hat\lambda\Bigl(\sum_{i=1}^n\hat\xi_i^2\Bigr)\hat\lambda/2 +\hat\lambda^3\sum_{i=1}^n\frac{\hat\xi_i^3}{3(1+c_{i}\hat\lambda\hat\xi_i)^3}, \label{eqk3} \end{eqnarray} where $c_{i}\in [0, 1]$. By Lemma~\ref{le2} and Lemma~\ref{le4}(ii), we obtain that \begin{eqnarray}
n^{-1}\sum_{i=1}^n\hat\xi_i^2&=&n^{-1}\sum_{i=1}^n\tilde\xi_i^2 +n^{-1}\sum_{i=1}^n(\hat\xi_i^2-\tilde\xi_i^2) \nonumber\\
&=&E|\tilde\xi_1|^2+o_{P}(1), \label{Varianceapp} \end{eqnarray}
which, combined with $\text{Var}(\tilde\xi_1)=E|\tilde\xi_1|^2-(E\tilde\xi_1)^2$
and $(E\tilde\xi_1)^2=O(n^{-1})$ in Lemma~\ref{le4}(iii), yields that
\begin{equation}\label{k2}
n^{-1}\sum_{i=1}^n\hat\xi_i^2= \text{Var}(\tilde\xi_1) +o_P(1).
\end{equation} This, combined with Lemmas~\ref{le4} and~\ref{le6}, implies that \begin{equation}\label{eqw1}
\Bigl|\hat\lambda^3\sum_{i=1}^n\frac{\hat\xi_i^3}{3(1+c_{i}\hat\lambda\hat\xi_i)^3}\Bigr|
\leq O_{P}(1)|\hat\lambda|^3\max_{1\leq i\leq n}|\hat\xi_i|\sum_{i=1}^n\hat\xi_i^2 =O_{P}(n^{-3/2}n^{1/\gamma}n)=o_{P}(1); \end{equation}
$$\Bigl|\Bigl(\sum_{i=1}\hat\xi_i^2\Bigr)^{-1}\sum_{i=1}^n\hat\lambda^2\hat\xi_i^3/(1+\hat\lambda\hat\xi_i)\Bigr|\leq O_{P}(1)\hat\lambda^2\max_{1\leq i\leq n}|\hat\xi_i|=O_{P}(n^{1/\gamma-1}).$$ Let $\bar\xi_n=n^{-1}\sum_{i=1}^{n}\hat\xi_i$. Then, by Lemmas~\ref{le4}-\ref{le6}, we have
$|\bar\xi_n|=O_{P}(n^{-1/2})$
and $|\hat\lambda|=O_{P}(n^{-1/2})$.
Then, it follows from \eqref{th5} and~\eqref{k2} that $$\hat\lambda=\bar\xi_n/\text{Var}(\tilde\xi_1)+o_{P}(n^{-1/2}).$$ By \eqref{eqk3}, \eqref{k2} and \eqref{eqw1}, we have $$ 2\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i)=2n\hat\lambda\bar\xi_n -n\text{Var}(\tilde\xi_1)\hat\lambda^2+o_{P}(1).$$ Hence, \begin{eqnarray*}
R_{n,1}=2\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i)=n\bar\xi_n^2/\text{Var}(\tilde\xi_1)+o_{P}(1).\nonumber \end{eqnarray*} Denoted by $\bar\xi_n^{*}=n^{-1}\sum_{i=1}^n\tilde\xi_i$. Applying Lemma~\ref{le2}, we obtain that \begin{eqnarray}
R_{n,1}=2\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i)=n|\bar\xi_n^{*}|^{2}/\text{Var}(\tilde\xi_1)+o_{P}(1).\label{BAHP} \end{eqnarray}
Since $\{\tilde\xi_i\}_{i=1}^n$ are iid and $E|\tilde\xi_n|^{\gamma}=O(1)$ for $\gamma>2$ in Lemma~\ref{le4}(ii), by the Lindeberg-Feller central limit theorem, we establish that $$\sqrt{n}\big(\bar\xi^{*}_{n}-E\tilde\xi_1\big)/\text{Var}(\tilde\xi_1) =n^{-1/2}\sum_{i=1}^{n}(\tilde\xi_i-E\tilde\xi_1)/\text{Var}(\tilde\xi_1) \to {\mathcal N}(0,1).$$
Under $H_{a,n}^{(1)}$, we know from Lemma~\ref{le4}(iii) that
$\sqrt{n}E\tilde\xi_1/\sqrt{\text{Var}(\tilde\xi_1)}\to a $.
Therefore,
$$\sqrt{n}\bar\xi^{*}_{n}/\sqrt{\text{Var}(\tilde\xi_1)}\to {\mathcal N}(a,1).$$ Then, by \eqref{BAHP}, \begin{equation}\label{ksd1}
R_{n,1}=2\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i) \to \chi_{1}^2(a^2). \end{equation}
Case (ii): $a=\infty$.
Let $\hat\lambda_*=n^{-1/2}\text{sgn}(E\tilde\xi_i)$. By Lemma~\ref{le4}(ii), $\max_{1\leq i\leq n}\hat\xi_i=o_{P}(n^{1/\gamma})$ for $\gamma>2$. Then \begin{equation}\label{eqk1} \max_{1\leq i\leq n}\hat\lambda_*\hat\xi_i=o_{P}(1). \end{equation}
Since $\hat\lambda=\arg\max_{\lambda}\sum_{i=1}^n\log(1+\lambda\hat\xi_i),$
we have $$
R_{n,1}=2\sum_{i=1}^n\log(1+\hat\lambda\hat\xi_i)
\geq 2\sum_{i=1}^n\log(1+\hat\lambda_{*}\hat\xi_i). $$ Then, using \eqref{eqk1} and Taylor's expansion, we establish that \begin{eqnarray}\label{eqfa1} R_{n,1}&\ge& 2\sum_{i=1}^n\hat\lambda_{*}\hat\xi_i -\sum_{i=1}^n\hat\lambda_{*}^2\hat\xi_i^2/(1+c_{i}\hat\lambda_{*}\hat\xi_i)^{2}\nonumber\\ &\geq&2\sum_{i=1}^n\hat\lambda_{*}\hat\xi_i -2\sum_{i=1}^n\hat\lambda_{*}^2\hat\xi_i^2\{1+o_P(1)\}\nonumber\\ &=&2n^{-1/2}\sum_{i=1}^n\hat\xi_i\text{sgn}(E\tilde\xi_i) -2n^{-1}\sum_{i=1}^n\hat\xi_i^2\{1+o_P(1)\}, \end{eqnarray} where $c_i\in [0,1]$. By Lemma~\ref{le4}(iii), we have
$E\tilde\xi_1^2=O(1)$.
This, combined with $a_{n}\to \infty$ and \eqref{Varianceapp},
produces that $n^{-1}\sum_{i=1}^n\hat\xi_i^2=E\tilde\xi_1^2+o_{P}(1)=o_{P}(a_n)$. Hence, \begin{equation*} R_{n,1}\ge 2n^{-1/2}\sum_{i=1}^n\hat\xi_i\text{sgn}(E\tilde\xi_i)+o_P(a_n). \end{equation*} Using Lemma~\ref{le2}, we get $$ n^{-1}\sum_{i=1}^n\hat\xi_i\text{sgn}(E\tilde\xi_i)=n^{-1}\sum_{i=1}^n\text{sgn}(E\tilde\xi_i)\tilde\xi_i +o_{P}(a_nn^{-1/2})
=|E\tilde\xi_i|+o_{P}(a_nn^{-1/2}). $$ Then \begin{equation}\label{eqka1}
R_{n,1}\ge 2\sqrt{n}|E\tilde\xi_i|+o_P(a_n). \end{equation} By Lemma~\ref{le4}(i)-(iii) and condition A3, we know that, for large $n$, $
|E\tilde\xi_i|> 0.5\sqrt{c_1} |a_n|n^{-1/2}. $ This, together with \eqref{eqka1} and $a_n\to \infty$,
leads to \begin{equation}\label{eqfa3}
P(R_{n,1}\to \infty)\to 1.
\end{equation}
\textbf{Proofs of Theorems \ref{th2}-\ref{th2a}}. Since Theorem~\ref{th2}
can be shown in the same way as Theorem~\ref{th2a}, we only prove Theorem~\ref{th2a}. The asymptotic results for $R_{n,2}$ follow along the same line as that for Theorem~\ref{th1a} by replacing $\hat\xi_i$ and $\tilde\xi_i$ with $\hat\eta_i$ and $\tilde\eta_i$, respectively. Since $R_{n,3}=R_{n,2}$, we complete the proof of Theorem~\ref{th2a}.
$\diamond$
{\bf Proof Lemma~\ref{le1}.} (i) We first show that \begin{eqnarray}
\max_{1\leq i\leq n}\|\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}-\mbox{\boldmath{$\Sigma$}}_{A}\|_2=O_{P}(\kappa n^{-1/2}).\label{Matrixapp} \end{eqnarray} Put $\mbox{\boldmath{$\Sigma$}}_{n}=(n-1)^{-1}\sum_{i=1}^n\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top$. By condition A2(iii) and the inequality
$(E|b|^2)^{1/2}\le (E|b|^{\gamma})^{1/\gamma}$
with $b=\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_2^2$ for $\gamma>2$, we have
$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_2^4\leq \{E\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_2^{2\gamma}\}^{2/\gamma}=O(\kappa^2)$. Then
\begin{eqnarray}
E\Vert \mbox{\boldmath{$\Sigma$}}_{n}-\mbox{\boldmath{$\Sigma$}}_{A}\Vert_{2}^{2} &\leq& \text{Trace}\{E(\mbox{\boldmath{$\Sigma$}}_{n}-\mbox{\boldmath{$\Sigma$}}_{A})(\mbox{\boldmath{$\Sigma$}}_{n}-\mbox{\boldmath{$\Sigma$}}_{A})\}\nonumber\\ &=&(n-1)^{-2}\sum_{k=1}^{\kappa}\sum_{l=1}^{\kappa}\sum_{i=1}^n[E\{\Pi_{k}^2(\mathbf X_i)\Pi_{l}^2(\mathbf X_i)\}-\Sigma_{A,kl}^2]\nonumber\\
&=& n(n-1)^{-2}\{E\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_2^4-\text{Trace}(\mbox{\boldmath{$\Sigma$}}_{A}^2)\}\nonumber\\ &=&O(\kappa^2/n),\label{Mat-Ex} \end{eqnarray} where $\Sigma_{A,kl}=E\{\Pi_{k}^2(\mathbf X_i)\Pi_{l}^2(\mathbf X_i)\}.$ Hence, \begin{equation}\label{eqjl1}
\|\mbox{\boldmath{$\Sigma$}}_{n}-\mbox{\boldmath{$\Sigma$}}_{A}\|_2=O_P(\kappa/\sqrt{n}) \ \ \,\mbox{\rm and}\ \ \,
\|\mbox{\boldmath{$\Sigma$}}_{n}\|_2\le \|\mbox{\boldmath{$\Sigma$}}_{A}\|_2+O_P(\kappa/\sqrt{n}). \end{equation}
Furthermore,
$$\|\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}-\mbox{\boldmath{$\Sigma$}}_{A}\|_2\le
\|\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}-\mbox{\boldmath{$\Sigma$}}_{n}\|_2+\|\mbox{\boldmath{$\Sigma$}}_{n}-\mbox{\boldmath{$\Sigma$}}_{A}\|_2=
(n-1)^{-1}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2+O_{P}(\kappa/\sqrt{n}).$$ Note that, by condition A2,
$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X)\|_2^{2\gamma}=O(\kappa^\gamma)$.
It follows from Markov's inequality that \begin{eqnarray}
\max_{1\leq i\leq n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_2^{2\gamma}\leq n n^{-1}\sum_{i=1}^n\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_2^{2\gamma}=O_{P}(n\kappa^\gamma).\label{maxo} \end{eqnarray} That is,
$\max_{1\leq i\leq n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_2^2=O_{P}(n^{1/\gamma}\kappa)$. Then
$$\max_{1\leq i\leq n}\|\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}-\mbox{\boldmath{$\Sigma$}}_{A}\|_2=O_{P}\{(n^{1/\gamma-1}+n^{-1/2})\kappa\}=O_{P}(\kappa n^{-1/2}).$$
That is, \eqref{Matrixapp} holds. Note that $E\mathrm{y}_{i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top E\{\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\}=\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1/2}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\mathrm{y}_{1}\}\|_{2}^2$. It follows from \eqref{9a} that \begin{eqnarray*}
E|\tilde\varepsilon_{2,i}|^2=E|\mathrm{y}_{i}-\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top E\{\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\}|^2
=E|\mathrm{y}_1|^{2}
-\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1/2}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\mathrm{y}_{1}\}\|_{2}^2. \end{eqnarray*} This, together with condition A2,
implies that $\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1/2}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\mathrm{y}_{1}\}\|_{2}^2\leq E |\mathrm{y}_{1}|^{2}=O(1)$.
Then, with $\lambda_{\max}(\mbox{\boldmath{$\Sigma$}}_A)=O(1)$ and $\lambda_{\min}(\mbox{\boldmath{$\Sigma$}}_{A})>0$ in condition A2,
it is easy to see that \begin{eqnarray}
&& \|E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\mathrm{y}_{1}\}\|_{2} =O(1)\,\ \mbox{\rm and}\,\
\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\mathrm{y}_{1}\}\|_{2} =O(1).\label{EPM} \end{eqnarray}
Denoted by $\mbox{\boldmath{$\mu$}}_{k}=\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k-E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\}$.
Applying Cauchy-Schwarz's inequality,~\eqref{Matrixapp},~\eqref{maxo} and~\eqref{EPM}, we obtain that \begin{eqnarray*}
|\tilde\varepsilon_{2,i}-\hat\varepsilon_{2,i}|
&\leq&\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_2
\big\|\frac{1}{n-1}\sum_{k=1(\neq i)}^n(\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)})^{-1}\mbox{\boldmath{$\mu$}}_{k}
+\{(\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)})^{-1}-\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\}\big\|_2\\
&\leq&O_P(n^{\frac{1}{2\gamma}}\sqrt{\kappa})\big\|\frac{1}{n-1}\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\mu$}}_{k}\big\|_{2}+O_P(n^{\frac{1}{2\gamma}}\sqrt{\kappa})\|(\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)})^{-1}(\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}-\mbox{\boldmath{$\Sigma$}}_{A})\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\|_{2}\\
&=&O_P(n^{\frac{1}{2\gamma}}\sqrt{\kappa})\big\|\frac{1}{n-1}\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\mu$}}_{k}\big\|_{2}+O_P(\kappa^{3/2} n^{\frac{1}{2\gamma}-\frac{1}{2}}), \end{eqnarray*} uniformly for $1\leq i\leq n$. Note that
$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\|_2^{2\gamma}=O(\kappa^\gamma)$ and
$E|\mathrm{y}_1|^{2\gamma}=O(1)$ (condition A2), it follows that \begin{eqnarray}
E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\|_{2}^2\leq \sqrt{E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\|_{2}^4E|\mathrm{y}_1|^4}=O(\kappa).\label{TMSC} \end{eqnarray} Then \begin{equation}\label{TMSCa}
E\big\|\sum_{k=1}^n\mbox{\boldmath{$\mu$}}_{k}\big\|_2^2
=\sum_{k=1}^nE\|\mbox{\boldmath{$\mu$}}_{k}\|_{2}^2 =O(n\kappa). \end{equation} Recalling that
$E|\mathrm{y}_1|^{2\gamma}=O(1)$, we have
$\max_{1\leq i\leq n}|\mathrm{y}_i|^{2\gamma}=O_{P}(n)$. This, together with \eqref{maxo}, yields that \begin{eqnarray}
\max_{1\leq i\leq n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mathrm{y}_i\|_{2}\leq \max_{1\leq i\leq n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2} \max_{1\leq i\leq n}|\mathrm{y}_i| =O_{P}(n^{1/\gamma}\sqrt{\kappa}).\label{MAXOC} \end{eqnarray} Then, applying \eqref{TMSC}-\eqref{MAXOC} and the inequality \begin{eqnarray*}
\max_{1\leq i\leq n}\big\|\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\mu$}}_{k}\big\|_2\leq\big\|\sum_{k=1}^n\mbox{\boldmath{$\mu$}}_{k}\big\|_2+\max_{1\leq i\leq n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mathrm{y}_i\|_{2}+E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\|_2, \end{eqnarray*} we obtain that
\begin{eqnarray}
\max_{1\leq i\leq n}\big\|\frac{1}{n-1}\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\mu$}}_{k}\big\|_2 =O_{P}(\sqrt{\kappa/n}+n^{1/\gamma-1}\sqrt{\kappa})\label{MAXQ1}. \end{eqnarray}
Thus, $\max_{1\leq i\leq n}|\tilde\varepsilon_{2,i}-\hat\varepsilon_{2,i}|=O_{P}(\kappa^{3/2}n^{\frac{1}{2\gamma}-\frac{1}{2}})
=O_{P}(n^{\frac{1}{2\gamma}+\frac{3}{4r+2}-\frac{1}{2}})$. Similarly, we can also show that $\max_{1\leq i\leq n}|\tilde\varepsilon_{1,i}-\hat\varepsilon_{1,i}| =O_{P}(n^{\frac{1}{2\gamma}+\frac{3}{4r+2}-\frac{1}{2}})$.
(ii)
Let $\hat\varepsilon_{2,i}^*=\frac{1}{n-1}\sum_{k=1}^n\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k$. Then
$$n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i}|^2\leq 2n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}^*-\tilde\varepsilon_{2,i}|^2+
2n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}-\hat\varepsilon_{2,i}^*|^2.$$ By the inequality $(a+b)^2\leq 2(a^2+b^2)$, we have \begin{eqnarray*}
n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}-\hat\varepsilon_{2,i}^*|^2
&\leq& 2n^{-1}\sum_{i=1}^{n}\bigl|
\frac{1}{n-1}\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\bigl[\{\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}\}^{-1}-\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\bigr]\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\bigr|^2\\
&&+\frac{2}{n(n-1)^2}\sum_{i=1}^{n}\bigl|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mathrm{y}_i\bigr|^2\\ &\equiv&r_{n,1}+r_{n,2}. \end{eqnarray*} Applying~\eqref{EPM} and ~\eqref{MAXQ1}, we obtain that
$$\max_{1\leq i\leq n}\bigl\|\frac{1}{n-1}\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\mu$}}_k+E\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\bigr\|_{2}=O_{P}(1).$$ This, combined with the Cauchy-Schwarz inequality,~\eqref{Matrixapp} and~\eqref{eqjl1}, yields that \begin{eqnarray*}
r_{n,1}&\leq& 2n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2\cdot\|\{\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}\}^{-1}-\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\|_{2}^{2}\cdot
\bigl\|\frac{1}{n-1}\sum_{k=1(\neq i)}^n\mbox{\boldmath{$\mu$}}_k+E\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\bigr\|_{2}^2\\
&\leq& O_{P}(1)\frac{1}{(n-1)^2n}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2\cdot\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4\cdot\|\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\|_{2}^2\cdot
\|\{\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)}\}^{-1}\|_{2}^2\\
&=&O_{P}(n^{-2})n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^6 \end{eqnarray*} and \begin{eqnarray*}
r_{n,2}&\leq& \frac{2}{n(n-1)^2}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2\cdot \|\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\|_{2}^2\cdot\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mathrm{y}_i\|_2^2=
O_{P}(n^{-2})n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4\mathrm{y}_i^2. \end{eqnarray*} By condition A2, we have
$\max_{1\leq i\leq n}|\mathrm{y}_i|=O_{P}(n^{1/(2\gamma)})$
and
$$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\|_{2}^4\leq \{E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_{1})\|_{2}^{2\gamma}\}^{2/\gamma}=O(\kappa^2).$$
Then, applying~\eqref{maxo},
we establish that
$$n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^6
\leq \max_{1\leq i\leq n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2 n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4 =O_{P}(\kappa^3 n^{1/\gamma})$$ and
$$n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4\mathrm{y}_i^2\leq \max_{1\leq i\leq n}\mathrm{y}_i^2 n^{-1}\sum_{i=1}^{n}\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4=O_{P}(\kappa^2n^{1/\gamma}).$$ Thus,
$n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}-\hat\varepsilon_{2,i}^*|^2 =O_{P}(\kappa^3 n^{1/\gamma-2})$. It follows that \begin{equation}\label{eqq1}
n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i}|^2\leq 2n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}^*-\tilde\varepsilon_{2,i}|^2+ O_{P}(\kappa^3 n^{1/\gamma-2}). \end{equation} Put
$\mathbf v=\mbox{\boldmath{$\Sigma$}}_{A}^{-1}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\}$. Then \begin{eqnarray}
\hat\varepsilon_{2,i}^*-\tilde\varepsilon_{2,i} &=&\frac{1}{n-1}\sum_{k=1}^n\bigl[\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\mbox{\boldmath{$\mu$}}_{k} +\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\{\mbox{\boldmath{$\Sigma$}}_{n}^{-1}-\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\}\bigr]\nonumber\\ &=&\frac{1}{n-1}\sum_{k=1}^n\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\mbox{\boldmath{$\mu$}}_{k} +\frac{1}{n-1}\sum_{k=1}^n\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mbox{\boldmath{$\Sigma$}}_{n}^{-1}(\mbox{\boldmath{$\Sigma$}}_{A}-\mbox{\boldmath{$\Sigma$}}_{n})\mathbf v\nonumber\\ &\equiv&I_{i,1}+I_{i,2}. \label{eqp0} \end{eqnarray} By the Cauchy-Schwzarz inequality,~\eqref{eqjl1}, ~\eqref{EPM},~\eqref{TMSCa}, $\lambda_{\max}(\mbox{\boldmath{$\Sigma$}}_{A})=O(1)$, and $\lambda_{\min}(\mbox{\boldmath{$\Sigma$}}_A)>0$ in condition A2, it holds that \begin{equation}\label{eqp1}
n^{-1}\sum_{i=1}^nI_{i,2}^2\leq \frac{(n-1)}{n} \|\mbox{\boldmath{$\Sigma$}}_{n}\|_{2}\cdot\|\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\|_{2}^2\cdot
\|(\mbox{\boldmath{$\Sigma$}}_{A}-\mbox{\boldmath{$\Sigma$}}_{n})\|_{2}^2\cdot\|\mathbf v\|_{2}^2=O_{P}(\kappa^2/n); \end{equation} \begin{equation}
n^{-1}\sum_{i=1}^nI_{i,1}^2
\leq \frac{(n-1)}{n}\|\mbox{\boldmath{$\Sigma$}}_{n}\|_{2}\cdot \|\mbox{\boldmath{$\Sigma$}}_{n}^{-1}\|_{2}^2\cdot
\frac{1}{(n-1)^2}\|\sum_{k=1}^n\mbox{\boldmath{$\mu$}}_{k}\|_{2}^2 =O_{P}(\kappa/n). \label{eqp2} \end{equation}
Naturally, combining \eqref{eqp0}-\eqref{eqp2} leads to
$$n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}^*-\tilde\varepsilon_{2,i}|^2 \le 2 n^{-1}\sum_{i=1}^n I_{i,1}^2 + 2 n^{-1} \sum_{i=1}^n I_{i,2}^2 =O_{P}(\kappa^2/n),$$ which, together with \eqref{eqq1}, yields that
$n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i}|^2
=O_P(n^{2/(2r+1)-1}).
$
Along the same line, we can also show that $n^{-1}\sum_{i=1}^{n}|\hat\varepsilon_{1,i}-\tilde\varepsilon_{1,i}|^2
=O_P(n^{2/(2r+1)-1}).
$
$\diamond$\\
\textbf{Proof of Lemma~\ref{le2}}. (i) By the definitions of $\hat\xi_i$ and $\tilde\xi_i$, we have $$n^{-1}\sum_{i=1}^n(\hat\xi_i-\tilde\xi_i) = n^{-1}\sum_{i=1}^n(\hat\varepsilon_{1,i}^2-\tilde\varepsilon_{1,i}^2) -n^{-1}\sum_{i=1}^n(\hat\varepsilon_{2,i}^2-\tilde\varepsilon_{2,i}^2).$$ We will show each term on the righthand side of the above equation is $o_P(n^{-1/2}).$ In the following we only show this for the 2nd term, since it can be done similarly for the 1st term.
Notice that \begin{eqnarray*}
|\hat\varepsilon_{2,i}|^2-|\tilde\varepsilon_{2,i}|^2
=(\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i})^2+2\tilde\varepsilon_{2,i}(\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i}).
\label{DIFFE} \end{eqnarray*} By Lemma~\ref{le1},
$n^{-1}\sum_{i=1}^n|\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i}|^2=o_{P}(n^{-1/2})$. Then it suffices to show that \begin{eqnarray}
n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}(\tilde\varepsilon_{2,i}-\hat\varepsilon_{2,i})=o_{P}(n^{-1/2}). \label{copr} \end{eqnarray}
The reader who does not wish to study the lengthy proof may skip to the proof of (ii). Let
$\mbox{\boldmath{$\delta$}}_{i}=\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)[
\mathrm{y}_i-\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top E\{\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\}]
=\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i).$ Then
$E\mbox{\boldmath{$\delta$}}_{i}^\top\mbox{\boldmath{$\delta$}}_{j}=0$ for $i\neq j$, and \begin{equation}\label{eqx1}
E\|\mbox{\boldmath{$\delta$}}_{i}\|_{2}^2\leq 2E\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mathrm{y}_i\|_{2}^2+2\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\}\|_{2}^2\cdot E\{\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2\cdot\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2\}. \end{equation} Note that
$E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4\leq \{E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^{2\gamma}\}^{2/\gamma}=O(\kappa^2)$ and $\lambda_{\min}(\mbox{\boldmath{$\Sigma$}}_{A})>0$. It follows from
\eqref{EPM},~\eqref{TMSC} and \eqref{eqx1}
that
$E\|\mbox{\boldmath{$\delta$}}_{i}\|_{2}^2=O(\kappa^2)$. Hence,
\begin{equation}\label{L2Vaa}
\big\|n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\big\|_{2}=O_{P}(\kappa n^{-1/2}).
\end{equation}
Using the Cauchy-Schwarz inequality,~\eqref{EPM},
$\lambda_{\min}(\mbox{\boldmath{$\Sigma$}}_{A})>0$ and $E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^{2\gamma}=O(\kappa^{\gamma})$ for $\gamma>2$, we establish that \begin{eqnarray*}
E|\tilde\varepsilon_{2,i}|\cdot\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2&\leq&E\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2|\mathrm{y}_i|
+\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1}E\{\mbox{\boldmath{$\Pi$}}(\mathbf X_1)\mathrm{y}_1\}\|_{2}\cdot E\{\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2\cdot\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}\}\\
&\leq&O(1)\sqrt{E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^4E\mathrm{y}_i^2}+O(1)E\|\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^3\\ &=&O(\kappa^{3/2}). \end{eqnarray*}
Thus, \begin{eqnarray}
n^{-1}\sum_{i=1}^n|\tilde\varepsilon_{2,i}|\cdot\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\|_{2}^2=O_{P}(\kappa^{3/2}).\label{L2V} \end{eqnarray} Denoted by $\mathbf F_{i}=\mbox{\boldmath{$\Sigma$}}_{A}^{1/2}\{(\mbox{\boldmath{$\Sigma$}}_{n}^{(-i)})^{-1}-\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\}\mbox{\boldmath{$\Sigma$}}_{A}^{1/2},$ $\mathbf F=\mbox{\boldmath{$\Sigma$}}_{A}^{1/2}\{\mbox{\boldmath{$\Sigma$}}_{n}^{-1}-\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\}\mbox{\boldmath{$\Sigma$}}_{A}^{1/2}$, $\mathbf G_{i}=(n-1)^{-1}\sum_{k=1(\neq i)}^n\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k,$
$\mathbf G=(n-1)^{-1}\sum_{k=1}^n\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k$,
$\mathbf L_i=\frac{1}{n-1}\sum_{k=1(\neq i)}^n
[\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k
-E\{\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\}],$
and $\mathbf L=\frac{1}{n-1}\sum_{k=1}^n[\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k-E\{\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_k)\mathrm{y}_k\}]$. Let
$ I_1=n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mathbf F\mathbf G,$
$I_2=n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\{-(\mathbf F_i-\mathbf F)(\mathbf G_i-\mathbf G)\},$
$I_3=n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mathbf F_i(\mathbf G_i-\mathbf G),$
$I_4=n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top(\mathbf F_i-\mathbf F)\mathbf G_i,$
$I_5=n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top(\mathbf L_i-\mathbf L),$ and $I_6=n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mathbf L.$
\noindent Then it can be rewritten that \begin{eqnarray*}
n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}(\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i})
&=&n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mathbf F_{i}\mathbf G_{i}
+n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\mathbf L_i
= \sum_{j=1}^6 I_j.
\end{eqnarray*} Hence, by \eqref{copr},
it suffices to show that $I_j=o_P(n^{-1/2})$.
Applying the Cauchy-Schwarz inequality and \eqref{L2Vaa}-\eqref{L2V}, we establish that $$
|I_{1}|\leq \Big\|n^{-1}\sum_{i=1}^n\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\Big\|_{2}\|\mathbf F\|_{2}\|\mathbf G\|_{2}
=O_{P}(\kappa /\sqrt{n})\|\mathbf F\|_{2}\|\mathbf G\|_{2}; $$ \begin{eqnarray*}
|I_3|&\leq& \frac{1}{n(n-1)}\sum_{i=1}^n\big\|\tilde\varepsilon_{2,i}\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\big\|_{2}\cdot
\|\mathbf F_i\|_{2}\cdot\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\mathrm{y}_i\|_2\\
&\leq&\max_{1\leq i\leq n} \|\mathbf F_i\|_{2}|\mathrm{y}_i|
\frac{1}{n(n-1)}\sum_{i=1}^n|\tilde\varepsilon_{2,i}|\cdot\big\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)\big\|_{2}^2
\\
&=&O_{P}(\kappa^{3/2}/n) \max_{1\leq i\leq n}|\mathrm{y}_i| \max_{1\le i\le n}\|\mathbf F_i\|_{2}. \end{eqnarray*}
Combining~\eqref{Matrixapp},~\eqref{eqjl1}, $E|\mathrm{y}_i|^{\gamma}=O(1)$, $\lambda_{\min}(\mbox{\boldmath{$\Sigma$}}_{A})>0$ and $\lambda_{\max}(\mbox{\boldmath{$\Sigma$}}_{A})=O(1)$, we arrive at $\max_{1\leq i\leq n}|\mathrm{y}_i|=O_{P}(n^{\frac{1}{2\gamma}})$,
$$\max_{1\leq i\leq n}\|\mathbf F_i\|_{2}\leq \max_{1\leq i\leq n}\|\mbox{\boldmath{$\Sigma$}}_{A}^{1/2}\|_{2}^{2}\cdot\|\mbox{\boldmath{$\Sigma$}}^{(-i)}_{n}\|_{2}\cdot\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\|_{2}\cdot
\|\mbox{\boldmath{$\Sigma$}}^{(-i)}_{n}-\mbox{\boldmath{$\Sigma$}}_{A}\|_{2}=O_{P}(\kappa/\sqrt{n}),$$ and
$\|\mathbf F\|_{2}\leq\|\mbox{\boldmath{$\Sigma$}}_{A}^{1/2}\|_{2}^{2}\cdot\|\mbox{\boldmath{$\Sigma$}}_{n}\|_{2}\cdot\|\mbox{\boldmath{$\Sigma$}}_{A}^{-1}\|_{2}\cdot
\|\mbox{\boldmath{$\Sigma$}}_{n}-\mbox{\boldmath{$\Sigma$}}_{A}\|_{2}=O_{P}(\kappa/\sqrt{n}).$
Using \eqref{EPM} and \eqref{TMSC},
we get \begin{eqnarray*}
E\|\mathbf G\|_{2}^2&=& \frac{1}{(n-1)^2}\sum_{i=1}^n\sum_{j=1}^n E\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_i)^\top\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_j)\mathrm{y}_i\mathrm{y}_j\\
&=&\frac{n}{(n-1)^2}E\|\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_j)\mathrm{y}_i\|_{2}^2+\frac{n}{(n-1)}
\|E\widetilde\mbox{\boldmath{$\Pi$}}(\mathbf X_j)\mathrm{y}_i\|_{2}^2\\ &=&O(1). \end{eqnarray*} Thus,
$\|\mathbf G\|_{2}=O_P(1)$.
Since $\gamma>2$ and $r>1.5$, we have $I_{1}=O_{P}(\kappa^2/n)
=o_{P}(n^{-1/2})$ and $I_{3} =O_{P}(n^{\frac{1}{2\gamma}}\kappa^{5/2}n^{-3/2})
=o_{P}(n^{-1/2})$.
Similarly, we can show that $I_j=o_P(n^{-1/2})$ for $j=2,4,5,6$.
(ii) Notice that $$\hat\xi_i^2-\tilde\xi_i^2
=(\hat\xi_i-\tilde\xi_i)^2+2\tilde\xi_i(\hat\xi_i-\tilde\xi_i). $$ It follows from the Cauchy-Schwarz inequality that \begin{eqnarray*}
\bigl|n^{-1}\sum_{i=1}^n( \hat\xi_i^2-\tilde\xi_i^2)\bigr| &\leq& n^{-1}\sum_{i=1}^n(\hat\xi_i-\tilde\xi_i)^2
+2|\tilde\xi_i|\cdot|\hat\xi_i-\tilde\xi_i|\\ &\leq&
2n^{-1}\Bigl(\sum_{i=1}^n|\tilde\xi_i|^2\sum_{i=1}^n|\hat\xi_i-\tilde\xi_i|^2\Bigr)^{1/2} +n^{-1}\sum_{i=1}^n(\hat\xi_i-\tilde\xi_i)^2. \end{eqnarray*} By Jensen's inequality and
condition A3, we obtain that \begin{eqnarray}
E|\tilde\xi_1|^\gamma=E|\tilde\varepsilon_{1,1}^2-\tilde\varepsilon^2_{2,1}|^\gamma\leq 2^{\gamma-1}E|\tilde\varepsilon_{1,1}|^{2\gamma}+2^{\gamma-1}E|\tilde\varepsilon_{2,1}|^{2\gamma}=O(1).\label{MSAPE} \end{eqnarray} Thus, by condition A3 and Markov's inequality,
$n^{-1}\sum_{i=1}^n\tilde\xi_i^2=O_{P}(1)$. To complete the proof, it's sufficient to show that
$n^{-1}\sum_{i=1}^n |\hat\xi_i-\tilde\xi_i|^2=o_{P}(1)$.
By the definitions of $\hat\xi_i$, $\tilde{\xi}_i$, and Jessen's inequality, we establish that \begin{eqnarray*}
n^{-1}\sum_{i=1}^n(\hat\xi_i-\tilde\xi_i)^2 &=& n^{-1}\sum_{i=1}^n \bigl\{(\tilde\varepsilon_{1,i}-\hat\varepsilon_{1,i})^2-(\tilde\varepsilon_{2,i}-\hat\varepsilon_{2,i})^2\\ && +2\tilde\varepsilon_{1,i}(\hat\varepsilon_{1,i}-\tilde\varepsilon_{1,i}) -2\tilde\varepsilon_{2,i}(\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i})\bigr\}^2\\ &\leq&\frac{4}{n}\sum_{i=1}^n\sum_{j=1}^2(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^4 +\frac{16}{n}\sum_{i=1}^n\sum_{j=1}^2 \tilde\varepsilon_{j,i}^2(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^2. \end{eqnarray*} Applying Lemma~\ref{le1}, we establish that \begin{eqnarray*} n^{-1}\sum_{i=1}^n\sum_{j=1}^2(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^4 &\leq& \max_{1\leq i\leq n,1\leq j\leq 2}(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^2 n^{-1}\sum_{i=1}^n\sum_{j=1}^2(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^2\\ &=&O_{P}(n^{\frac{1}{\gamma}+\frac{5}{2r+1}-2}); \end{eqnarray*}
\begin{eqnarray*} n^{-1}\sum_{i=1}^n\sum_{j=1}^2 \tilde\varepsilon_{j,i}^2(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^2 &\leq& \max_{1\leq i\leq n, 1\leq j\leq 2}\tilde\varepsilon_{j,i}^2 n^{-1}\sum_{i=1}^n\sum_{j=1}^2(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^2\\ &=&\max_{1\leq i\leq n, 1\leq j\leq 2}\tilde\varepsilon_{j,i}^2 \, O_{P}(n^{\frac{2}{2r+1}-1}). \end{eqnarray*} By condition A3,
$E|\tilde\varepsilon_{2,1}|^{2\gamma}=O(1)$
for $\gamma>2$. Then, by the Markov inequality,
$$P(n^{-1}\max_{1\leq i\leq n}|\tilde\varepsilon_{2,i}|^{2\gamma}>M_n)
\le M_n^{-1}n^{-1}\sum_{i=1}^n E|\tilde\varepsilon_{2,i}|^{2\gamma}\to 0,$$ as $M_n\to \infty$.
Thus,
$\max_{1\leq i\leq n}|\tilde\varepsilon_{2,i}|^2=O_{P}(n^{1/\gamma})$. Similarly, $\max_{1\leq i\leq n}|\tilde\varepsilon_{1,i}|^2=O_{P}(n^{1/\gamma})$. Hence, \begin{eqnarray}
\max_{1\leq i\leq n, 1\leq j\leq 2}|\tilde\varepsilon_{j,i}|^2=O_{P}(n^{1/\gamma}).\label{MAXO} \end{eqnarray} Combining the above results with $\gamma>2$ and $r>1.5$ leads to $$ n^{-1}\sum_{i=1}^n(\hat\xi_i-\tilde\xi_i)^2 = O_{P}(n^{\frac{1}{\gamma}+\frac{5}{2r+1}-2})+O_{P}(n^{\frac{1}{\gamma}+\frac{2}{2r+1}-1}) =o_{P}(1). $$
$\diamond$\\
\textbf{Proof of Lemma~\ref{le4}}. (i) By Lemma~\ref{le2}, we have \begin{equation}\label{eqsat2} n^{-1}\sum_{i=1}^n(\tilde\xi_i-\hat\xi_i)=o_{P}(n^{-1/2})\,\,\ \mbox{\rm and}\,\,\ n^{-1}\sum_{i=1}^n(\hat\xi_i^2-\tilde\xi_i^2)=o_{P}(1). \end{equation} Let $\mbox{\boldmath{$\omega$}}_{1,n}=n^{-1}\sum_{i=1}^n(\hat\xi_i^2-\tilde\xi_i^2)/(E\hat\xi_i^2+E\tilde\xi_i^2)$. By condition A3, we have
$E\tilde\xi_i^2\geq \text{Var}(\tilde\xi_i)>c_1$.
This, combined with~\eqref{eqsat2}, ensures that
$$\mbox{\boldmath{$\omega$}}_{1,n}=o_{P}(1)\,\,\
\mbox{\rm and }\,\,\
\sup_{n}E|\mbox{\boldmath{$\omega$}}_{1,n}|\leq 1.
$$
Applying Theorem A (Serfling, 1980, page 14),
we obtain that $|E\mbox{\boldmath{$\omega$}}_{1,n}|\leq E|\mbox{\boldmath{$\omega$}}_{1,n}|\to 0$.
Then, it is easy to see that \begin{eqnarray}\label{eqsat4}
\ E\hat\xi_i^2/E\tilde\xi_i^2\to 1. \end{eqnarray} Let $\mathcal{X}_n=n^{-1}\sum_{i=1}^n (\hat\xi_i-E\tilde\xi_i)$ and $\mathcal{Y}_n=n^{-1}\sum_{i=1}^n (\tilde\xi_i-E\tilde\xi_i)$. Using the identity
$\mathcal{X}_n^2-\mathcal{Y}_n^2=(\mathcal{X}_n-\mathcal{Y}_n)^2+2\mathcal{Y}_n(\mathcal{X}_n-\mathcal{Y}_n)$, we get \begin{eqnarray} \mathcal{X}_n^2-\mathcal{Y}_n^2 =\bigl\{n^{-1}\sum_{i=1}^n (\hat\xi_i-\tilde\xi_i)\bigr\}^2 +2\mathcal{Y}_n n^{-1}\sum_{i=1}^n (\hat\xi_i-\tilde\xi_i\bigr). \label{eqll1} \end{eqnarray} By Markov's inequality and \eqref{MSAPE}, it holds that
$$P\Big(|\mathcal{Y}_n|>c_n n^{-1/2}\Big)\leq n^{-1}c_n^{-2}\sum_{i=1}^nE(\tilde\xi_i-E\tilde\xi_i)^2 \to 0 $$ for any $c_n\to\infty$. Hence, \begin{eqnarray}\label{eqsat6}
\mathcal{Y}_n =O_{P}(n^{-1/2}). \end{eqnarray} This, combined with~\eqref{eqsat2} and \eqref{eqll1}, yields that $$\mathcal{X}_n^2-\mathcal{Y}_n^2 =o_{P}(n^{-1}).$$ Define $\mbox{\boldmath{$\omega$}}_{2,n}=(\mathcal{X}_n^2-\mathcal{Y}_n^2)/(E\mathcal{X}_n^2+E\mathcal{Y}_n^2).$
Since $E\mathcal{Y}_n^2= n^{-1}\text{Var}(\tilde\xi_1)\geq n^{-1}c_1$,
we have
$\mbox{\boldmath{$\omega$}}_{2,n}=o_{P}(1)$ and $\sup_nE|\mbox{\boldmath{$\omega$}}_{2,n}|\leq 1$.
Hence,
$E \mbox{\boldmath{$\omega$}}_{2,n}=o(1)$.
Similar to~\eqref{eqsat4}, we get \begin{eqnarray}\label{eqsat7}
E\mathcal{X}_n^2/E\mathcal{Y}_n^2
\to 1. \end{eqnarray} Denoted by $\mbox{\boldmath{$\omega$}}_{3,n}=\{n\text{Var}(\tilde\xi_1)\}^{-1/2}\sum_{i=1}^n(\tilde\xi_i-\hat\xi_i)$. Then, by~\eqref{eqsat2} and $\text{Var}(\tilde\xi_1)>c_1$,
$\mbox{\boldmath{$\omega$}}_{3,n}=o_{P}(1)$.
Furthermore, \begin{eqnarray*}
E|\mbox{\boldmath{$\omega$}}_{3,n}|^2&=&\{\text{Var}(\tilde\xi_1)/n\}^{-1}E(\mathcal{X}_n-\mathcal{Y}_n)^2\\ &\leq&2(E\mathcal{Y}_n^2)^{-1}(E\mathcal{X}_n^2+E\mathcal{Y}_n^2)\\ &=&2+2E\mathcal{X}_n^2/E\mathcal{Y}_n^2. \end{eqnarray*} It follows from \eqref{eqsat7} that
$\sup_nE|\mbox{\boldmath{$\omega$}}_{3,n}|^2$ is bounded. Thus,
\begin{eqnarray}\label{eqsat5}
E\mbox{\boldmath{$\omega$}}_{3,n}
\to 0,\,\, \mbox{\rm or equivalently} \,\, \sqrt{n}(E\tilde\xi_1 - E\hat\xi_1)/\sqrt{\text{Var}(\tilde\xi_1)}\to 0.
\end{eqnarray} Since $\text{Var}(\tilde\xi_1)>c_1>0$, $E(\tilde\xi_i-\hat\xi_i) \to 0.$ This, combined with~\eqref{eqsat4}, yields that
$\text{\mbox{\rm Var}}(\hat\xi_i)- \text{\mbox{\rm Var}}(\tilde\xi_i)=o(1)$,
or equivalently
\begin{equation}\label{eqsat1}
\text{\mbox{\rm Var}}(\tilde\xi_i)=\sigma_{\xi}^2+o(1).
\end{equation}
(ii)
By~\eqref{MSAPE}, we obtain that $
\max_{1\leq i\leq n}|\tilde\xi_i|^\gamma\leq \sum_{i=1}^n|\tilde\xi_i|^\gamma=O_{P}(n), $ which implies that
$\max_{1\leq i\leq n}|\tilde\xi_i|=O_{P}(n^{1/\gamma})$. Using
Lemma~\ref{le1}(i) and the definitions of $\hat\xi_i$ and $\tilde\xi_i$, we obtain that \begin{eqnarray*}
|\hat\xi_i-\tilde\xi_i|
&=& \bigl| (\tilde\varepsilon_{1,i}-\hat\varepsilon_{1,i})^2-(\tilde\varepsilon_{2,i}-\hat\varepsilon_{2,i})^2 +2\tilde\varepsilon_{1,i}(\hat\varepsilon_{1,i}-\tilde\varepsilon_{1,i}) -2\tilde\varepsilon_{2,i}(\hat\varepsilon_{2,i}-\tilde\varepsilon_{2,i})
\bigr|\\ &\leq&2\max_{1\leq i\leq n, 1\leq j\leq 2}(\tilde\varepsilon_{j,i}-\hat\varepsilon_{j,i})^2
+4\max_{1\leq i\leq n, 1\leq j\leq 2}|\tilde\varepsilon_{j,i}(\hat\varepsilon_{j,i}-\tilde\varepsilon_{j,i})|\\ &=&O_{P}(n^{\frac{1}{\gamma}+\frac{3}{2r+1}-1})+
O_{P}(n^{\frac{1}{2\gamma}+\frac{3}{4r+2}-\frac{1}{2}})\max_{1\leq i\leq n, 1\leq j\leq 2}|\tilde\varepsilon_{j,i}|, \end{eqnarray*} which, combined with
\eqref{MAXO} and $r>1.5$,
yields that
$\max_{1\leq i\leq n}|\hat\xi_i-\tilde\xi_i|=O_{P}(n^{1/\gamma})$. Thus, $\max_{1\leq i\leq n}|\hat\xi_{i}|\leq \max_{1\leq i\leq n}|\tilde\xi_{i}|+\max_{1\leq i\leq n}|\hat\xi_i-\tilde\xi_i|=O_{P}(n^{1/\gamma})$. Note that $\tilde\xi_i$ are iid. It follows from \eqref{MSAPE} that $n^{-1}\sum_{i=1}^n\tilde\xi_i^2=E\tilde\xi_i^2+o_{P}(1)$.
(iii). Case 1: $|a|=+\infty$. By~\eqref{eqsat5} and~\eqref{eqsat1}, we have $$a_n^{-1}\sqrt{n}E(\hat\xi_i/\sigma_{\xi}-\tilde\xi_i/\sigma_{\xi})\to 0.$$
Under $H^{(1)}_{a,n},$ we know that
$a_n^{-1}\sqrt{n}E\hat\xi_i/\sigma_{\xi}=1$
and
$a_n^{-1}\sqrt{n}E\tilde\xi_i/\sigma_{\xi}\to 1$.
Case 2: $|a|<+\infty$. By~\eqref{eqsat5} and~\eqref{eqsat1}, we have
$\sqrt{n}E(\hat\xi_i/\sigma_{\xi}-\tilde\xi_i/\sigma_{\xi})\to 0$. Thus,
under $H^{(1)}_{a,n},$ we get
\begin{equation}\label{eqpp1}
\sqrt{n}E\hat\xi_i/\sigma_{\xi}=a_n\to a\,\,\
\mbox{\rm and}\,\,\
\sqrt{n}E\tilde\xi_i/\sigma_{\xi}\to a.
\end{equation}
(iv). By \eqref{eqpp1}, we have $E(\tilde\xi_1)/\sigma_{\xi}=O(n^{-1/2})$ when $|a|<\infty$. Then \begin{eqnarray} n^{-1}\sum_{i=1}^{n}\tilde\xi_i/\sigma_{\xi} &=&E(\tilde\xi_1/\sigma_{\xi})+ \frac{1}{n\sigma_{\xi}}\sum_{i=1}^{n}(\tilde\xi_i-E\tilde\xi_i)\nonumber\\ &=& \frac{1}{n\sigma_{\xi}}\sum_{i=1}^{n}(\tilde\xi_i-E\tilde\xi_i) +O(n^{-1/2}). \end{eqnarray} By condition A3, $\text{Var}(\tilde\xi_{1})>c_1$. It follows from \eqref{eqsat1} that
$\sigma_{\xi}\geq c_{1}+o(1)$. This, combined with~\eqref{eqsat6}, yields that $n^{-1}\sum_{i=1}^{n}\tilde\xi_i=O_{P}(n^{-1/2})$.
In addition, applying Lemma~\ref{le2} and the triangle inequality, we obtain that
$$\bigl|n^{-1}\sum_{i=1}^{n}\hat\xi_i\bigr|
\leq \bigl|n^{-1}\sum_{i=1}^{n}(\hat\xi_i-\tilde\xi_i)\bigr|
+\bigl|n^{-1}\sum_{i=1}^{n}\tilde\xi_i\bigr|=O_{P}(n^{-1/2}).$$
$\diamond$
\textbf{Proof of Lemma~\ref{le6}}.
Since $n^{1/\gamma-1/2}=o(1)$, there exists a sequence $\phi_n$ such that
$\phi_n=o(n^{-1/\gamma})$ and $n^{-1/2}=o(\phi_n)$. Define
$\Lambda_n=\{\lambda:\, |\lambda|\leq \phi_n\}$. Then, by the result
$\max_{1\leq i\leq n}|\hat\xi_i|=O_{P}(n^{1/\gamma})$ in Lemma~\ref{le4}(ii), we get $\max_{1\leq i\leq n,\lambda\in \Lambda_n}|\lambda\hat\xi_i|=o_{P}(1)$. Let $$\bar\lambda=\arg\min_{\lambda\in \Lambda_n}\sum_{i=1}^n\log(1+\lambda\hat\xi_i).$$
Then $\max_{1\leq i\leq n}|\bar{\lambda}\hat\xi_i|=o_{P}(1)$. Using Taylor's expansion, with probability going to 1, we obtain that \begin{eqnarray}
0\leq \sum_{i=1}^n\log(1+\bar\lambda\hat\xi_i)&=&\bar\lambda\sum_{i=1}^n\hat\xi_i -\frac{\bar\lambda^2}{2}\sum_{i=1}^n\frac{\hat\xi_i^2}{(1+c_{i}^*\bar\lambda\hat\xi_i)^2}\nonumber\\
&\leq& |\bar\lambda|\Bigl|\sum_{i=1}^n\hat\xi_i \Bigr| - \bar\lambda^2c\sum_{i=1}^n\hat\xi_i^2 \label{eqr1} \end{eqnarray} for some constants $0\le c_{i}^*\le 1$ and $0<c\leq 1$. By condition A3, $\text{Var}(\tilde\xi_1)>c_1$,
and by Lemma~\ref{le4}(ii),
$n^{-1}\sum_{i=1}^n|\tilde\xi_i|^2=E|\tilde\xi_1|^2+o_{P}(1)$.
Then, applying Lemma~\ref{le2}(ii), we establishes that \begin{equation}\label{eqr2} n^{-1}\sum_{i=1}^n\hat\xi_i^2\geq c_1+o_{P}(1). \end{equation} By Lemma~\ref{le4}(iv), we have $n^{-1}\sum_{i=1}^{n}\hat\xi_i=O_{P}(n^{-1/2})$, which, combined with \eqref{eqr1}-\eqref{eqr2}, yields that
$$\bar\lambda=O_{P}(n^{-1/2})=o_{P}(\phi_n).$$
Thus,
with probability tending to $1$,
$\bar\lambda$ is in the interior of $\Lambda_n$. Since $\sum_{i=1}^n\log(1+\lambda\hat\xi_i)$ is concave, $P(\hat\lambda=\bar\lambda)\to 1.$ Hence, $\hat\lambda=O_{P}(n^{-1/2}).$
$\diamond$
{\spacingset{1.2}
}
\end{document} | arXiv |
Guide to Accounting
Corporate Finance & Accounting Financial Ratios
Interest Coverage Ratio
Reviewed By Margaret James
The Interest Coverage Ratio
Interest Coverage Ratio Formula
Understanding the Interest Coverage Ratio
Trends Over Time
Example of Interest Coverage
Limitations of the Interest Coverage Ratio
What Is the Interest Coverage Ratio?
The interest coverage ratio is a debt ratio and profitability ratio used to determine how easily a company can pay interest on its outstanding debt. The interest coverage ratio may be calculated by dividing a company's earnings before interest and taxes (EBIT) by its interest expense during a given period by the company's interest payments due within the same period.
The Interest coverage ratio is also called "times interest earned." Lenders, investors, and creditors often use this formula to determine a company's riskiness relative to its current debt or for future borrowing.
The interest coverage ratio is used to see how well a firm can pay the interest on outstanding debt.
Also called the times-interest-earned ratio, this ratio is used by creditors and prospective lenders to assess the risk of lending capital to a firm.
A higher coverage ratio is better, although the ideal ratio may vary by industry.
The Formula for the Interest Coverage Ratio
?Interest?Coverage?Ratio=EBITInterest?Expensewhere:\begin{aligned} &\text{Interest Coverage Ratio}=\frac{\text{EBIT}}{\text{Interest Expense}}\\ &\textbf{where:}\\ &\text{EBIT}=\text{Earnings before interest and taxes} \end{aligned}?Interest?Coverage?Ratio=Interest?ExpenseEBIT?where:??
The interest coverage ratio measures how many times a company can cover its current interest payment with its available earnings. In other words, it measures the margin of safety a company has for paying interest on its debt during a given period. The interest coverage ratio is used to determine how easily a company can pay its interest expenses on outstanding debt.
The ratio is calculated by dividing a company's earnings before interest and taxes (EBIT) by the company's interest expenses for the same period. The lower the ratio, the more the company is burdened by debt expense. When a company's interest coverage ratio is only 1.5 or lower, its ability to meet interest expenses may be questionable.
Companies need to have more than enough earnings to cover interest payments in order to survive future (and perhaps unforeseeable) financial hardships that may arise. A company's ability to meet its interest obligations is an aspect of its solvency and is thus a very important factor in the return for shareholders.
Interpretation is key when it comes to using ratios in company analysis. While looking at a single interest coverage ratio may tell a good deal about a company's current financial position, analyzing interest coverage ratios over time will often give a much clearer picture of a company's position and trajectory.
By analyzing interest coverage ratios on a quarterly basis for the past five years, for example, trends may emerge and give an investor a much better idea of whether a low current interest coverage ratio is improving or worsening, or if a high current interest coverage ratio is stable. The ratio may also be used to compare the ability of different companies to pay off their interest, which can help when making an investment decision.
Generally, stability in interest coverage ratios is one of the most important things to look for when analyzing the interest coverage ratio in this way. A declining interest coverage ratio is often something for investors to be wary of, as it indicates that a company may be unable to pay its debts in the future.
Overall, the interest coverage ratio is a good assessment of a company's short-term financial health. While making future projections by analyzing a company's interest coverage ratio history may be a good way of assessing an investment opportunity, it is difficult to accurately predict a company's long-term financial health with any ratio or metric.
The interest coverage ratio at one point in time can help tell analysts a bit about the company's ability to service its debt, but analyzing the interest coverage ratio over time will provide a clearer picture of whether or not their debt is becoming a burden on the company's financial position. A declining interest coverage ratio is something for investors to be wary of, as it indicates that a company may be unable to pay its debts in the future.
However, it is difficult to accurately predict a company's long-term financial health with any ratio or metric. Moreover, the desirability of any particular level of this ratio is in the eye of the beholder to an extent. Some banks or potential bond buyers may be comfortable with a less desirable ratio in exchange for charging the company a higher interest rate on their debt.
Example of How to Use the?Interest Coverage Ratio?
To provide an example of how to calculate interest coverage ratio, suppose that a company's earnings during a given quarter are $625,000 and that it has debts upon which it is liable for payments of $30,000 every month.
To calculate the interest coverage ratio here, one would need to convert the monthly interest payments into quarterly payments by multiplying them by three. The interest coverage ratio for the company is $625,000 / ($30,000 x 3) = $625,000 / $90,000 = 6.94.
Staying above water with interest payments is a critical and ongoing concern for any company. As soon as a company struggles with this, it may have to borrow further or dip into its cash reserve, which is much better used to invest in capital assets or for emergencies.
The lower a company's interest coverage ratio is, the more its debt expenses burden the company. When a company's interest coverage ratio is 1.5 or lower, its ability to meet interest expenses may be questionable.
A result of 1.5 is generally considered to be a bare minimum acceptable ratio for a company and the tipping point below which lenders will likely refuse to lend the company more money, as the company's risk for default may be perceived as too high.
Moreover, an interest coverage ratio below 1 indicates the company is not generating sufficient revenues to satisfy its interest expenses. If a company's ratio is below 1, it will likely need to spend some of its cash reserves in order to meet the difference or borrow more, which will be difficult for reasons stated above. Otherwise, even if earnings are low for a single month, the company risks falling into bankruptcy.
Even though it creates debt and interest, borrowing has the potential to positively affect a company's profitability through the development of capital assets according to the cost-benefit analysis. But a company must also be smart in its borrowing. Because interest affects a company's profitability as well, a company should only take a loan if it knows it will have a good handle on its interest payments for years to come.
A good interest coverage ratio would serve as a good indicator of this circumstance and potentially as an indicator of the company's ability to pay off the debt itself as well. Large corporations, however, may often have both high-interest coverage ratios and very large borrowings. With the ability to pay off large interest payments on a regular basis, large companies may continue to borrow without much worry.
Businesses may often survive for a very long time while only paying off their interest payments and not the debt itself. Yet, this is often considered a dangerous practice, particularly if the company is relatively small and thus has low revenue compared to larger companies. Moreover, paying off the debt helps pay off interest down the road, as with reduced debt the company frees up cash flow and the debt's interest rate may be adjusted as well.
Like any metric attempting to gauge the efficiency of a business, the interest coverage ratio comes with a set of limitations that are important for any investor to consider before using it.
For one, it is important to note that interest coverage is highly variable when measuring companies in different industries and even when measuring companies within the same industry. For established companies in certain industries, like a utility company, an interest coverage ratio of 2 is often an acceptable standard.
Even though this is a low number, a well-established utility will likely have very consistent production and revenue, particularly due to government regulations, so even with a relatively low-interest coverage ratio, it may be able to reliably cover its interest payments. Other industries, such as manufacturing, are much more volatile and may often have a higher minimum acceptable interest coverage ratio, like 3.
These kinds of companies generally see greater fluctuation in business. For example, during the recession of 2008, car sales dropped substantially, hurting the auto manufacturing industry.?? A workers' strike is another example of an unexpected event that may hurt interest coverage ratios. Because these industries are more prone to these fluctuations, they must rely on a greater ability to cover their interest in order to account for periods of low earnings.
Because of wide variations like these, when comparing companies' interest coverage ratios, be sure to only compare companies in the same industry, and ideally when the companies have similar business models and revenue numbers as well.
While all debt is important to take into account when calculating the interest coverage ratio, companies may choose to isolate or exclude certain types of debt in their interest coverage ratio calculations. As such, when considering a company's self-published interest coverage ratio, one should try to determine if all debts were included, or should otherwise calculate interest coverage ratio independently.
Variations of the Interest Coverage Ratio
A couple of somewhat common variations of the interest coverage ratio are important to consider before studying the ratios of companies. These variations come from alterations to EBIT in the numerator of interest coverage ratio calculations.
One such variation uses earnings before interest, taxes, depreciation, and amortization (EBITDA) instead of EBIT in calculating the interest coverage ratio. Because this variation excludes depreciation and amortization, the numerator in calculations using EBITDA will often be higher than those using EBIT. Since the interest expense will be the same in both cases, calculations using EBITDA will produce a higher interest coverage ratio than calculations using EBIT will.
Another variation uses earnings before interest after taxes (EBIAT) instead of EBIT in interest coverage ratio calculations. This has the effect of deducting tax expenses from the numerator in an attempt to render a more accurate picture of a company's ability to pay its interest expenses. Because taxes are an important financial element to consider, for a clearer picture of a company's ability to cover its interest expenses one might use EBIAT in calculating interest coverage ratios instead of EBIT.
All of these variations in calculating the interest coverage ratio use interest expenses in the denominator. Generally speaking, these three variants increase in conservatism, with those using EBITDA being the most liberal, those using EBIT being more conservative and those using EBIAT being the most stringent.
Federal Reserve. "Auto Financing During and After the Great Recession." Accessed July 31, 2020.
Coverage Ratio
Coverage ratios measure a company's ability to service its debt and meet its financial obligations.
The solvency ratio is a key metric used to measure an enterprise's ability to meet its debt and other obligations.
Interest Expense Definition
An interest expense is the cost incurred by an entity for borrowed funds.
EBITDA – Earnings Before Interest, Taxes, Depreciation, and Amortization
EBITDA, or earnings before interest, taxes, depreciation, and amortization, is a measure of a company's overall financial performance.
Understanding the Debt-Service Coverage Ratio (DSCR)
In corporate finance, the debt-service coverage ratio (DSCR) is a measurement of the cash flow available to pay current debt obligations.
Debt Load Definition
Debt load refers to the total amount of debt that a company is carrying on its books, which can be found on its balance sheet.
How do companies use the fixed charge coverage ratio?
Analyzing Walmart's Debt Ratios in 2020 (WMT)
A Clear Look at EBITDA
What Is a Good Interest Coverage Ratio?
Tools for Fundamental Analysis
How to Use Enterprise Value to Compare Companies
What is a bad interest coverage ratio?
日韩精品中文字幕高清在线,欧美高清va在线视频,一级a爱情一级带免费观看,两性色午夜视频免费 | CommonCrawl |
\begin{definition}[Definition:Local Ring/Commutative/Definition 2]
Let $A$ be a commutative ring with unity.
The ring $A$ is '''local''' {{iff}} it is nontrivial and the sum of any two non-units is a non-unit.
\end{definition} | ProofWiki |
Null semigroup
In mathematics, a null semigroup (also called a zero semigroup) is a semigroup with an absorbing element, called zero, in which the product of any two elements is zero.[1] If every element of a semigroup is a left zero then the semigroup is called a left zero semigroup; a right zero semigroup is defined analogously.[2] According to Clifford and Preston, "In spite of their triviality, these semigroups arise naturally in a number of investigations."[1]
Null semigroup
Let S be a semigroup with zero element 0. Then S is called a null semigroup if xy = 0 for all x and y in S.
Cayley table for a null semigroup
Let S = {0, a, b, c} be (the underlying set of) a null semigroup. Then the Cayley table for S is as given below:
Cayley table for a null semigroup
0 a b c
0 0 0 0 0
a 0 0 0 0
b 0 0 0 0
c 0 0 0 0
Left zero semigroup
A semigroup in which every element is a left zero element is called a left zero semigroup. Thus a semigroup S is a left zero semigroup if xy = x for all x and y in S.
Cayley table for a left zero semigroup
Let S = {a, b, c} be a left zero semigroup. Then the Cayley table for S is as given below:
Cayley table for a left zero semigroup
a b c
a a a a
b b b b
c c c c
Right zero semigroup
A semigroup in which every element is a right zero element is called a right zero semigroup. Thus a semigroup S is a right zero semigroup if xy = y for all x and y in S.
Cayley table for a right zero semigroup
Let S = {a, b, c} be a right zero semigroup. Then the Cayley table for S is as given below:
Cayley table for a right zero semigroup
a b c
a a b c
b a b c
c a b c
Properties
A non-trivial null (left/right zero) semigroup does not contain an identity element. It follows that the only null (left/right zero) monoid is the trivial monoid.
The class of null semigroups is:
• closed under taking subsemigroups
• closed under taking quotient of subsemigroup
• closed under arbitrary direct products.
It follows that the class of null (left/right zero) semigroups is a variety of universal algebra, and thus a variety of finite semigroups. The variety of finite null semigroups is defined by the identity ab = cd.
See also
• Right group
References
1. A H Clifford; G B Preston (1964). The algebraic theory of semigroups Vol I. mathematical Surveys. Vol. 1 (2 ed.). American Mathematical Society. pp. 3–4. ISBN 978-0-8218-0272-4.
2. M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7, p. 19
| Wikipedia |
An alternating direction method for solving a class of inverse semi-definite quadratic programming problems
Subgradient-based neural network for nonconvex optimization problems in support vector machines with indefinite kernels
January 2016, 12(1): 303-315. doi: 10.3934/jimo.2016.12.303
Finite-time stabilization and $H_\infty$ control of nonlinear delay systems via output feedback
Ta T.H. Trang 1, , Vu N. Phat 1, and Adly Samir 2,
Institute of Mathematics, VAST, 18 Hoang Quoc Viet Road, Hanoi 10307, Vietnam, Vietnam
Université de Limoges, Laboratoire XLIM, 123, avenue Albert Thomas, 87060 Limoges CEDEX, France
Received November 2014 Revised January 2015 Published April 2015
This paper studies the robust finite-time $H_\infty$ control for a class of nonlinear systems with time-varying delay and disturbances via output feedback. Based on the Lyapunov functional method and a generalized Jensen integral inequality, novel delay-dependent conditions for the existence of output feedback controllers are established in terms of linear matrix inequalities (LMIs). The proposed conditions allow us to design the output feedback controllers which robustly stabilize the closed-loop system in the finite-time sense. An application to $H_\infty$ control of uncertain linear systems with interval time-varying delay is also given. A numerical example is given to illustrate the efficiency of the proposed method.
Keywords: time-varying delay, Finite-time stabilization, output feedback, Lyapunov function, $H_\infty$ control, linear matrix inequality..
Mathematics Subject Classification: Primary: 93D20, 34D20; Secondary: 37C7.
Citation: Ta T.H. Trang, Vu N. Phat, Adly Samir. Finite-time stabilization and $H_\infty$ control of nonlinear delay systems via output feedback. Journal of Industrial & Management Optimization, 2016, 12 (1) : 303-315. doi: 10.3934/jimo.2016.12.303
F. Amato, M. Ariola and C. Cosentino, Finite-time stabilization via dynamic output feedback,, Automatica, 42 (2006), 337. doi: 10.1016/j.automatica.2005.09.007. Google Scholar
F. Amato, G. De Tommasi and A. Pironti, Necessary and sufficient conditions for finite-time stability of impulsive dynamical linear systems,, Automatica, 49 (2013), 2546. doi: 10.1016/j.automatica.2013.04.004. Google Scholar
E. K. Boukas, Static output feedback control for stochastic hybrid systems: LMI approach,, Automatica, 42 (2006), 183. doi: 10.1016/j.automatica.2005.08.012. Google Scholar
S. Boyd, L. El. Ghaoui and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory,, SIAM, (1994). doi: 10.1137/1.9781611970777. Google Scholar
P. Dorato, Short time stability in linear time-varying systems,, In Proc IRE Int Convention Record, 4 (1961), 83. Google Scholar
E. Fridman and U. Shaked, Delay-dependent stability and $H_{\infty}$control: constant and time-varying delays,, International Journal of Control, 76 (2003), 48. doi: 10.1080/0020717021000049151. Google Scholar
P. Gahinet, A. Nemirovskii, A. J. Laub and M. Chilali, LMI Control Toolbox For use with MATLAB,, The MathWorks, (1995). Google Scholar
G. Garcia, S. Tarbouriech and J. Bernussou, Finite-time stabilization of linear time-varying continuous systems,, IEEE Transactions on Automatic Control, 54 (2009), 364. doi: 10.1109/TAC.2008.2008325. Google Scholar
L. Gollmann and H. Maurer, Theory and applications of optimal control problems with multiple time-delays,, Journal of Industrial and Management Optimization, 10 (2014), 413. Google Scholar
V. Kharitonov, Time-Delay Systems: Lyapunov Functionals and Matrices,, Control Engineering. Birkhäuser/Springer, (2013). Google Scholar
O. M. Kwon, J. H. Park and S. M. Lee, Exponential stability for uncertain dynamic systems with time-varying delays: LMI optimization approach,, Journal of Optimization Theory and Applications, 137 (2008), 521. doi: 10.1007/s10957-008-9357-7. Google Scholar
H. Liu, Y. Shen and X. Zhao, Delay-dependent observer-based $H_\infty$ finite-time control for switched systems with time-varying delay,, Nonlinear Analysis: Hybrid Systems, 6 (2012), 885. doi: 10.1016/j.nahs.2012.03.001. Google Scholar
Q. Y. Meng and Y. J Shen, Finite-time $H_\infty$ control for linear continuous system with norm-bounded disturbance,, Communications in Nonlinear Science and Numerical Simulation, 14 (2009), 1043. doi: 10.1016/j.cnsns.2008.03.010. Google Scholar
E. Moulay, M. Dambrine, N. Yeganefar and W. Perruquetti, Finite-time stability and stabilization of time-delay systems,, Systems and Control Letters, 57 (2008), 561. doi: 10.1016/j.sysconle.2007.12.002. Google Scholar
T. Senthilkumar and P. Balasubramaniam, Delay-dependent robust stabilization and $H_\infty$ control for nonlinear stochastic systems with Markovian jump parameters and interval time-varying delays,, Journal of Optimization Theory and Applications, 151 (2011), 100. doi: 10.1007/s10957-011-9858-7. Google Scholar
A. Seuret and F. Gouaisbaut, Wirtinger-based integral inequality: Application to time-delay systems,, Automatica, 49 (2013), 2860. doi: 10.1016/j.automatica.2013.05.030. Google Scholar
L. Wu, J. Lam and C. Wang, Robust $H_{\infty}$ dynamic output feedback control for 2D linear parameter-varying systems,, IMA journal of mathematical control and information, 26 (2009), 23. doi: 10.1093/imamci/dnm028. Google Scholar
Z. Xiang, Y. N. Sun and M. S. Mahmoud, Robust finite-time $H_\infty$ control for a class of uncertain switched neutral systems,, Communications in Nonlinear Science Numerical Simulations, 17 (2012), 1766. doi: 10.1016/j.cnsns.2011.09.022. Google Scholar
W. Xiang and J. Xiao, $H_{\infty}$ finite-time control for nonlinear switched discrete-time systems with norm-bounded disturbance,, Journal of the Franklin Institute, 348 (2011), 331. doi: 10.1016/j.jfranklin.2010.12.001. Google Scholar
H. Xu and K. L. Teo, $H_\infty$ optimal stabilization of a class of uncertain impulsive systems: An LMI approach,, Journal of Industrial and Management Optimization, 5 (2009), 153. doi: 10.3934/jimo.2009.5.153. Google Scholar
Y. Zhang, C. Liu and X. Mu, Robust finite-time $H_\infty$ control of singular stochastic systems via static output feedback,, Applied Mathematics and Computation, 218 (2012), 5629. doi: 10.1016/j.amc.2011.11.057. Google Scholar
Mokhtari Yacine. Boundary controllability and boundary time-varying feedback stabilization of the 1D wave equation in non-cylindrical domains. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021004
Mohammed Abdulrazaq Kahya, Suhaib Abduljabbar Altamir, Zakariya Yahya Algamal. Improving whale optimization algorithm for feature selection with a time-varying transfer function. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 87-98. doi: 10.3934/naco.2020017
Awais Younus, Zoubia Dastgeer, Nudrat Ishaq, Abdul Ghaffar, Kottakkaran Sooppy Nisar, Devendra Kumar. On the observability of conformable linear time-invariant control systems. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020444
Lars Grüne, Roberto Guglielmi. On the relation between turnpike properties and dissipativity for continuous time linear quadratic optimal control problems. Mathematical Control & Related Fields, 2021, 11 (1) : 169-188. doi: 10.3934/mcrf.2020032
Simone Göttlich, Elisa Iacomini, Thomas Jung. Properties of the LWR model with time delay. Networks & Heterogeneous Media, 2020 doi: 10.3934/nhm.2020032
Yifan Chen, Thomas Y. Hou. Function approximation via the subsampled Poincaré inequality. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 169-199. doi: 10.3934/dcds.2020296
Jean-Paul Chehab. Damping, stabilization, and numerical filtering for the modeling and the simulation of time dependent PDEs. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021002
Biao Zeng. Existence results for fractional impulsive delay feedback control systems with Caputo fractional derivatives. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021001
Chongyang Liu, Meijia Han, Zhaohua Gong, Kok Lay Teo. Robust parameter estimation for constrained time-delay systems with inexact measurements. Journal of Industrial & Management Optimization, 2021, 17 (1) : 317-337. doi: 10.3934/jimo.2019113
Maoli Chen, Xiao Wang, Yicheng Liu. Collision-free flocking for a time-delay system. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1223-1241. doi: 10.3934/dcdsb.2020251
Ilyasse Lamrani, Imad El Harraki, Ali Boutoulout, Fatima-Zahrae El Alaoui. Feedback stabilization of bilinear coupled hyperbolic systems. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020434
Jianquan Li, Xin Xie, Dian Zhang, Jia Li, Xiaolin Lin. Qualitative analysis of a simple tumor-immune system with time delay of tumor action. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020341
Kateřina Škardová, Tomáš Oberhuber, Jaroslav Tintěra, Radomír Chabiniok. Signed-distance function based non-rigid registration of image series with varying image intensity. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1145-1160. doi: 10.3934/dcdss.2020386
Ta T.H. Trang Vu N. Phat Adly Samir | CommonCrawl |
Plant Methods
Spatially-localized bench-top X-ray scattering reveals tissue-specific microfibril orientation in Moso bamboo
Patrik Ahvenainen ORCID: orcid.org/0000-0003-3055-32741,
Patrick G. Dixon2,
Aki Kallonen1,
Heikki Suhonen1,
Lorna J. Gibson2 &
Kirsi Svedström1
Plant Methods volume 13, Article number: 5 (2017) Cite this article
Biological materials have a complex, hierarchical structure, with vital structural features present at all size scales, from the nanoscale to the macroscale. A method that can connect information at multiple length scales has great potential to reveal novel information. This article presents one such method with an application to the bamboo culm wall. Moso (Phyllostachys edulis) bamboo is a commercially important bamboo species. At the cellular level, bamboo culm wall consists of vascular bundles embedded in a parenchyma cell tissue matrix. The microfibril angle (MFA) in the bamboo cell wall is related to its macroscopic longitudinal stiffness and strength and can be determined at the nanoscale with wide-angle X-ray scattering (WAXS). Combining WAXS with X-ray microtomography (XMT) allows tissue-specific study of the bamboo culm without invasive chemical treatment.
The scattering contribution of the fiber and parenchyma cells were separated with spatially-localized WAXS. The fiber component was dominated by a high degree of orientation corresponding to small MFAs (mean MFA 11°). The parenchyma component showed significantly lower degree of orientation with a maximum at larger angles (mean MFA 65°). The fiber ratio, the volume of cell wall in the fibers relative to the overall volume of cell wall, was determined by fitting the scattering intensities with these two components. The fiber ratio was also determined from the XMT data and similar fiber ratios were obtained from the two methods, one connected to the cellular level and one to the nanoscale. X-ray diffraction tomography was also done to study the differences in microfibril orientation between fibers and the parenchyma and further connect the microscale to the nanoscale.
The spatially-localized WAXS yields biologically relevant, tissue-specific information. With the custom-made bench-top set-up presented, diffraction contrast information can be obtained from plant tissue (1) from regions-of-interest, (2) as a function of distance (line scan), or (3) with two-dimensional or three-dimensional tomography. This nanoscale information is connected to the cellular level features.
Biological materials have a hierarchical structure that connects their features at different length scales, from the atomic to the macroscale, to their function and form. A method that allows connecting the cellular level information to the nanoscale can open up new levels of understanding that are not possible without that vital connection. Spatially-localized X-ray scattering is an accessible method to bridge the nanoscale to the cellular structure. In this article a bench-top set-up [1] combining X-ray scattering and microtomography is used to provide novel structural information from bamboo culm wall.
Phyllostachys edulis is economically the most important bamboo species in the world in the multi-billion euro bamboo industry [2]. Also known as P. pubescens [3] (henceforth referred to as Moso), it is a large and woody bamboo species native to China. While it is also known for its edible shoots, the mature bamboo culm is used for its excellent structural properties.
Although a member of the grass family (Poaceae), bamboo is often used as a timber-like construction material. It is known for its exceptionally fast growth (over 1 m in 24 h [4]), high strength and stiffness [5], and excellent fracture toughness [6]. Bamboo is a renewable and sustainable material that has potential in structural bamboo products analogous to wood products such as plywood, and oriented strand board [2, 7, 8], fiber-reinforced composites [9] and in many other products, such as furniture, handicrafts, scaffolding, flooring and construction [2].
Unlike wood, bamboo, as a member of the grass family, does not produce secondary growth. The primary shoots emerge and expand to their final height during one rainy season [2]. The bamboo tissue then typically matures over the following 4–5 years [10]. Structurally bamboo can be seen as a composite material where the vascular bundles are embedded in a matrix of parenchyma cells [11]. The bamboo culm is functionally graded and highly heterogeneous [12, 13]. In the longitudinal direction, the bamboo culm is separated by nodes into several internodes. In one internode section the parenchyma and vascular bundles are well aligned with the longitudinal axis of the culm [14]. In the radial direction, the density of bamboo and the proportion of vascular bundles increase from the interior to the exterior [14]. The vascular bundle structure varies by bamboo species, but contains always fibers, vessels and other cells [2].
The bamboo culm cell wall consists mainly of cellulose, hemicelluloses and lignin [2]. Cellulose is present in the cell wall as long microfibrils with alternating amorphous and crystalline regions. The angle that the microfibrils form with the longitudinal axis of the cell is called the microfibril angle (MFA). The bamboo cell wall, both in the parenchyma and fiber cells, is separated into multiple layers with alternating microfibril orientation [2]. The longitudinal axes of the parenchyma and fiber cells are parallel to the longitudinal axis of the culm wall.
Cellulose microfibril orientation in different bamboo tissue types has been studied with field emission scanning electron microscope (FESEM) by Crow and Murphy [15]. With chemical treatment and FESEM observations they reported varying MFAs in different cell wall layers in both fiber and parenchyma cells. The crystallinity of bamboo parenchyma cells has more recently been studied by Abe and Yano [16] who treated the bamboo culm chemically to separate microfibril aggregates from the parenchyma cells and the fiber cells.
In complex heterogeneous materials such as bamboo, the interpretation of the scattering data is complicated due to the presence of several cell types and cell wall layers. Thomas et al. [17] measured internode tissue of mature bamboo with WAXS and suggest that the azimuthal orientation distribution could be dissected in two components that originate either from different cell wall layers or from different cell types. The aim of the current study is to obtain tissue-specific scattering components and help explain their contribution also in scattering data that is not tissue-specific.
To the authors' knowledge, this study is the first tissue-specific study of microfibril orientation in native bamboo using an in-house set-up. Because both of the X-ray methods used, WAXS and X-ray microtomography (XMT), are nondestructive and thus the sample does not require any invasive chemical treatment, the results are obtained from the bamboo cell in its naturalFootnote 1, albeit dried, state. The same set-up is also used here to present an X-ray diffraction tomography (XDT) measurement on Moso bamboo with different diffraction-dependent contrasts. The localized X-ray scattering (LXS) set-up used in this article has previously been presented with an application to micrometeorites [1, 18] and to compacted clays [19].
Most XDT experiments are carried out at synchrotrons [20–23] although the viability of the method has recently been shown with in-house experiments as well [1, 24]. However, so far, outside of synchrotrons, both LXS and XDT have scarcely been applied to plant and other biological materials.
The method presented here is shown to yield biologically significant results, both qualitative and quantitative, that are not possible without the combined information from XMT and WAXS. The method could be applied to other biologically relevant systems as well, such as archaeological plant samples, hypocotyl and roots of Arabidopsis thaliana and other grass plants, never-dried wood and reaction wood.
Tangential slices (n = 8, radial thicknesses of 1.3–1.9 mm) were cut from a single internode section of Moso bamboo, half of them from the inner third of the culm wall (n = 4) and half of them from the outer third (n = 4). Outer samples do not contain the hard epidermal region, and inner samples do not contain the pithy terminal layer. For microtomography the slices were cut to a tangential width of approximately 1.5 mm so that the radial and tangential dimensions were approximately equal, as this geometry is more suitable for tomography. An example of the final cross-section of a bamboo sample is shown in Fig. 1, where also the different culm wall cell types are presented.
Various cross-sections of a Moso bamboo sample. (T–R) The bamboo culm wall pieces comprise of vascular bundles (VB) embedded in a parenchyma (P) matrix. One vascular bundle (VB, shaded with brown/gray) comprises of fibers (F), two metaxylem vessels (V) and the metaphloem (*). The red/gray rectangle shows the region-of-interest used for reconstructing parenchyma cell lumens. (T–L) The longitudinal axes of the fibers and parenchyma are parallel. (L–R) The aspect ratio is low in the parenchyma. Scale bar is 400 \(\mu\)m. The arrows indicate the orientation with respect to the bamboo culm wall: L = longitudinal, R = radial and T = tangential
All LXS, XMT and XDT measurements were done with a custom-built combined X-ray microtomography–X-ray scattering set-up (set-up 1 [1]). Set-up 1 consists of two separate, independently operable, X-ray devices, which are described in detail in [1, 19, 25]. The two functionalities are connected by a shared sample manipulator stage, which allows the spatial alignment of the sample to be calibrated into XMT coordinates for the scattering modality.
The tomography functionality is provided by a custom-built high-resolution XMT scanner (Nanotom 180NF, GE Measurement and Control Solutions, Germany), which was built inside a lead-shielded room that allows more customization than a traditional radiation shielding cabinet. The XMT functionality is based on a transmission-type microfocus X-ray tube and a CMOS flat-panel detector (C7942SK-25, Hamamatsu Photonics, Japan). In the cone beam geometry optical magnification can be used to select the field of view and the scan resolution.
The pencil-beam scattering functionality is provided by a Mo-anode source (\(\hbox {I}\mu \hbox {S}\), Incoatec GmbH, Germany) and a two-dimensional detector (Pilatus 1M, Dectris Ltd, Switzerland; maximum sample-to-detector distance 75 cm). The Mo-K\(\alpha\) energy (17 keV) is selected using focusing Montel multilayer mirrors and the beam size and shape is adjusted with a variable divergence aperture and a vertical slit.
Beam alignment for set-up 1 with the X-ray microtomography coordinates was conducted using a small silver behenate particle, as in [1]. After the alignment, regions-of-interests (ROIs) for micro-diffraction could be selected directly from the tomographic reconstruction slice.
In addition to set-up 1, a conventional two-dimensional X-ray scattering set-up that is described elsewhere [26] (set-up 2) was used for X-ray scattering with a larger beam to obtain comparative bulk average information (average bamboo tissue).
X-ray microtomography
XMT was conducted either with a pixel size of \(2.0\,\mu \hbox {m}\) resulting in a field of view of 2.3 mm (with a \(2 \times 2\) pixel binning) or with a pixel size of \(1.5\,\mu \hbox {m}\) and a field of view of 3.4 mm (without binning). At least 600 projection images were taken over the 360-degree scanning range. The X-ray tube was used with a \(160/180\,\mu \hbox {A}\) tube current and an 80 kV acceleration voltage. To reduce noise, a total of 7 transmission images of 250 ms exposure each were averaged to produce one projection image.
Tomographic reconstruction slices were filtered with a non-linear diffusion filter before binarization in Matlab R2014a (Mathworks, USA). A morphological closing operation was used to close cell walls and a morphological opening to remove small objects. The fiber cell walls were selected by using a large median filter and using a large structuring element in the morphological opening of the image. An example of the binarization and cell type separation is shown in Fig. 2.
Tomographic reconstruction slice and binarization of Moso bamboo. (Left) Tomographic reconstruction, scale bar is \(400\,\mu \hbox {m}\). (Middle) The binarized image overlayed on the reconstruction. The cell walls are shown in magenta/light gray. (Right) The cell walls separated by cell type: fibers (white) and others (magenta/gray)
Wide-angle X-ray scattering
All scattering measurements were performed with perpendicular transmission geometry using a two-dimensional detector, with a measurement time of 30 min to obtain a sufficient signal-to-noise ratioFootnote 2. Set-up 1 measurements were corrected for detector geometry and those of set-up 2 were corrected as in [11]Footnote 3 prior to data analysis. Set-up 1 used Mo K\(\alpha\) energy of 17.0 keV and set-up 2 used Cu K\(\alpha\) energy of 8.0 keV. The data are thus presented in energy-independent units of the scattering vector length \(q={4\pi \sin (\theta )}/{\lambda }\), where \(\theta\) is half of the scattering angle \(2\theta\) and \(\lambda\) is the X-ray wavelength.
X-ray diffraction tomography
X-ray diffraction tomography was performed for one sample using a pencil beam geometry. A total of 31 rotation steps were taken over 180°. At each rotation step a 2.85-mm long line scan was completed with a step size of \(150\,\mu \hbox {m}\) by measuring the scattering pattern for 30 s at each step, yielding a total duration of 5 h 10 min.
Two kinds of contrast were calculated from the 2D diffraction patterns: (1) degree of fibril orientation and (2) cellulose I content. These values were used as input for tomographic reconstruction done in Matlab.
To obtain the contribution of the degree of orientation, the difference was determined between the intensity of the 200-peak in the direction parallel to the preferred fiber orientation and in the direction perpendicular to it. Forty-degree sectors were chosen over the corresponding azimuthal angles. Radially the sectors extended from \(q = 1.485\) Å\(^{-1}\) to \(q=1.555\) Å\(^{-1}\).
To obtain the cellulose I contribution, the intensity value of the 200-peak perpendicular to the fiber orientation was chosen as this is a good indicator for cellulose I, regardless of the cell type.
Before reconstruction, bilinear interpolation was used to artificially increase both the number of translation and rotation steps with a factor of 2. After reconstruction a bilinear interpolation was also used to the diffraction tomography slices.
Fiber ratio
Fiber ratio is determined here as the ratio of fiber cells to all cells. It was assessed both from the XMT and the WAXS data. From the tomographic reconstructions it was calculated as the cell wall volume of fiber cells relative to the total cell wall volume.
The azimuthal integrals from a WAXS line scan (Fig. 8) were fitted in Matlab with a two-component model using non-negative matrix factorization (NNMF). One component represents scattering from bamboo fibers and the other from other types of cells (mainly parenchyma cells, referred to as the parenchyma model from here on out). The fiber ratio from WAXS was obtained then from the azimuthal scattering intensities by taking the relative weight of the fiber model.
Aspect ratio of parenchyma cells
An aspect ratio (AR) was estimated for the parenchyma cells from the highest resolution tomographic reconstruction by first selecting a ROI consisting of only parenchyma cells. The maximum height (\(H_{l}\), along the longitudinal axis of the culm) and maximum cross-sectional area (\(A_{l}\)) were calculated for each segmented parenchyma cell lumen (n = 1667). The cross-sectional cell shape was assumed to be roughly circular and the cell AR (\(AR_{c}\)) was approximated from the cell lumen dimensions as
$$\begin{aligned} AR_{c} = \frac{H_{l}}{\sqrt{A_{l} \frac{4}{\pi }}}. \end{aligned}$$
MFA analysis
Two-dimensional X-ray scattering pattern of bamboo. Azimuthal angles are defined so that one of the cellulose \(\hbox {I}\beta\) 200 diffraction peaks is at 0°. A microfibril angle seen at the azimuthal angle \(\phi _{i}\) (a) is also seen at the azimuthal angles \(-\phi _{i}\) (b), 180° \(-\phi _{i}\) (c), and 180° \(+\phi _{i}\) (d) due to symmetry. The most notable reflections are annotated and their symmetry is indicated by symbols. The regions highlighted with a dark checkerboard pattern were used to determine background and the region in light checkerboard pattern was used to calculate the azimuthal integral used for the microfibril angle analysis. Scattering pattern is of an outer bamboo culm piece measured with set-up 2
In order to subtract the contribution of amorphous components, linear background subtraction was conducted as in [27] from the azimuthally integrated intensities before MFA analysis. Specifically chosen regions with smaller and higher scattering angles than those selected for the 200 reflection region were assumed to contain only the amorphous component (Fig. 3). The regions were selected so that they should not contain any contribution from crystalline diffraction peaks and they were used to approximate a linear background for the 200 reflection region. The background-subtracted intensity was then assumed to contain only the crystalline cellulose component. The 200 reflection region was selected so that the 102 reflection would have minimal contribution to the azimuthal intensities. In general, the observed positions of the cellulose reflections may vary with the measured samples or tissue types. Based on the measured scattering data, the same 200 reflection region could be used for all tissue types in this study.
The mean microfibril angle was calculated as
$$\begin{aligned} \langle MFA \rangle = \frac{\int _{-40^{\circ }}^{140^{\circ }}\phi f(\phi ) d\phi }{\int _{-40^{\circ }}^{140^{\circ }}f(\phi ) d\phi }, \end{aligned}$$
where \(\phi\) is the microfibril angle and \(f(\phi )\) is the MFA distributionFootnote 4. The MFA distribution was obtained by fitting four quadruplets of Gaussian peaks to the azimuthal intensity profile, in addition to a constant. The constant corresponds to a contribution of un-oriented cellulose crystallites and this contribution was not considered when calculating the average MFA. Each Gaussian peak quadruplet maxima were fitted to the angles of \(\phi _{i}, -\phi _{i}, {180^{\circ }+\phi _{i}}\) and \({180^{\circ }-\phi _{i}}\) (Fig. 3) based on the model presented in [28]. Only the peaks at \(\phi _{i}\) were used to calculate the final MFA distribution and the MFA distribution represents the contribution of all bamboo cell wall layers.
When the observed azimuthal intensity distribution (over 180 degrees) consists of a single narrow and relatively featureless peak, there are no well-established correction procedures for the cell shape factor for cell shapes close to circular. Due to this lack of features in the narrow azimuthal intensity peak, the fitting is more ambiguous and less reliable, especially if too many experimental parameters are fitted. The MFA distribution was therefore determined without a cell shape correction and the average MFA values represent thus the lower limit of the true average MFA values.
In practice, the MFAs are determined with respect to the longitudinal axis of the culm wall. For the parenchyma cells that have an aspect ratio close to one, we define the longitudinal axis of the cell to be parallel to that of the culm wall. With this definition, the observed MFA of ideal, horizontal end caps of the cell would be 90° regardless of the true microfibril orientation within the end cap.
X-ray diffraction tomography. The X-ray diffraction tomography can be used to obtain the cellulose I distribution in 2D (left) or the cellulose microfibril orientation (middle). The cellulose orientation is plotted on top of the absorption-contrast tomographic reconstruction slice (right). The scale bar is \(400\,\mu \hbox {m}\)
The results of the XDT reconstruction for microfibril orientation and cellulose I contrasts are shown in Fig. 4. The cellulose contrast follows clearly the overall shape of the bamboo cross-section and shows the presence of cellulose I, as expected. The microfibril orientation contrast shows clearly that the highest degree of orientation is localized on the fibers and not on the parenchyma.
Spatially-resolved X-ray scattering
Azimuthal X-ray scattering patterns from a bamboo sample. The spatially-localized X-ray scattering intensities from an outer culm wall bamboo sample shows tissue-specific scattering patterns originating mostly from fibers (1, 2, 3) and from parenchyma only (4). The rectangles on the top of the tomographic reconstruction slice (left, scale bar \(400\,\mu \hbox {m}\)) show the approximate X-ray beam paths for the measurements corresponding to the azimuthal intensity profiles on the right. The scale bar is \(400\,\mu \hbox {m}\) and the profiles are shifted vertically for clarity
For each of the five samples measured with both X-ray tomography and X-ray scattering, ROIs were selected as in Fig. 5. The ROI with the highest parenchyma content and the one with the highest fiber content were selected from each sample to calculate the average MFA for the two tissue types. All the fiber-selective measurements (such as measurement 2 of Fig. 5) contain some contribution of parenchyma cells and the parenchyma-selective measurements (measurement 4 of Fig. 5) may contain contribution from neighboring fiber cells. The mean MFA for fiber-selective measurements was \(11\pm 8 ^{\circ }\) (n = 5, mean ± standard deviation) and for parenchyma-selective measurements \(46\pm 15 ^{\circ }\) (n = 5). The difference between these averages is statistically significant with a t test with \(p=0.01\). The averaged MFA distributions of the two tissue types are shown in Fig. 6.
Microfibril angle (MFA) distributions from different tissues. Tissue-selectivity of the method is visualized by plotting the averaged MFA distributions of the fiber- and parenchyma-rich regions of interest. The shaded area represents the standard deviation (n = 5)
Parenchyma cell lumens. A set of 50 parenchyma cells lumens segmented from the tomographic reconstruction shows that the cells vary greatly in aspect ratio. Scale bars (\(50\,\mu \hbox {m}\)) are shown with bolded lines
Parenchyma aspect ratio
The parenchyma aspect ratio was calculated from the ROI shown in Fig. 1. An example of the segmented parenchyma cell lumens is shown in Fig. 7. By defining the longitudinal axis of the parenchyma cells to be the longitudinal axis of the culm, aspect ratios below 1 were seen, with an average of 1.6 ± 1.0 (n = 1667, mean ± standard deviation). The parenchyma cell end caps are rather horizontal and due to the low aspect ratio of the cells they have a large contribution to the total scattering intensities. Due to measurement geometry, the scattering contribution of horizontal end caps is seen at MFAs close to 90°.
A line scan over one sample. A total of 14 scattering measurements were performed by scanning the sample with a 111-\(\mu \hbox {m}\) step size. The azimuthal integral over the cellulose \(\hbox {I}\beta\) 200-reflection is shown on the left for each measurement (numbered 1–14 from bottom to top). The right-hand side shows the corresponding tomographic reconstruction slice of the inner bamboo culm sample. The rectangles overlaid on the slice show which part of the sample was sampled for each scattering measurement 1–14. The scale bar is \(400\,\mu \hbox {m}\)
A line scan was performed over one bamboo sample in 111-\(\mu \hbox {m}\) steps. A 30-min scattering measurement was performed at each step (Fig. 8). This line scan featured two measurements of parenchyma cells only (#8 and #9), which did not show any preferred orientation parallel to the strong preferred orientation visible in other measurements. The primary orientation of the other measurements can be contributed to fibers in the vascular bundles. The secondary orientation perpendicular to the primary one can be contributed to the parenchyma. Although not easily seen from the azimuthal integrals, most measurements can be expected to contain both primary (fiber) and secondary (parenchyma) orientation.
Starting values for the two components for the NNMF fit were selected by summing the line scan measurement data #5 and #6 for the fiber component and #8 and #9 for the parenchyma component. The fiber ratio calculated from the tomographic reconstruction slice for each scattering measurement was also used as the input for NNMF. A good fit for all measurements was obtained (\(r^{2}=0.92\pm 0.14\)) suggesting that the data can be well explained using the two-component model. The fit yields one component with a sharp orientation peak (fibers) and the other with a broader maxima perpendicular to the first (parenchyma, Fig. 9).
Models of the non-negative matrix factorization (NNMF). The input models (subscript 0) are shown as dotted curves and the NNMF results (subscript M) are shown with full lines. Black curve corresponds to the fiber model (F) and the blue/gray curve to the parenchyma model (P)
Ratio of fibers by tomography and X-ray diffraction (XRD). Both methods yield similar fiber ratios for the measurements #2 to #13 (Fig. 8). The spline fit shown with a dotted line is a guide for the eye
Correlation between tomography fiber ratio (FR) and X-ray diffraction (XRD) FR. FRs of measurements #3 to #12 (Fig. 8) measured with tomography and XRD show excellent linear correlation (\(r^{2}=0.96\)). The linear component of the fit is 0.91 and the constant is \(-0.01\)
NNMF yields also weight factors for the two components. The fiber ratio obtained from the LXS data is shown in Fig. 10 as a function of the scan position. The figure also shows the fiber ratio calculated from the tomographic reconstruction slice. A good correlation between these data (Fig. 11, \(r^{2}=0.96\)) suggests that the two components obtained from the NNMF really are (1) fibers and (2) other cells in the sample, mainly parenchyma. The mean MFAs for these two components are \(11\pm 3^{\circ }\) for fibers and \(65\pm 10^{\circ }\) for parenchyma.
Microfibril angle distributions for representative line scan measurements from Fig. 8 are shown. The lines are plotted in order of decreasing X-ray diffraction fiber ratio (FR), from top to bottom and are shifted vertically for clarity. The number at the end of the curve corresponds to the measurement number shown in Fig. 8
Fiber ratio as function of mean microfibril angle in line scan measurements shows a linear correlation (\(r^{2}=0.93\) ) between the parameters. The fiber ratio is calculated from the tomographic reconstruction data
The MFA distributions for representative line scan measurements are shown in Fig. 12. There is a strong linear correlation (\(r^{2}=0.93\)) between the mean MFA and the fiber ratio determined from tomography (Fig. 13).
Scattering from average bamboo tissue
Full two-dimensional X-ray scattering patterns were obtained with set-up 2 to obtain an average bamboo tissue scattering for inner and outer culm wall samples. In the radially integrated scattering patterns, there are no notable differences between the inner and outer culm wall samples (Fig. 14). This indicates that the relative sample crystallinityFootnote 5 is the same for inner and outer samples. Since the inner samples have significantly smaller fiber ratios, this suggests that there is no substantial crystallinity difference between the fibers and the parenchyma, as has been shown earlier by Abe and Yano [16].
Radial scattering intensities of the Moso bamboo samples. The scattering intensities of the inner (in) and outer (out) culm wall pieces as a function of the scattering vector length are very similar for all measured samples
Azimuthal intensities of the Moso bamboo samples. Samples from outer (out) culm show higher degree of orientation than the inner culm samples (in). Amorphous contribution has been subtracted from these curves which represent contribution of crystalline cellulose only
However, there are qualitative and quantitative differences in the microfibril orientation of the inner and outer culm wall samples as evidenced by the azimuthal integrals (Fig. 15). For the outer culm samples, they show an even distribution of MFAs and one sharp orientation peak (primary orientation). The intensity of the primary orientation peak of the inner culm samples varies more and a broad secondary maximum is visible perpendicular to the primary peaks.
The NNMF models obtained from the line scan (Fig. 9) were fitted to the bulk average data of inner and outer culm wall samples to obtain a fiber ratio. A value of \(0.35\pm 0.06\) (mean ± standard deviation, n = 4, \(r^{2}=0.96\pm 0.02\)) was obtained for the inner and \(0.68\pm 0.02\) (n = 4, \(r^{2}=0.99\pm 0.01\)) for the outer samples. A considerably larger fiber ratio for the outer culm wall samples is consistent with literature [5, 14]. The high \(r^{2}\) values suggest that the bulk average data can be modeled well using the two-component model obtained from the line scan data.
Microfibril angle (MFA) distributions of inner and outer culm wall samples. The mean MFA distribution of the outer culm wall sample is shown with a solid line and the inner with a dotted line. The shaded areas represent the standard deviation (n = 4 for each)
The mean MFA was \(27\pm 3 ^{\circ }\) for the inner samples (n = 4) and \(7.0\pm 1.4 ^{\circ }\) for the outer samples (n = 4). The mean MFAs of the inner samples are similar to those reported in [11] and the average MFA distributions are shown in Fig. 16.
Spatially-localized WAXS was used to observe tissue-specific MFA orientation in native bamboo culm wall. Unlike previous studies [15, 16], the method presented here is non-destructive and does not require any chemical treatment. The LXS also provided models for different cell types, which can be useful when assessing whether the observed peaks in the MFA orientation distribution originate from different cell types or from different cell wall layers within one cell type. To the authors' knowledge, no other model has been presented for the parenchyma microfibril orientation distribution based on WAXS data.
The average MFA has considerable influence on the stiffness and strength of wood [29–31] and micromechanical models of wood include it as one of the critical parameters [32–34]. To develop micromechanical models for bamboo, both the highly elongated fiber and the cellular parenchyma tissues, with short aspect ratios, need to be considered, requiring data for the MFA for both. Additionally, the transverse elastic properties of internode bamboo tissue are likely dominated by those of the parenchyma, as those of a unidirectional fiber reinforced composite are dominated by the matrix [35]. Considering the short aspect ratios of the parenchyma cells, the low degree of orientation of the microfibrils in the parenchyma and high degree of orientation of the microfibrils in the fiber (relative to the longitudinal fiber axis), the parenchyma's role in governing the transverse elastic properties is likely even larger. The tissue specific MFA data obtained by this non-invasive, spatially-localized, WAXS method provides important data needed for understanding the mechanics of bamboo.
The cell shape affects the observed azimuthal integrals of the 200 reflection [36–38]. For rectangular cells, the peak shape can be taken into account by fitting a contribution from all four cell walls [39]. For cells with irregular or circular cross section the correction is more complicated and the 004 reflection has been used for the MFA analysis in some earlier work [37, 40] although it is more contaminated by neighboring reflections [37, 41] than the stronger 200 reflection. In the case of small MFAs, there is more peak overlap of the contributions of different cell wall sides and therefore the fitting is more ambiguous, so the smaller MFAs should be considered to have a rather high external uncertainty. Excluding the shape factor yields a more repeatable fitting process. For circular cells, this method is likely to yield MFAs that are systematically smaller than the true MFAs in the cell wall. Therefore, the average MFA values calculated here should be considered as lower limits.
The variation in peak height of the primary (fiber) orientation in Fig. 15 could be caused by variations in how the different tissue types were sampled (statistical variation). Variation in the degree of orientation as a function of the distance from the inner culm has also been seen earlier [5, 28] and this could also explain the difference seen in Fig. 15 (biological variation). Both factors are likely to contribute to the observed variation in the inner culm wall samples.
The value obtained for the fiber ratio from the tomographic reconstruction slice is sensitive to the binarization parameters and to the reconstruction resolution. Also the tomographic reconstruction resolution used was not sufficient to distinguish individual fiber cells. The fiber volume is therefore overestimated. These fiber ratio values should thus be considered relative values. As such, no comparison was made involving the fiber ratios of different samples. The relative fiber ratio values of one sample can be compared, however, as the factors described do not affect the correlation between the values of individual ROIs from the same sample.
The line scan method could be applied to studying the MFA distribution as a function of radial distance in the bamboo culm wall. This information could be connected to the three-dimensional cell-level structure, obtainable from the microtomography. This includes not only the information on different cell types sampled, but also the information on the cross-sectional cell shapes and the aspect ratios of the cells, both of which affect the observed scattering intensities. Using the spatially-localized line scan method, a better radial resolution and a finer sampling grid could be obtained than has been used in conventional WAXS laboratory methods for bamboo [5, 9, 28].
Set-up 1 is also suitable for measuring diffraction-contrast tomography with a resolution limited mainly by the 200-\(\mu \hbox {m}\) beam size. The measured XDT (Fig. 4) showed that the current system can be used to obtain 1D, 2D or even 3D maps with diffraction contrast. Different diffraction-based contrasts can be chosen after the measurements. In addition to the orientation and crystal phase detection presented, impurities, crystallite size or crystallographic parameters can be used for contrast. For example, for samples with more heterogeneous crystalline content than Moso bamboo, e.g. cellulose treated with supercritical water hydrolysis [42, 43], cellulose II content could also be determined.
Some biologically relevant systems that could benefit from in-house XDT are reaction wood, drying of wood, mechanically tested wood and crystallization of biomaterials. The functionalities of all these systems crucially depend on their structural features both at the nanoscale and at the microscale. More generally, the structure and function of all biological and biomimetic materials is determined by the hierarchy of the structure; in biomaterials the structural features cover length scales from atomic to the macroscale and are entangled with the properties of the materials. Using the X-ray techniques presented here, the links and connections between the different length scales can be mapped together in a unique way thus giving pivotal information on the structure-function relationships.
The applicability of the combined in-house microtomography and diffraction set-up to biologically relevant samples was demonstrated with bamboo samples. The set-up allows tissue-specific X-ray scattering by selecting the region-of-interest from the tomographic reconstruction slice. Further, both a one-dimensional line scan and a two-dimensional diffraction tomography were used to obtain scattering information that can be combined with the three-dimensional X-ray tomography information. The method presented is applicable to a wide range of biological samples and can be performed using a combined bench-top XMT/LXS set-up.
A two-component model was obtained from the azimuthal integrals of the line scan which yielded tissue-specific components of Moso bamboo. The fiber component showed a high degree of orientation, whereas the parenchyma model showed a lower degree of orientation with the preferred orientation perpendicular to that of the bamboo fibers.
Spatially-localized X-ray scattering has been shown to provide novel information on the bamboo culm wall that is only attainable by combining the cellular level information with the nanoscale. LXS thus provides insight on biological materials that is uniquely able to reveal the hierarchical structure in complex biological, or biomimetic, systems.
The commercial Moso bamboo culm material was treated for preservation with a chemical borate treatment before importation. This treatment is not necessary for the X-ray methods and it has little effect on the natural state of the smaller samples cut from the culm wall.
A 15 min measurement time was used for one sample.
Unlike in [11], due to a multi-layer monochromator, polarization caused by the monochromator was assumed to be naught.
A 180-degree sector was used that contained the contribution of all fitted Guassian peaks.
Differences in cellulose crystallinities or in texture-independent crystallinities cannot be assessed from our data since only one measurement geometry was used and the cellulose content of the samples was not determined.
Suuronen J-P, Kallonen A, Hänninen V, Blomberg M, Hämäläinen K, Serimaa R. Bench-top X-ray microtomography complemented with spatially localized X-ray scattering experiments. J Appl Crystallogr. 2014;47(1):471–5. doi:10.1107/S1600576713031105.
Liese, W, Köhl M (eds). Bamboo: the plant and its uses. Tropical Forestry. Springer, Cham.2015. doi:10.1007/978-3-319-14133-6. http://link.springer.com/10.1007/978-3-319-14133-6.
Integrated Taxonomic Information System on-line database. http://www.itis.gov. Accessed 2016 June 12
Fu J-H. Chinese moso bamboo: its importance. Bamboo. 2001;22(5):5–6.
Wang XQ, Li XZ, Ren HQ. Variation of microfibril angle and density in moso bamboo (Phyllostachys pubescens). J Trop For Sci. 2010;22(1):88–96.
Habibi MK, Lu Y. Crack Propagation in bamboo's hierarchical cellular structure. Sci Rep. 2014;4:5598. doi:10.1038/srep05598.
Dixon PG, Gibson LJ. The structure and mechanics of Moso bamboo material. J R Soc Interface. 2014;11(99):20140321. doi:10.1098/rsif.2014.0321.
Dixon PG, Semple KE, Kutnar A, Kamke FA, Smith GD, Gibson LJ. Comparison of the flexural behavior of natural and thermo-hydro-mechanically densified Moso bamboo. Eur J Wood Wood Prod. 2016;74(5):633–42. doi:10.1007/s00107-016-1047-9.
Yan-hui H, Ben-hua F, Yan Y, Rong-jun Z. Plant age effect on mechanical properties of moso bamboo (Phyllostachys heterocycla var. pubescens) single fibers. Wood Fiber Sci. 2007;44(2):196–201.
Vogtländer J, van der Lugt P, Brezet H. The sustainability of bamboo products for local and Western European applications. LCAs and land-use. J Clean Prod. 2010;18(13):1260–9. doi:10.1016/j.jclepro.2010.04.015.
Dixon PG, Ahvenainen P, Aijazi AN, Chen SH, Lin S, Augusciak PK, Borrega M, Svedström K, Gibson LJ. Comparison of the structure and flexural properties of Moso, Guadua and Tre Gai bamboo. Constr Build Mater. 2015;90:11–7. doi:10.1016/j.conbuildmat.2015.04.042.
Nogata F, Takahashi H. Intelligent functionally graded material: bamboo. Compos Eng. 1995;5(7):743–51. doi:10.1016/0961-9526(95)00037-N.
Habibi MK, Samaei AT, Gheshlaghi B, Lu J, Lu Y. Asymmetric flexural behavior from bamboo's functionally graded hierarchical structure: underlying mechanisms. Acta Biomater. 2015;16(1):178–86. doi:10.1016/j.actbio.2015.01.038.
Huang P, Chang W-S, Ansell MP, Chew YMJ, Shea A. Density distribution profile for internodes and nodes of Phyllostachys edulis (Moso bamboo) by computer tomography scanning. Constr Build Mater. 2015;93:197–204. doi:10.1016/j.conbuildmat.2015.05.120.
Crow E, Murphy RJ. Microfibril orientation in differentiating and maturing fibre and parenchyma cell walls in culms of bamboo (Phyllostachys viridiglaucescens (Carr.) Riv. & Riv.). Bot J Linn Soc. 2000;134(1–2):339–59. doi:10.1006/bojl.2000.0376.
Abe K, Yano H. Comparison of the characteristics of cellulose microfibril aggregates isolated from fiber and parenchyma cells of Moso bamboo (Phyllostachys pubescens). Cellulose. 2010;17(2):271–7. doi:10.1007/s10570-009-9382-1.
Thomas LH, Forsyth VT, Martel A, Grillo I, Altaner CM, Jarvis MC. Diffraction evidence for the structure of cellulose microfibrils in bamboo, a model for grass and cereal celluloses. BMC Plant Biol. 2015;15(1):153. doi:10.1186/s12870-015-0538-x.
Kohout T, Kallonen A, Suuronen JP, Rochette P, Hutzler A, Gattacceca J, Badjukov DD, Skála R, Böhmová V, Čuda J. Density, porosity, mineralogy, and internal structure of cosmic dust and alteration of its properties during high-velocity atmospheric entry. Meteorit Planet Sci. 2014;49(7):1157–70. doi:10.1111/maps.12325.
Suuronen J-P, Matusewicz M, Olin M, Serimaa R. X-ray studies on the nano- and microscale anisotropy in compacted clays: comparison of bentonite and purified calcium montmorillonite. Appl Clay Sci. 2014;101:401–8. doi:10.1016/j.clay.2014.08.015.
Barroso RC, Lopes RT, de Jesus EFO, Oliveira LF. X-ray diffraction microtomography using synchrotron radiation. Nucl Instrum Methods Phys Res Sect A Accel Spectrom Detect Assoc Equip. 2001;471(1–2):75–9. doi:10.1016/S0168-9002(01)00918-4.
Bleuet P, Welcomme E, Dooryhée E, Susini J, Hodeau J-L, Walter P. Probing the structure of heterogeneous diluted materials by diffraction tomography. Nat Mater. 2008;7(6):468–72. doi:10.1038/nmat2168.
Ludwig W, King A, Reischig P, Herbig M, Lauridsen EM, Schmidt S, Proudhon H, Forest S, Cloetens P, du Roscoat SR, Buffiere JY, Marrow TJ, Poulsen HF. New opportunities for 3D materials science of polycrystalline materials at the micrometre lengthscale by combined use of X-ray diffraction and X-ray imaging. Mater Sci Eng A. 2009;524(1–2):69–76. doi:10.1016/j.msea.2009.04.009.
Vamvakeros A, Jacques SDM, Di Michiel M, Senecal P, Middelkoop V, Cernik RJ, Beale AM. Interlaced X-ray diffraction computed tomography. J Appl Crystallogr. 2016;49:485–96. doi:10.1107/S160057671600131X.
King A, Reischig P, Adrien J, Ludwig W. First laboratory X-ray diffraction contrast tomography for grain mapping of polycrystals. J Appl Crystallogr. 2013;46(6):1734–40. doi:10.1107/S0021889813022553.
Suuronen J-P, Peura M, Fagerstedt K, Serimaa R. Visualizing water-filled versus embolized status of xylem conduits by desktop X-ray microtomography. Plant Methods. 2013;9(1):11. doi:10.1186/1746-4811-9-11.
Leppänen K, Bjurhager I, Peura M, Kallonen A, Suuronen J-P, Penttilä PA, Love J, Fagerstedt K, Serimaa R. X-ray scattering and microtomography study on the structural changes of never-dried silver birch, European aspen and hybrid aspen during drying. Holzforschung. 2011;65(6):865–73. doi:10.1515/HF.2011.108.
Josefsson G, Ahvenainen P, Mushi NE, Gamstedt EK. Fibril orientation redistribution induced by stretching of cellulose nanofibril hydrogels. J Appl Phys. 2015;117(21):214311. doi:10.1063/1.4922038.
Wang Y, Leppänen K, Andersson S, Serimaa R, Ren H, Fei B. Studies on the nanostructure of the cell wall of bamboo using X-ray scattering. Wood Sci Technol. 2012;46(1–3):317–32. doi:10.1007/s00226-011-0405-3.
Cave ID. The anisotropic elasticity of the plant cell wall. Wood Sci Technol. 1968;2(4):268–78. doi:10.1007/BF00350273.
Sahlberg U, Salmén L, Oscarsson A. The fibrillar orientation in the S2-layer of wood fibres as determined by X-ray diffraction analysis. Wood Sci Technol. 1997;31:77–86. doi:10.1007/s002260050017.
Gherardi Hein, PR, Tarcísio Lima, J. Relationships between microfibril angle, modulus of elasticity and compressive strength in Eucalyptus wood. Maderas Cienc Tecnol. 2012;14(3):267–74. doi:10.4067/S0718-221X2012005000002.
Hofstetter K, Hellmich C, Eberhardsteiner J. Development and experimental validation of a continuum micromechanics model for the elasticity of wood. Eur J Mech A/Solids. 2005;24(6):1030–53. doi:10.1016/j.euromechsol.2005.05.006.
Mishnaevsky L, Qing H. Micromechanical modelling of mechanical behaviour and strength of wood: state-of-the-art review. Comput Mater Sci. 2008;44(2):363–70. doi:10.1016/j.commatsci.2008.03.043.
Qing H, Mishnaevsky L. 3D hierarchical computational model of wood as a cellular material with fibril reinforced, heterogeneous multiple layers. Mech Mater. 2009;41(9):1034–49. doi:10.1016/j.mechmat.2009.04.011.
Jones RM. Mechanics of composite materials. 2nd ed. New York: Taylor & Francis Group; 1999.
Cave ID. Theory of X-ray measurement of microfibril angle in wood. Part 2. Wood Sci Technol. 1997;31:225–34. doi:10.1007/BF00702610.
Andersson S, Serimaa R, Torkkeli M, Paakkari T, Saranpää P, Pesonen E. Microfibril angle of Norway spruce [Picea abies (L.) Karst.] compression wood: comparison of measuring techniques. J Wood Sci. 2000;46(5):343–9. doi:10.1007/BF00776394.
Rüggeberg M, Saxe F, Metzger TH, Sundberg B, Fratzl P, Burgert I. Enhanced cellulose orientation analysis in complex model plant tissues. J Struct Biol. 2013;183(3):419–28. doi:10.1016/j.jsb.2013.07.001.
Pirkkalainen K, Peura M, Leppänen K, Salmi A, Meriläinen A, Saranpää P, Serimaa R. Simultaneous X-ray diffraction and X-ray fluorescence microanalysis on secondary xylem of Norway spruce. Wood Sci Technol. 2012;46(6):1113–25. doi:10.1007/s00226-012-0474-y.
Peura M, Sarén MP, Laukkanen J, Nygård K, Andersson S, Saranpää P, Paakkari T, Hämäläinen K, Serimaa R. The elemental composition, the microfibril angle distribution and the shape of the cell cross-section in Norway spruce xylem. Trees Struct Funct. 2008;22(4):499–510. doi:10.1007/s00468-008-0210-2.
Tolonen LK, Penttilä PA, Serimaa R, Kruse A, Sixta H. The swelling and dissolution of cellulose crystallites in subcritical and supercritical water. Cellulose. 2013;20:2731–44. doi:10.1007/s10570-013-0072-7.
Buffiere J, Ahvenainen P, Borrega M, Svedström K, Sixta H. Supercritical water hydrolysis: a pathway for producing low-molecular-weight cellulose. Green Chem. 2016. doi:10.1039/C6GC02544G.
PGD and LJG suggested need for measurement of MFA in the different tissues. PA and KS proposed the experimental solution based on the further development of set-up 1 by AK and HS. PGD obtained the bamboo material and cut the samples from the culm wall. PA, HS and AK performed the measurements with set-up 1 and PA with set-up 2. PA analyzed the data. All authors read and approved the final manuscript.
PGD and LJG would like to express gratitude to Martin Family Society of Fellows for Sustainability for their support.
The datasets generated during the current study are available in the Zenodo repository, doi:10.5281/zenodo.60046.
PA has received funding from the Jenny and Antti Wihuri Foundation for a personal research grant. PGD and LJG would like to note: this paper is based on their work supported by the National Science Foundation under OISE 1258574. The views expressed in this paper are not endorsed by the National Science Foundation. PGD is financially supported by the The Martin Family Society of Fellows for Sustainability fellowship.
Department of Physics, University of Helsinki, P.O. Box 64, 00014, Helsinki, Finland
Patrik Ahvenainen, Aki Kallonen, Heikki Suhonen & Kirsi Svedström
Department of Materials Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, 02139, Cambridge, MA, USA
Patrick G. Dixon & Lorna J. Gibson
Patrik Ahvenainen
Patrick G. Dixon
Aki Kallonen
Heikki Suhonen
Lorna J. Gibson
Kirsi Svedström
Correspondence to Patrik Ahvenainen.
Ahvenainen, P., Dixon, P.G., Kallonen, A. et al. Spatially-localized bench-top X-ray scattering reveals tissue-specific microfibril orientation in Moso bamboo. Plant Methods 13, 5 (2017). https://doi.org/10.1186/s13007-016-0155-1
Bamboo parenchyma
Phyllostachys edulis
Microfibril angle
Spatially-localized scattering
Diffraction-contrast tomography | CommonCrawl |
Spatial distribution and determinants of intimate partner violence among reproductive-age women in Ethiopia: Spatial and Multilevel analysis
Dessie Abebaw Angaw1,
Alemakef Wagnew Melesse1,
Bisrat Misganaw Geremew1 &
Getayeneh Antehunegn Tesema1
BMC Women's Health volume 21, Article number: 81 (2021) Cite this article
Intimate partner violence is a serious global public health problem particularly in low-and middle-income countries such as Ethiopia where women's empowerment is limited. Despite the high prevalence of intimate partner violence in Ethiopia, there is limited evidence on the spatial distribution and determinants of intimate partner violence among reproductive-age women. Exploring the spatial distribution of intimate partner violence is crucial to identify hotspot areas of intimate partner violence to design targeted health care interventions. Therefore, this study aimed to investigate the spatial distribution and determinants of intimate partner violence among reproductive-age women in Ethiopia.
A secondary data analysis was done based on the 2016 Ethiopian Demographic and Health Survey (EDHS) data. A total weighted sample of 6090 reproductive-age women were included in the study. The spatial scan statistical analysis was done to identify the significant hotspot areas of intimate partner violence. A multilevel binary logistic regression analysis was fitted to identify significant determinants of intimate partner violence. Deviance, Intra-cluster Correlation Coefficient (ICC), Median Odds Ratio, and Proportional Change in Variance (PCV) were used for model comparison as well as for checking model fitness. Variables with a p-value less than 0.2 were considered in the multivariable analysis. In the multivariable multilevel analysis, the Adjusted Odds Ratio (AOR) with 95% Confidence Interval (CI) were reported to declare statistical significance and strength of association between intimate partner violence and independent variables.
The spatial analysis revealed that the spatial distribution of intimate partner violence was significantly varied across the country (Moran's I = 0.1007, p-value < 0.0001). The SaTScan analysis identified a total of 192 significant clusters, of these 181 were primary clusters located in the Benishangul-Gumuz, Gambella, northwest Amhara, and west Oromia regions. In the multivariable multilevel analysis; women aged 45–49 years (AOR = 2.79, 95% CI 1.52–5.10), women attained secondary education (AOR = 0.61, 95% CI 0.38–0.98), women in the richest household (AOR = 0.58, 95% CI 0.35–0.97), > 10 family size (AOR = 3.85, 95% CI 1.41–10.54), and high community women empowerment (AOR = 0.66, 95% CI 0.49–0.8)) were significantly associated with intimate partner violence.
Intimate partner violence among reproductive-age women had significant spatial variation across the country. Women's age, education status, family size, community women empowerment, and wealth status were found significant determinants of intimate partner violence. Therefore, public health programs should design targeted interventions in identified hot spot areas to reduce the incidence of intimate partner violence. Besides, health programmers should scale up public health programs designed to enhance women's autonomy to reduce the incidence of intimate partner violence and its consequences.
According to the World Health Organization (WHO), Intimate Partner Violence (IPV) is defined as any behavior within an intimate partner that causes physical, psychological, or sexual harm [1]. IPV is the commonest form of violence that encompasses physical, sexual, and emotional violence [2,3,4]. Globally, an estimated 11 million women experience sexual, physical, or psychological violence by their intimate partner in their lifetime [5]. Overall, 30% of women experienced physical or sexual harassment throughout their lifetime by an intimate partner ranged from 24.6% in the West Pacific to 36.6% in Africa. However, in low-and middle-income countries, it reached up to 70% [2, 6, 7]. In Ethiopia, 59% and 42% of women faced sexual and physical violence by their intimate partners, respectively [5].
Women and girls are faced with physical, emotional, and sexual violence that threatens their safety and livelihood; disrupts their social structures and relationships [8, 9]. IPV is a serious, preventable public health problem that affects millions of women [10]. Evidence showed that sexually abused women often experience psychological, physical, economic, and social consequences such as depression, anxiety, sexual addiction, posttraumatic distress disorder, and substance abuse [11, 12]. Besides, IPV had imposed significant health impacts ranging from mild discomfort to extreme injury, abortion, anxiety, depression, post-traumatic illness, and death [11, 13].
Despite the international declaration of women's rights and national law to uphold the rights of women and girls enshrined in the constitution, IPV remains the commonest problem in Ethiopia [14,15,16]. A prior study conducted in Ethiopia found that 3 out of 4 women experience IPV in their lifetime [17]. Previous studies showed that household wealth status, women's age, residence, women's education, husband education, and parity were significant predictors of IPV [15, 18, 19]. Women from poor households rural resident women and uneducated women are more likely to experience intimate partner violence [20].
The distribution of education, wealth index, fertility, and empowerment of women differed significantly across Ethiopia's regions [21]. Rural people account for an estimated 80% of the population [22]. s with health indicators, education differed significantly across Addis Ababa, Dire-Dawa, and Harari regions with the highest rate of literacy, whereas Afar, Benishangul-Gumuz, and Somali have the lowest level of literacy. Also, women have limited access to health care facilities and health knowledge in rural and less developed regions (Afar, Somali, Benishangul-Gumuz, and Gambella) compared to more developed regional states (Amhara, Oromia, and Tigray) [23]. The more intensive labor such as plowing, trading, constructing, and harvesting is the duty of men in the countryside [24]. Women are more accountable for the household's domestic labor, such as cooking, gathering goods, and household care. Compared to girls, education is still more stressed for boys and also provided more leeway to social activities over girls, while enrollment rates for girls in education are growing [25].
The prevalence of intimate partner violence has varied within and across the country [26]. The presence of IPV indicates poor women's empowerment in the community [27]. There are several studies conducted in Ethiopia about the prevalence of IPV and associated factors [28,29,30]. However, the results of these studies are unable to capture the spatial distribution and determinants of intimate partner violence across the country. Therefore, the current study aimed to investigate the spatial distribution and determinants of intimate partner violence among women of reproductive age in Ethiopia. The results of this study could help to identify significant hotspot areas of intimate partner violence and design evidence-based public health interventions targeting the susceptible groups.
This study was based on the 2016 Ethiopian Demographic and Health Survey (EDHS) data. EDHS is a nationally representative survey conducted in every five years interval in Ethiopia. Ethiopia has nine regional states (Afar, Amhara, Benishangul-Gumuz, Gambela, Harari, Oromia, Somali, Southern Nations, Nationalities, and People's Region (SNNP) and Tigray) and two Administrative Cities (Addis Ababa and Dire-Dawa). A stratified two-stage cluster sampling technique was employed to select the study participants. At the first stage, a total of 645 Enumeration Areas (EAs) were selected. In the second stage, on average 28 households per EA were selected. Overall, for EDHS 2016 a total of 18,008 households were chosen and 16,583 eligible women in the selected household were identified. For this study, a total weighted sample of 6090 women were included. The detailed sampling procedure has been presented in the full report EDHS 2016 [31].
Measurements of variables
Outcome variable
Having experienced IPV was the outcome variable for this study. Women were asked whether or not experienced any of the specified acts of physical, sexual, or emotional violence committed by their current husband/partner or most recent husband/partner in the 12 months preceding the survey was considered as experienced IPV, and if not were considered as never experienced IPV [10].
Independent variables
The data sources we used for this study were EDHS data and this data has hierarchical nature. The independent variables were collected at two levels (at individual and community levels). At the individual level, variables such as women's education, religion, sex of household head, women age, women occupation, wealth status, family size, number of unions, husband education, and media exposure were included. At level two, variables such as residence, region, community media exposure, community women employment, community women education, and community poverty were considered. The community-level variables considered in this study were from two sources. First, variables as collected without manipulation such as residence and region. In EDHS except for region and place of residence, there is no variable collected at the community level. Therefore, we generate community media exposure, community women employment, community women education, and community poverty by aggregating women's education, women occupation, media exposure, and wealth index at cluster/EA levels. Then these variables were categorized as high or low based on the national media values since these were not normally distributed.
ArcGIS version 10.6 and SaTScan version 9.6 statistical software were used to explore the spatial distribution and to identify the hotspot areas of intimate partner violence. The spatial global autocorrelation (Global Moran's I) was used to determine whether intimate partner violence was randomly distributed or not [29]. Moran's I is a spatial statistic used to measure autocorrelation in space by taking the entire data set and generating a single output value ranging from -1 to + 1. A statistically significant Moran's I value (p < 0.05) indicates that the spatial distribution of intimate partner violence is non-random and suggests the existence of spatial autocorrelation. Besides, Getis-OrdGi * statistical hotspot analysis was done to identify significant hotspot and cold spot areas of intimate partner violence [32]. The Bernoulli based model spatial scan statistical analysis was conducted to identify significant primary and secondary clusters of intimate partner violence. The SaTScan uses a circular scanning window that goes across the region of the study. Women who had experienced intimate partner violence were considered cases while those who had not experienced intimate partner violence were taken as controls to fit the Bernoulli model. The default overall spatial cluster size of < 50% of the population was used as an upper limit, allowing for the identification of small and large clusters and excluding clusters that contained more than the maximum limit. A likelihood ratio test statistic and the p-value were used to determine significant clusters for each possible cluster. The most likely performing cluster was the scanning window with a maximum likelihood. The primary and secondary clusters were established and ranked based on their likelihood test, based on 999 replicates from Monte Carlo [33].
The Kriging spatial interpolation technique was applied to predict the prevalence of IPV in un-sampled/unmeasured areas based on the values observed from sampled areas. There are various deterministic and geostatistical interpolation methods [34]. For this study, the Ordinary Kriging spatial interpolation method was used since it had a smaller residual and root mean square error.
Multilevel analysis
The data were weighted using sampling weight, primary sampling unit, and strata before any statistical analysis to restore the representativeness of the survey and to take into account the sampling design to get reliable statistical estimates. Descriptive and summary statistics were conducted using STATA version 14 software. The EDHS data has hierarchical nature and women are nested within a cluster and we expect that women within the same cluster may be more similar to each other than women in the rest of the country. This violates the assumption of the traditional regression model which is the independence of observations and equal variance across clusters. This implies that the need to take into account the between cluster variability by using an advanced model. Therefore, a multilevel random intercept logistic regression model was fitted to estimate the association between the individual and community level variables and the likelihood of experiencing intimate partner violence. Model comparison was done based on Deviance (The negative 2 log-likelihood (− 2LL)) since the models were nested. Likelihood ratio test, Intra-class Correlation Coefficient (ICC), Median Odds Ratio (MOR), and Proportional Change in Variance (PCV) were computed to measure the variation between clusters. ICC quantifies the degree of heterogeneity of intimate partner violence between clusters (the proportion of the total observed individual variation in intimate partner violence that is attributable to between cluster variations).
ICC = ϭ2/ (ϭ2 + π2/3) [35], but MOR is quantifying the variation or heterogeneity in outcomes between clusters and is defined as the median value of the odds ratio between the cluster at high risk of experiencing intimate partner violence and cluster at lower risk when randomly picking out two clusters (EAs) [36].
$${\text{MOR}} = {\text{ exp }}(\sqrt {2*\partial 2*0.6745} ) \sim {\text{MOR}} = {\text{exp }}(0.95*)$$
\(\partial\) 2 indicates that cluster variance.
PCV measures the total variation attributed to individual-level factors and community-level factors in the multilevel model as compared to the null model.
$$PCV = \frac{{var \left( {null model} \right) - var \left( {full model} \right)}}{{var \left( {null model} \right)}}$$
Multilevel random intercept logistic regression was used to analyze factors associated with intimate partner violence at two levels to take into account the hierarchical structure of the data, at individual and community (cluster) levels. Four models were constructed for the multilevel logistic regression analysis. The first model (a multilevel random intercept logistic regression model without covariates) was an empty model without any explanatory variables, to determine the extent of cluster variation on intimate partner violence. The second model (determined the association between the individual level predictors and intimate partner violence) was adjusted with individual-level variables; the third model (determined the association between community-level variables and intimate partner violence) was adjusted for community-level variables while the fourth (individual and community level model) was fitted with both individual and community level variables simultaneously. The final model (a model with individual and community level factors) was chosen since it had the lowest deviance.
Variables with p-value ≤ 0.2 in the bi-variable analysis for both individual and community-level factors were fitted in the multivariable model. Adjusted Odds Ratio (AOR) with a 95% Confidence Interval (CI) and p-value < 0.05 in the multivariable model were used to declare significant predictors of intimate partner violence. Multi-collinearity was also checked using the variance inflation factor (VIF) which indicates that there is no multi-collinearity since all variables have VIF < 5 and tolerance greater than 0.1.
Permission for data access was obtained from major demographic and health survey through an online request from http://www.dhsprogram.com. The data used for this study were publicly available with no personal identifier. We received the authorization letter from The Demographic and Health Surveys (DHS) Program. The IRB-approved procedures for DHS public-use datasets do not in any way allow respondents, households, or sample communities to be identified. There are no names of individuals or household addresses in the data files. The geographic identifiers only go down to the regional level (where regions are typically very large geographical areas encompassing several states/provinces). Each enumeration area (Primary Sampling Unit) has a PSU number in the data file, but the PSU numbers do not have any labels to indicate their names or locations. In surveys that collect GIS coordinates in the field, the coordinates are only for the enumeration area (EA) as a whole, and not for individual households, and the measured coordinates are randomly displaced within a large geographic area so that specific enumeration areas cannot be identified.
The characteristics of respondents
A total of 6090 women were included in the study. Of these, 3207 (52.7%) of women did not have formal education and 925 (15.6%) of the women attained secondary education or higher. About 1106 (18.2) of the women were from the poorest household and 2502 (41.1%) had media exposure. Nearly three-fourth (76.1%) of household heads were males and about 3580 (58.8%) of the women were participated in making decisions (Table 1). About 2281 (37.5%) and 1477 (24.3%) are living in the Oromia and Amhara regions, respectively. Regarding community media exposure and community poverty, about 47.3% of the women were from community with high media exposure and 51.9% from high community poverty (Table 2).
Table 1 Individual level characteristics of reproductive age women in Ethiopia, 2016
Table 2 Community level characteristics of reproductive age women in Ethiopia, 2016
The overall national prevalence of IPV among reproductive-age women in Ethiopia was 33.5% (95% CI 32.1, 34.7). The spatial distribution of IPV was non-random in Ethiopia (Global Moran's I = 0.1, p-value < 0.0001) (Fig. 1). In the Getis Ord GI statistical analyses, the significant hotspot areas of IPV were located in the east SNNPRs, west Oromia, Gambella, north Amhara, and northwest Tigray regions, whereas significant cold spot areas of IPV were found in east Amhara, west Afar, and Somali regions (Fig. 2).
Global spatial autocorrelation of intimate partner violence among reproductive age women in Ethiopia, 2016
Source: CSA, 2013)
Hotspot analysis of intimate partner violence in Ethiopia, 2016 (
The SaTScan analysis identified a total of 192 significant clusters, of these 181 clusters were primary clusters located in Benishangul-Gumuz, Gambella, northwest Amhara and west Oromia regions centered at 10.637520 N, 35.719206 E with a radius of 373.97 km, a Relative Risk (RR) of 1.35 and a Log-Likelihood Ratio (LLR) of 16.55, at p < 0.001 (Table 3). This showed that women within the spatial window had 1.35 times higher risk of experiencing IPV than women outside the spatial window. The secondary clusters scanning window was located between the border area of the southwest Oromia, and north Tigray regions (Fig. 3). The Kriging interpolation identified northwest Tigray, northern and eastern Amhara, west Benishangul, east SNNPRs, and southwest Oromia regions as predicted high-risk areas of IPV while the Somali region was identified as predicted low prevalence of IPV (Fig. 4).
SaTScan analysis of hotspot areas of intimate partner violence in Ethiopia, 2016 (
Kriging interpolation of intimate partner violence in Ethiopia, 2016 (
Determinants of intimate partner violence
Random effect results
The ICC-value in the null model was 23% indicated that 23% of the total variability for IPV was attributable to the between group variation while the remaining 77% was explained by the between individual variation. Besides, the MOR was 2.56 indicated that, if we randomly select two women from two different clusters, women at the cluster with a higher risk of IPV had 2.56 times higher likelihood of experiencing IPV compared with women at cluster with a lower risk of IPV (Table 4). Moreover, Therefore, multilevel binary logistic regression analysis was mandatory to take in to account the clustering effect. A total of four models (null model, model with individual level variables, model with community level variables, and the final model that was model with both individual and community level variables) were fitted and the final model was the best-fitted model for the data since it had the lowest deviance value.
Table 3 SaTScan analysis result of hotspot areas of intimate partner violence in Ethiopia, 2016
Table 4 Random effect results and model comparison
Table 5 Multilevel logistic regression analysis of individual and community level factors associated with intimate partner violence in Ethiopia, 2016
Fixed effect results
In the multivariable multilevel logistic regression analysis; women's age, women education, wealth index, region, family size, and community women empowerment were the significant determinants of IPV. Among the individual-level variables; mothers aged 20–24 and 25–29, 35–39, 40–45 and 45–49 years were 1.87 times (AOR = 1.87; 95% CI 1.09–3.20), 2.00 times (AOR = 2.00; 95% CI 1.17–3.41), 2.02 times (AOR = 2.02; 95% CI 1.11–3.69), 2.21 times (AOR = 2.21; 95% CI 1.14–4.31), 2.10 times (AOR = 2.10; 95% CI 1.12–3.94) and 2.79 times (AOR = 2.79; 95% CI 1.52–5.10) higher likelihood of experiencing IPV than women aged 15–19 years, respectively. The likelihood of experiencing IPV among women who attained secondary education or higher were decreased by 43% (AOR = 0.57; 95% CI 0.35–0.91) compared to women who did not have formal education. Women from the richest household had 42% (AOR = 0.58; 95% CI 0.35–0.97) decreased likelihood of experiencing IPV compared to those mothers from the poorest households. Women in the nuclear family (family size > 10) had 3.85 times (AOR = 3.85; 95% CI 1.41–10) higher likelihood of experiencing IPV compared to those women in family size of four and less.
Among community level variables, the likelihood of experiencing IPV among women living in Afar and Benishangul regions were decreased by 65% (AOR = 0.35; 95% CI 0.183–0.69) and 88% (AOR 0.12; 95% CI 0.06–0.25) compared to women in Addis Ababa, respectively. The likelihood of IPV among women in the community with higher women empowerment were decreased by 34% (AOR = 0.66; 95% CI 0.49–0.89) than women in the community with lower community empowerment (Table 5).
The spatial distribution of IPV was significantly varied across the country. The significant hotspot areas of IPV were located in the Benishangul-Gumuz, Gambella, northwest Amhara, and west Oromia regions. This could be due to the difference in cultural belief and misconceptions about IPV as husband's have the right for beating, choking and forced sex of their wife [37]. Besides, the geographic variation in IPV might be attributable to the difference in awareness and attitude of husbands/partners toward negative consequent of women violence [38]. Moreover, IPV is closely linked with poor women's education, and women empowerment, the regional variation in education and women autonomy in the border areas might be the reason for the spatial variation [39].
In the multilevel analysis; women age, women education, wealth status, family size, region, and community women empowerment were significant determinants of IPV. The likelihood of experiencing IPV among women from the richest household were lower compared to women from a poor household. This is consistent with studies reported in Uganda [11], Nepal [13] and Philippines [40], This could be explained by the fact that women in poor households are more likely to be vulnerable to intimate partner violence because of their economic dependence on meeting their basic needs and are exposed to abuse, while rich women are more autonomous in decision-making [17, 41].
Age of women was found significant predictors of intimate partner violence. Advanced age was significantly associated with higher likelihood of experiencing intimate partner violence than women aged less than 20 years. This was supported by a previous study [42]. This could be due to the fact that advanced age women have large family size and this could increase the burden of women such as workload, and economic burden to meet the basic needs of their children, that can increase the risk of marital dispute [15]. Besides, advanced age are associated with increased household hardship, increased arguments over the partner's inability to provide for the family, and decreased likelihood of relationship dissolution as they have too many children [43].
Women had secondary education or higher were less likely to experience of intimate partner violence than women who had no formal education. It is consistent with studies reported in Bangladesh [44], and Vietnam [45]. It might be due to the fact that women with secondary education or higher have improved access to information towards women empowerment or they may have less acceptances for partner violence than uneducated women [46, 47]. Moreover, women with higher level of education are less tolerant to beating, choking, and forced sex and their acceptance and tolerance towards husband's mistreatment and control over the wife markedly declined as the education level of the women improved [48, 49].
The likelihood of experiencing intimate partner violence increases as family size increases. As majority of the people in Ethiopia are living under the poverty line with high rate of unemployment, which creates pressure on men to discharge their responsibilities as head of the household and could create poor interaction with their wife [50]. Besides, it could be due to the fact that families with lower number of household member may find it easier to meet their basic needs than families with a larger household member [51]. Therefore, the resources are lacking and when facing numerous family needs that are echoed by the wife, the husband may resort to violence [52].
Women from the community with a higher women empowerment community has been significantly lower risk of experiencing intimate sexual violence than a low women empowerment community. This is consistent with study findings in Bangladesh [53] and Peru [54]. This might be due to the reason that women who are empowered are able to fight for their rights and will not accept men to fully dictate to them which could result in sexual, physical or emotional violence [55]. Besides, in Ethiopia, majority of the cultures considered women to be subordinated or controlled by men and therefore, women in community with high women empowerment are not depend men for their lives and tend to resist some of the decisions of men which may bring about intimate partner violence [56].
The study has several strengths. First, the study was based on the nationally representative national EDHS survey which were weighted and it can be generalizable to the reproductive-age women in Ethiopia. Second, the use of GIS and SaTScan statistical analyses helped to detect specific and statistically important IPV hotspot areas to design effective public health interventions. The findings of this study should be interpreted considering the following limitations. First, the SaTScan detects only circular clusters but it can not detect the irregular clusters. Second, the kriging interpolation technique assumes that the space being studied is stationary and the joint probability does not change throughout the study area, due to these the interpolated values might be higher or lower than the real values in non-stationary areas. Besides, the EDHS survey did not include variables at the community level, such as community norms, culture, and beliefs that are closely linked with IPV. Moreover, the data were obtained based on the report of mothers or caregivers and may have the potential of social desirability and recall bias, because IPV is not socially acceptable, while CSA argues that substantial attempts have been made to reduce this, primarily by thorough training of data collectors, hiring skilled data collectors and managers, which may misrepresent our results.
The spatial distribution of intimate partner violence was significantly varied across the country with the significant hotspot areas located in the Benishangul-Gumuz, Gambella, North West Amhara, and west Oromia regions. Advanced maternal age and large family size were significantly associated with an increased likelihood of experiencing intimate partner violence whereas women who had secondary education, richest wealth status, and community with high women empowerment were significant predictors of decreased risk of experiencing intimate partner violence. This finding highlights the need for designing spatially targeted public health programs and interventions to the identified significant hotspot areas of IPV to reduce the incidence of IPV in these areas. Public health interventions like enhancing women's empowerment in the community to decide on their health, promoting women's education and financial resources since it has the potential to enhance the decision-making capabilities of women to reduce intimate partner violence. However, much to be done on promoting women education in Ethiopia it is needed to scale up the programs to prevent intimate partner violence.
The data used for this study are publicly available and can access it from https://www.dhsprogram.com/data/dataset_admin/login_main.cfm.
CSA:
Central statistical agency
EA:
Enumeration area
EDHS:
Ethiopian demographic health survey
GIS:
Geographic information system
ICC:
Intra-cluster correlation coefficient
LLR:
Log-likelihood ratio
Likelihood ratio
Median odds ratio
PCV:
Proportional change in variance
PHC:
SNNPRs:
Southern nations and nationality people regional state
Dahlberg LL, Krug EG. Violence a global public health problem. Ciência & Saúde Coletiva. 2006;11:277–92.
Leonardsson M, San SM. Prevalence and predictors of help-seeking for women exposed to spousal violence in India–a cross-sectional study. BMC Women's Health. 2017;17(1):99.
Mancini JA, Nelson JP, Bowen GL, Martin JA. Preventing intimate partner violence: A community capacity approach. J Aggress Maltreat Trauma. 2006;13(3–4):203–27.
World Health Organization. Understanding and addressing violence against women: Intimate partner violence. Geneve: World Health Organization; 2012.
Garcia-Moreno C, Jansen HA, Ellsberg M, Heise L, Watts CH. Prevalence of intimate partner violence: findings from the WHO multi-country study on women's health and domestic violence. The Lancet. 2006;368(9543):1260–9.
Ismayilova L. Spousal violence in 5 transitional countries: a population-based multilevel analysis of individual and contextual factors. Am J Public Health. 2015;105(11):e12–22.
King JD, Endeshaw T, Escher E, Alemtaye G, Melaku S, Gelaye W, et al. Intestinal parasite prevalence in an area of Ethiopia after implementing the SAFE strategy, enhanced outreach services, and health extension program. PLoS Negl Trop Dis. 2013;7(6):e2223.
Leatherman J. Sexual violence and armed conflict: Polity; 2011.
Black M, Basile K, Breiding M, Smith S, Walters M, Merrick M, et al. National intimate partner and sexual violence survey: 2010 summary report. 2011.
Basile KC, Black MC, Breiding MJ, Chen J, Merrick MT, Smith SG, et al. National intimate partner and sexual violence survey; 2010 summary report. 2011.
Ogland EG, Xu X, Bartkowski JP, Ogland CP. Intimate partner violence against married women in Uganda. J Family Violence. 2014;29(8):869–79.
CSACEa. I. Ethiopia Demographic and Health Survey. Addis Ababa, Ethiopia, and Rockville , maryland, USA: CSA and ICF. 2016.: 2016.
Atteraya MS, Gnawali S, Song IH. Factors associated with intimate partner violence against married women in Nepal. J Interpers Violence. 2015;30(7):1226–46.
Azene ZN, Yeshita HY, Mekonnen FA. Intimate partner violence and associated factors among pregnant women attending antenatal care service in Debre Markos town health facilities, Northwest Ethiopia. PLoS ONE. 2019;14(7):e0218722.
Abeya SG, Afework MF, Yalew AW. Intimate partner violence against women in western Ethiopia: prevalence, patterns, and associated factors. BMC Public Health. 2011;11(1):913.
Alebel A, Kibret GD, Wagnew F, Tesema C, Ferede A, Petrucka P, et al. Intimate partner violence and associated factors among pregnant women in Ethiopia: a systematic review and meta-analysis. Reprod Health. 2018;15(1):1–12.
Montagu D, Yamey G, Visconti A, Harding A, Yoong J. Where do poor women in developing countries give birth? A multi-country analysis of demographic and health survey data. PLoS ONE. 2011;6(2):e17155.
Makayoto LA, Omolo J, Kamweya AM, Harder VS, Mutai J. Prevalence and associated factors of intimate partner violence among pregnant women attending Kisumu District Hospital. Kenya Mat Child Health J. 2013;17(3):441–7.
Walton-Moss BJ, Manganello J, Frye V, Campbell JC. Risk factors for intimate partner violence and associated injury among urban women. J Commun Health. 2005;30(5):377–89.
Azevêdo ACDC, Araújo TVBD, Valongueiro S, Ludermir AB. Intimate partner violence and unintended pregnancy: prevalence and associated factors. Cadernos de saude publica. 2013;29:2394–404.
Ahmed S, Creanga AA, Gillespie DG, Tsui AO. Economic status, education and empowerment: implications for maternal health service utilization in developing countries. PLoS ONE. 2010;5(6):e11190.
Kloos H, Adugna A. The Ethiopian population: growth and distribution. Geogr J. 1989:33–51.
Deressa T, Hassan RM, Ringler C. Measuring Ethiopian farmers' vulnerability to climate change across regional states: Intl Food Policy Res Inst; 2008.
Gebre GG, Isoda H, Rahut DB, Amekawa Y, Nomura H, editors. Gender differences in the adoption of agricultural technology: the case of improved maize varieties in southern Ethiopia. Women's studies international forum; 2019: Elsevier.
Tyre P. The trouble with boys: a surprising report card on our sons, their problems at school, and what parents and educators must do: Harmony; 2008.
Fontes KB, Alarcão ACJ, Nihei OK, Pelloso SM, Andrade L, de Barros Carvalho MD. Regional disparities in the intimate partner sexual violence rate against women in Paraná State, Brazil, 2009–2014: an ecological study. BMJ Open. 2018;8(2):e018437.
Gracia E, López-Quílez A, Marco M, Lladosa S, Lila M. The spatial epidemiology of intimate partner violence: do neighborhoods matter? Am J Epidemiol. 2015;182(1):58–66.
Deyessa N, Berhane Y, Alem A, Ellsberg M, Emmelin M, Hogberg U, et al. Intimate partner violence and depression among women in rural Ethiopia: a cross-sectional study. Clin Pract Epidemiol Mental Health. 2009;5(1):8.
Deyessa N, Berhane Y, Ellsberg M, Emmelin M, Kullgren G, Högberg U. Violence against women in relation to literacy and area of residence in Ethiopia. Glob Health Action. 2010;3(1):2070.
Abate BA, Wossen BA, Degfie TT. Determinants of intimate partner violence during pregnancy among married women in Abay Chomen district, Western Ethiopia: a community based cross sectional study. BMC women's health. 2016;16(1):16.
ICF CSACEa. Ethiopia Demographic and Health Survey 2016 Addis Ababa, Ethiopia, and Rockville , maryland, USA: CSA and ICF. 2016
Tsai P-J, Lin M-L, Chu C-M, Perng C-H. Spatial autocorrelation analysis of health care hotspots in Taiwan in 2006. BMC Public Health. 2009;9(1):464.
Kulldorff M. SaTScanTM user guide. Boston; 2006.
Bhunia GS, Shit PK, Maiti R. Comparison of GIS-based interpolation methods for spatial distribution of soil organic carbon (SOC). J Saudi Soc Agric Sci. 2018;17(2):114–26.
Rodriguez G, Elo I. Intra-class correlation in random-effects models for binary data. Stata J. 2003;3(1):32–46.
Merlo J, Chaix B, Ohlsson H, Beckman A, Johnell K, Hjerpe P, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: using measures of clustering in multilevel logistic regression to investigate contextual phenomena. J Epidemiol Commun Health. 2006;60(4):290–7.
Kassa S. Challenges and opportunities of women political participation in Ethiopia. J Glob Econ. 2015;3(4):1–7.
Bazargan-Hejazi S, Medeiros S, Mohammadi R, Lin J, Dalal K. Patterns of intimate partner violence: a study of female victims in Malawi. J Injury Violence Res. 2013;5(1):38.
Organization WH. Increasing access to health workers in remote and rural areas through improved retention: global policy recommendations: World Health Organization; 2010.
Hindin MJ, Adair LS. Who's at risk? Factors associated with intimate partner violence in the Philippines. Soc Sci Med. 2002;55(8):1385–99.
Acharya DR, Bell JS, Simkhada P, Van Teijlingen ER, Regmi PR. Women's autonomy in household decision-making: a demographic study in Nepal. Reprod Health. 2010;7(1):15.
Azene ZN, Yeshita HY, Mekonnen FA. Intimate partner violence and associated factors among pregnant women attending antenatal care service in Debre Markos town health facilities, Northwest Ethiopia. PloS one. 2019;14(7).
Miller E, Levenson R, Herrera L, Kurek L, Stofflet M, Marin L. Exposure to partner, family, and community violence: Gang-affiliated Latina women and risk of unintended pregnancy. J Urban Health. 2012;89(1):74–86.
Rahman M, Hoque MA, Makinoda S. Intimate partner violence against women: Is women empowerment a reducing factor? A study from a national Bangladeshi sample. J Family Violence. 2011;26(5):411–20.
Vung ND, Ostergren P-O, Krantz G. Intimate partner violence against women in rural Vietnam-different socio-demographic factors are associated with different forms of violence: need for new intervention guidelines? BMC Public Health. 2008;8(1):55.
Madeira JL. Woman scorned: resurrecting infertile women's decision-making autonomy. Md L Rev. 2011;71:339.
Lamichhane P, Puri M, Tamang J, Dulal B. Women's status and violence against young married women in rural Nepal. BMC Women's Health. 2011;11(1):19.
Yount KM, Carrera JS. Domestic violence against married women in Cambodia. Soc Forces. 2006;85(1):355–87.
Horsman J. Too scared to learn: women, violence, and education: Routledge; 2013.
Erulkar A. Early marriage, marital relations and intimate partner violence in Ethiopia. Int Perspect Sex Reprod Health. 2013:6–13.
Gennetian LA, Castells N, Morris PA. Meeting the basic needs of children: does income matter? Child Youth Serv Rev. 2010;32(9):1138–48.
Garikipati S. The impact of lending to women on household vulnerability and women's empowerment: evidence from India. World Dev. 2008;36(12):2620–42.
Dalal K, Dahlström Ö, Timpka T. Interactions between microfinance programmes and non-economic empowerment of women associated with intimate partner violence in Bangladesh: a cross-sectional study. BMJ Open. 2013;3(12).
Svec J, Andic T. Cooperative decision-making and intimate partner violence in Peru. Popul Dev Rev. 2018;44(1):63.
Leite TH, Moraes CLd, Marques ES, Caetano R, Braga JU, Reichenheim ME. Women economic empowerment via cash transfer and microcredit programs is enough to decrease intimate partner violence? Evidence from a systematic review. Cadernos de saude publica. 2019;35:e00174818.
Kwagala B, Wandera SO, Ndugga P, Kabagenyi A. Empowerment, partner's behaviours and intimate partner physical violence among married women in Uganda. BMC Public Health. 2013;13(1):1112.
We would like to thank the measure DHS program for providing the data set.
Department of Epidemiology and Biostatistics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
Dessie Abebaw Angaw, Alemakef Wagnew Melesse, Bisrat Misganaw Geremew & Getayeneh Antehunegn Tesema
Dessie Abebaw Angaw
Alemakef Wagnew Melesse
Bisrat Misganaw Geremew
Getayeneh Antehunegn Tesema
Conceptualization: DAA, GAT, AWM, BMG, Data curation: DAA, GAT, AWM, BMG, Investigation: DAA, GAT, AWM, BMG, Methodology: DAA, GAT, AWM, BMG, Software: DAA, GAT, AWM, BMG, Validation: DAA, GAT, AWM, BMG, Visualization: DAA, GAT, AWM, BMG, Writing: DAA, GAT, AWM, Writing – review and editing: DAA, GAT, AWM, BMG. All authors have read and approved the manuscript.
Correspondence to Dessie Abebaw Angaw.
Authors declare that they have no conflict of interest.
Angaw, D.A., Melesse, A.W., Geremew, B.M. et al. Spatial distribution and determinants of intimate partner violence among reproductive-age women in Ethiopia: Spatial and Multilevel analysis. BMC Women's Health 21, 81 (2021). https://doi.org/10.1186/s12905-021-01218-3 | CommonCrawl |
\begin{definition}[Definition:Inverse Cotangent/Complex/Definition 2]
Let $S$ be the subset of the complex plane:
:$S = \C \setminus \left\{{0 + i, 0 - i}\right\}$
The '''inverse cotangent''' is a multifunction defined on $S$ as:
:$\forall z \in S: \cot^{-1} \left({z}\right) := \left\{{\dfrac 1 {2 i} \ln \left({\dfrac {z + i} {z - i}}\right) + k \pi: k \in \Z}\right\}$
where $\ln$ denotes the complex natural logarithm as a multifunction.
\end{definition} | ProofWiki |
A novel secured Euclidean space points algorithm for blind spatial image watermarking
Shaik Hedayath Basha ORCID: orcid.org/0000-0003-1612-60811 &
Jaison B2
EURASIP Journal on Image and Video Processing volume 2022, Article number: 21 (2022) Cite this article
Digital raw images obtained from the data set of various organizations require authentication, copyright protection, and security with simple processing. New Euclidean space point's algorithm is proposed to authenticate the images by embedding binary logos in the digital images in the spatial domain. Diffie–Hellman key exchange protocol is implemented along with the Euclidean space axioms to maintain security for the proposed work. The proposed watermarking methodology is tested on the standard set of raw grayscale and RGB color images. The watermarked images are sent in the email, WhatsApp, and Facebook and analyzed. Standard watermarking attacks are also applied to the watermarked images and analyzed. The finding shows that there are no image distortions in the communication medium of email and WhatsApp. But in the Facebook platform, raw images experience compression and observed exponential noise on the digital images. The authentication and copyright protection are tested from the processed Facebook images. It is found that the embedded logo could be recovered and seen with added noise distortions. So the proposed method offers authentication and security with compression attacks. Similarly, it is found that the proposed methodology is robust to JPEG compression, image tampering attacks like collage attack, image cropping, rotation, salt-and-pepper noise, sharpening filter, semi-robust to Gaussian filtering, and image resizing, and fragile to other geometrical attacks. The receiver operating characteristics (ROC) curve is drawn and found that the area under the curve is approximately equal to unity and restoration accuracy of [67 to 100]% for various attacks.
Authentication, tamper detection, broadcast monitoring, and copyright protection are the important applications of watermarking [2]. Image watermarking is significant to prove the authentication or ownership of the digital images. It provides copyright protection to the digital images and it is used to detect tampering of digital images [22]. In the proposed work, the watermarking system is designed significantly involving all the above applications. The watermarking systems are categories into two types, they are spatial and frequency domain watermarking, and the trade-off parameters are robustness, imperceptibility, payload, and security [8]. In the literature, it is a challenge to propose a spatial domain watermarking system by maintaining the trade-off parameters. The authors have taken up the challenge to provide a better spatial domain watermarking system.
In digital image watermarking spatial domain watermarking is simple and embedding the logo in the least significant bit offers less robustness, the robustness is increased by embedding the logo in the intermediate significant bits [10]. Kutter and Petitcolas [17] suggest embedding 80 bits of watermark data in the host image for good quality of perception and the recovery of watermark it sufficient to obtain greater than 80% of the watermark. Yinyin et al. [33] thinks that embedding 2 bits in every pixel increases the payload and processing the two images for image integrity makes the work very complex. Chuan et al. [3] develop watermark from MSB and embedding them as reference bits and authentication bits, but according to the benchmarking techniques by Kutter and Petitcolas in [17], the watermark needs to be independent with the host image and it was violated. Shabir et al. [25] developed a watermarking scheme to embed the logo in the intermediate significant bits of the host image at specific spatial locations using a random vector address, the average peak signal-to-noise ratio (PSNR) of five standard images embedding logo in the 4th significant bit is 45.084 dB.
Works on the authentication and tampering are studied in which, few authentication oriented are presented here, Kostopoulos et al., [14] generate authentication codes and embed in the 4 × 4 blocks of host image where the residual histogram is produced. The PSNR values range from 51.454 dB to 48.826 dB depending upon the images. Yinyin et al. [33] implemented reversible data hiding and image authenticated using two images and tampered images are detected. Hong and Chen in [9] made the comparison between pixel pair matching and optimal pixel adjustment process and diamond process. Nikolaidis and Pitas in [20] used watermark casting for protecting the copyright of the digital content and provides robustness against JPEG and low-pass filter attacks. When discussing tampering literature, Zhang and Wang in [31] propose fragile watermarking and use two different types of bits known as reference bits and check bits to obtain the tampered regions from the host image. Similarly, the same author with other co-authors Zhang et al. in [32] proposed recovery of the tampered area from the host image using two methods in which the first method recovers 5 MSB's digits, and the second method uses restoration methods to recover the data.
Abraham and Paul in [13] works on the spatial color image watermarking scheme, in which out-off the three color frames red, green, and blue, only the blue color frame is used for watermark embedding and the other two color frames red and green are not disturbed. The watermark is embedded in the least significant bit of the blue color frame using a spatial mask, another spatial mask is used for red and green frames to retain the color information.
Wang and Su in [29] the color image is QR decomposed and matrix R is used for copyright protection of the color image in spatial domain. R. Sinhal in [22] converted RGB to grayscale image and embedded the 2 × 4 watermark in the LSB domain in a structure. Chun-Chi Lo and Yu-Chen Hu in [4] worked on the tampered images and the preserving the image integrity. George Voyatzis and Ionnis Pitas in [7] proposed the concepts of robust watermarking with various attacks and stated that geometrical attacks are still a problem to solve. Huynh-The et al. [11] work on the binary watermark bits; they are embedded in the DWT blocks HL4 and LH4 to provide imperceptibility and robustness in the color images. Liu in [16] work on YCbCr color model images, they were tamper-proofed. Using dual-option parity check and morphological operations it was recovered. Sajjad et al. [23] proposed image tampering and restoration using SVD. Lei-Doa and Bao-Long in [15] resist the geometric attacks in the spatial domain, the watermark is embedded in the circular regions of the even–odd quantization.
Qingtang Su et al. in [21] proposed color image watermarking in YCbCr color space and performed watermark embedding in the Y component using the DC component of each 8 × 8 sub-block in the spatial domain similar to DCT transform, this work was robust against signal processing attacks and geometric attacks, it does not specify the pitfalls of the method for the future scope. It does not add the security parameter to the proposed algorithm.
Dipti Prasad Mukherjee et al. [5] work on a new algorithm in the spatial domain, which provides buyer authentication with the multimedia objects, it provides robust against the standard Stirmark attack.
Image authentication scheme is proposed by Zhaxoia et al. in [35] using random values authentication codes are generated and induced in the user-defined seed using Hilbert curve mapping. Authors thought authentication of digital images in the natural way using the concept of moles on the human body. Moles on the human body are considered as one of the basic identification marks (authentication) for a person. In the same analogy, authors thought of establishing moles like watermarks all over the image to provide authentication where these watermark points are invisible and the moles are visible. Mathematical investigation behind moles on the human body is still under research, so author's proposed the points of watermarks (as like moles) on the digital image mathematically using geometric and arithmetic series. The authors thought of generating geometric sequences instead of any predefined curves. So, in the proposed work the geometric sequences were derived to retain the image authentication with added security concept.
The knowledge behind the Euclidean Space points and their axioms are obtained from [28] the idea behind discrete mathematics in the form of Geometric and Arithmetic Sequence came into play in the author's mind using from [6]. In the same way, the implementation idea of the Diffie–Hellman key exchange protocol was initiated in [27] explains the similarities and differences between watermark security and cryptography security.
Using the above knowledge, Euclidean Space point's (ESP) algorithm is designed in the second section. In the third section, the digital watermark embedding scheme in the raw host image is described along with the Diffie–Hellman key exchange protocol algorithm. The blind watermark recovery is described in the fourth section. Testing the proposed work and comparing with the existing methods on the standard raw digital grayscale, and RGB color images are carried out in the fifth section. The results and discussion is carried out with different perspective in the spatial domain with the conclusion of the proposed work.
Methods: Euclidean space point's (ESP) algorithm and Diffie–Hellman key exchange protocol
The proposed work objective is:
To design and develop a New Euclidean Space Points (ESP) Algorithm using discrete mathematics and algebra.
To implement blind spatial domain image watermarking system for image authentication and copyright protection using the developed ESP algorithm and Diffie–Hellman key exchange protocol algorithm for security purpose.
To test the authentication and copyright protection of the proposed watermarking system by sending the watermarked images in the Email, WhatsApp, and Facebook platform.
To test the proposed watermarking system with various attacks.
To restore the watermark logo after the attacks.
To analyze and evaluate the proposed watermarking system using quality metrics and statistical methods.
Euclidean space point's (ESP) algorithm basics
The Euclidean space point's (ESP) algorithm is designed first by choosing the size of the host image in which the watermarking is desired. Using discrete mathematics, in (discrete mathematics), geometric sequence (GS), and arithmetic sequence (AS) are generated and arranged along the x and y-spatial Cartesian coordinate system using the axioms of the Euclidean plane (UChicago REU, 2013). The generation of GS and AS is described in the ESP algorithm with example values. The points of intercept (POI), of the GS (along with the x-axis), and AS (along the y-axis) in the host image define the ESP to embed the watermark in the spatial domain.
Proposed Euclidean space point's (ESP) algorithm
: Let the set \({\varvec{x}}=\left({\varvec{x}}_{11},{\varvec{x}}_{21},{\varvec{x}}_{31,}\ldots {\varvec{x}}_{\varvec{n}1}\right)\) and \({\varvec{y}}=\left({{\varvec{y}}}_{11},{{\varvec{y}}}_{21},{{\varvec{y}}}_{31},\dots .,{{\varvec{y}}}_{{\varvec{n}}1}\right)\) are the possible sets of all ordered n-tuples in x-coordinates and y-coordinates. Let \({\varvec{X}}={{\varvec{R}}}_{{\varvec{x}}}^{{\varvec{n}}}\bigcup {{\varvec{R}}}_{{\varvec{y}}}^{{\varvec{n}}}={{\varvec{R}}}^{{\varvec{n}}}\) and the elements of \({{\varvec{R}}}_{{\varvec{x}}}^{{\varvec{n}}}\) and \({{\varvec{R}}}_{{\varvec{y}}}^{{\varvec{n}}}\) are the points in the spatial coordinates. The axioms of the Euclidean spaces are defined in [28].
$$\mathrm{If }\, x, \mathrm{ and }\, y \in {R}^{n}, a \in R$$
Vectors additions \(\left({x}_{11}+{y}_{11}, {x}_{21}+{y}_{21}, \dots ., {x}_{n1}+{y}_{n1}\right).\)
Multiplication by a real number 'a'.
$$\alpha x=\left({\alpha x}_{1}, {\alpha x}_{2}, \dots , {\alpha x}_{n}\right)\mathrm{ and }ay=\left(a{y}_{1}, {\alpha y}_{2}, \dots . {\alpha y}_{n}\right).$$
Vector product \(\left({x}_{1}{y}_{1}+{x}_{2}{y}_{2}+ \dots .+ {x}_{n}{y}_{n}\right).\)
: Geometric sequence is obtained using the recursive Eq. (1):
$${x}_{11}=x* {r}^{N},$$
where r is the common ratio and \(N \in \left\{0, 1, . . . R\right\}\). The first term is \({x}_{0}= x*{r}^{0}\), the second term is\({x}_{1}= {x}_{0}*{r}^{1}\), the third term is \({x}_{2}= {x}_{1}*{r}^{2}\) and so forth. For example, let r = 2, \({x}_{0}= x=1\), and \(R=\left\{0, 1, 2, 3, . . . 9\right\}\). Thus the sequence \({x}_{11}\) (2) is obtained [10], with the cardinality of 10:
$${x}_{11}=\left\{1, 2, 4, 8, 16, 32, 64, 128, 256, 512\right\}.$$
: Arithmetic sequence is obtained using the recursive Eq. (3):
$${{y}_{11}=y}_{n}= {y}_{n-1}+d,$$
where d is a common difference, the first term is \({y}_{0}= y\), the second term is \({y}_{1}= {y}_{0}+d\), the third term is \({y}_{2}= {y}_{1}+d\) and so on. For example, let d = 2 and \({y}_{0}= y=1\), thus sequence (4) is obtained [6] with the cardinality of 10:
$${y}_{11}=\left\{1, 3, 5, 7, 9, 11, 13, 15, 17, 19\right\}.$$
: To obtain more point locations on the host image, these set points \(\left({x}_{11}, {y}_{11}\right)\), are
Improved using the Euclidean space points axioms defined in i(a), i(b), and i(c).
Using the axiom i(b),
let 'a' = 19 (Secret Key) we obtained another \({x}_{12}\) sequence (5) where:
$${x}_{12}=a* {x}_{11}=19*{x}_{11}$$
$${x}_{12}=\left\{19, 38, 76, 152, 304, 608, 1216, 2432, \mathrm{4864,9728}\right\}.$$
Similarly \({y}_{12}\) sequence (6) is obtained using the axiom i (b), \({y}_{12}=a*{y}_{11}=19*{y}_{11}\)
$${y}_{12}=\left\{19, 57, 95, 133, 171, 209, 247, 285, \mathrm{323,361}\right\}.$$
Using the above axiom i (a) we get \({x}_{13}\) sequence (7), \({x}_{13}= {x}_{11}+{y}_{11}\)
$${x}_{13}=\left\{2, 5, 9, 8, 15, 25, 43, \mathrm{77,143}, 273, 531\right\}.$$
From axiom i (c) we get the sequence, \({y}_{13}\) sequence (8), \({y}_{13}= {x}_{11}*{y}_{11}\)
$${y}_{13}=\left\{1, 6, 20, 56, 144, \mathrm{352,832}, 1920, \mathrm{4352,9728}\right\}.$$
Here \(\{{x}_{11}\}, \{{x}_{12}\} and \{{x}_{13}\}\) are the possible sets of points in the \(x\)-coordinate. Similarly, \(\{{y}_{11}\}, \{{y}_{12}\} and \{{y}_{13}\}\) are the possible sets of points in the y-coordinate.
If these coordinate points exceed the size of the host image, the location point processing is described below:
If the size of the host image is M X N then the location points are restricted to the maximum (M, N), by applying modulus M on the x-coordinates points and modulus N on the y-coordinate points.
The redundant location point values in the x and y-coordinates are removed.
The set of processed x-coordinates and y-coordinates are taken as the location points to embed the watermark.
This set of sequences can further be expanded, but the designed algorithm takes only the axioms from the sequences (3) and (4) so that the processing time and the complexity can be reduced. An example of the ESP algorithm with (Tables 1, 2, and 3) is explained in the Supplementary material. In this algorithm, to initialize the security to the Euclidean Space points, the secret key is shared between the end-users using Diffie–Hellman key exchange protocol, which is explained in the next upcoming section.
Table 1 Full-reference quality metric and No-reference quality metrics—analysis on standard set of images
Table 2 Analysis of Facebook Images and Logo recovery in 4th, 5th, and 6th-bit planes
Table 3 Visual quality metrics of watermarked RGB color image vs. host RGB color image
Proposed watermark embedding scheme using ESP algorithm
In the proposed work, the host image is selected, and based on the size of the host image (different image sizes are chosen), the Euclidean Space points are designed using the ESP Algorithm. The main objective of the proposed work is to attain a blind and simple watermarking algorithm, imperceptibility of the watermark in the spatial domain, robust watermark for any attack, increased payload, and also to provide security to the watermark to preserve the owner copyrights or authentication. To achieve the mentioned parameters a block diagram is proposed which is shown in Fig. 1, the block diagram performs Euclidean space point's based spatial domain watermarking and it is analyzed with both the grayscale image and RGB color model image. The proposed block diagram and algorithm are analyzed with the grayscale images and also with the RGB color model images.
Euclidean space points spatial domain watermarking
Proposed block diagram
In the proposed block diagram Fig. 1, the grayscale host image is first chosen in which the watermark is to be embedded. Next according to the size of the host image Euclidean space points are generated using the ESP algorithm. After obtaining the Euclidean space points, the grayscale intensity values of the host image at the POI are obtained and these grayscale intensity values of the host image are converted into its eight binary equivalent digits.
The watermark image or Logo is converted into a binary logical image using Otsu's adaptive threshold method [26]. The binary digits of the logical logo are embedded in one of the bit planes or positions of POI obtained from the Euclidean space points and the watermarked image is obtained. The analysis was done on the host image by embedding the logo binary digits in every bit position of the POI. The watermarked image performance was obtained and it is described in the results and discussion.
In the second stage of the work, the RGB color image model is chosen as the host image. In this second proposed methodology, the RGB host image is processed by separating the red, green, and blue frames. The ESP algorithm is obtained according to the size of the Blue frame to obtain POI to embed the watermark logo. Intensity values of the blue frame at POI are obtained and converted to 8-bit binary digits. The binary digits of the logical logo are embedded in every bit position of POI and analyzed by merging the two frames. Similarly, the green color frame and red color frames are processed separately and analyzed. The results are described in the "Results and discussion" section. After individual frame processing, red, green, and blue frames are embedded with Logo2 at the same time and combined to get the RGB watermarked image.
According to the concept of cryptography algorithm, public keys are shared in the public domain and private keys are shared securely. In the ESP algorithm, the initial x-coordinate value \(\left({x}_{0}\right)\), the initial y-coordinate value \(\left({y}_{0}\right)\), the common ratio value \(\left(r\right)\), the common difference value \(\left(d\right)\) and secret key value \(\left(a\right)\) could be defined by the sender to obtain security of the Euclidean space points. This provides security to the Euclidean Space points where the watermark logo is going to reside. The concept of sharing private keys using Diffie–Hellman key exchange protocol is explained in the next section.
Diffie–Hellman key exchange protocol algorithm
The security of the secret key Diffie–Hellman key exchange protocol (DHKEP) [24] is one of the best algorithms to share the secret key securely in an unsecured channel [30].
The procedure of key exchange protocol is as follows:
Let the two users be transmitter 'T' and receiver 'R' in a treaty to share the secret key.
In the treaty both 'T' and 'R' users agree on two prime numbers 'P' and 'G' and these are in the public domain. Choose P as a large number.
G is the primitive root modulo P.
Let 'T' and 'R' users choose privately 'a' and 'b'—a large random number or secret key or private key.
'T' computes \(A= {G}^{a}mod P\) and sends to 'R' and 'R' computes \(B= {G}^{b}mod P\) and sends to 'T'. Both 'T' and 'R' computes the shared key \(K= {G}^{ab}mod P,\)
where 'T' computes \(K={B}^{a} \mathrm{mod} P= {\left({G}^{b}\right)}^{a} \mathrm{mod} P ,\) and 'R' computes \(K={A}^{b} \mathrm{mod} P= {\left({G}^{a}\right)}^{b} \mathrm{mod} P\). 'T' and 'R' share the shared key 'K' to exchange the secret key securely. This algorithm is the discrete logarithmic problem and it is equally as hard as RSA. In this proposed work P = 7919, G = 3041, a = 19 and b = 17, A = 2753, B = 4886.
Proposed blind watermark extraction methodology
Figure 2 shows the proposed blind watermark extraction block diagram at the receiver end. In the transmission section, the host image could be either the grayscale image or the RGB color image. In the receiver section, after obtaining the watermarked image, the watermark logo can be removed and checked based on the required strategies. To prove the authentication of the sender at the transmission end the watermark logo could be recovered from the watermarked image without using the original logo or partial logo or original image, so the recovery of the watermark logo is very blind, so it is called blind watermarking. The logo is recovered by generating the secret key value 'a' from the shared keys between the two parties 'T' and 'R' using the Diffie–Hellman key exchange protocol.
Blind recovery of watermark
Then the Euclidean space points are generated at the receiver end. The key value provides security to the location points of Euclidean space points where the logo was embedded. After obtaining the Euclidean space points from the secret key value, POI is obtained on the watermarked image, the intensity values of the POI from the watermarked image are obtained from all the specific locations of POI, and it is converted into its binary equivalent digits. As per the end–end user treaty, the bit where the digital bit of logo is embedded is obtained and all the digital bits are processed to form the logical digital binary logo, and hence the logical digital binary logo is recovered blindly without the use of any host image information or original watermark logo information and it proves the authentication or ownership of the user. Similarly, the logo is recovered from the RGB color image, and analysis was done and it is described in the "Result and discussion" section.
The Euclidean space points (point of intercept of GS and AS) are obtained and shown in Fig. 3. The complete proposed work is implemented using custom MATLAB R2018a coding.
Euclidean space points—for 512 × 512 host image with the initial values x0 = y0 = 1, r = 5, d = 2, secret key = 19
In the proposed work, the host image is selected and the Euclidean space points are generated using the ESP algorithm according to the size of the host image. Depending upon the initial values and secret key value the Euclidean space points are obtained, these points spread over the entire region of the host image, and it is an advantage that, when the watermark spreads over the entire region of the host image, authentication of the digital image could be obtained from every corner of the image.
Measure of imperceptibility
The proposed watermarking scheme is applied on the data set (ImageProcessingPlace.com), in [12] which there are 166 raw images they are Gray Scale Data Set: Tiff_Sequence_Images: 64 images, Tiff_Texture_Image: 64 images, Tiff_Misc_Images: 25 images, and Standard_PNG_Images: 13 images, and JPG_Football_images: 44 images. ImageProcessingPlace.com data sets are the Image databases and it is selected because:
These are the standard test images found frequently in literature.
Almost all the images are uncompressed with higher resolution.
Image databases are used for digital image processing using MATLAB, and they are in the DIP4E and DIPUM3E Faculty and Students Support Packages.
In one place faculty and students can select different image data base according to their work.
These databases are used in more than 50 countries worldwide.
More than 1K research institution, industries, and educational institutes use DIP and DIPUM image databases.
Most of the books, journals and publishers used these image databases
In Fig. 4, '10' standard grayscale images of size 512 × 512 and '3' standard color images of size 512 × 512 are shown as few example images. Logo2 is 64 × 64 which is the same proposed in [13]. In this proposed work, the grayscale Logo1.jpg or Logo2.jpg is used and is converted to a logical binary logo. The ESP algorithm is used to obtain the point of intersection of GS and AS according to the size of the host image. The length of the GS and AS is 90 without excluding zeros and redundant values. By removing the zeros and redundant values, the length of the GS and AS is 73, so 73 POI-ESP are obtained to embed the watermark of size 32 × 32 logical binary logo, that is 1024 bits are required and for the 64 × 64 logical binary logo 4096 bits are required to embed the binary logo in the host image. To obtain better imperceptibility of the watermark these 73 POI-ESP are not sufficient to embed the watermark. So, from GS and AS a maximum of 73 × 73 array of POI is developed to hide the watermark in the host grayscale image. The 73 × 73 POI of the host image is converted into 8-bit binary digits, and the logical binary digits of Logo1 are embedded in every bit and analyzed. The analysis is carried out with 166 raw grayscale images (*.tiff and *.png) and 44 *.jpg images. The proposed methodology is analyzed with two types of quality metrics are Full-reference quality metrics and No-reference quality metrics.
Images: up to down. From left to right: first row: cameraman.tif, Lena.tif, Mandril_gray.tif, Pepper.tif, Pirate.tif, second row: Walkbridge.tif, Woman_blonde.tif, Woman_blackhair.tif, White_Page.tif, Black_Page.tif, third row: Lena_Color.tif, Mandril_Color.tif, and Pepper_Color.tif (grayscale and color host images of size 512 × 512), Logo1.jpg and Logo2.jpg (Grayscale images of size 64 × 64)
Evaluation of image watermarking quality metrics
Image watermarking could degrade the quality of the host image to a certain degree, to test the watermarked image quality metric equal to the subjective perception of quality by a human observer [36] it can be done using two methods are Full-reference quality metrics and No-reference quality metrics. In the Full-reference quality metric, there are three algorithms used to check the quality of the image, they are image mean square error (MSE), image peak signal-to-noise ratio (PSNR), and image Structural Similarity Index (SSIM). MSE and PSNR are simple to calculate it is given in Eq. 9 from [19] peak Val is the maximum intensity value of the image, and Eq. 10, but they are very less compared to the quality of human perception:
$$\mathrm{PSNR}= 10 {\mathrm{log}}_{10}\frac{{\mathrm{Peak} \mathrm{Val}}^{2}}{\mathrm{MSE}},$$
$$\mathrm{MSE}= \sum_{k=1}^{3}\sum_{i=1}^{M}\sum_{j=1}^{N}\frac{\mathrm{WI}\left(i, j\right)-I\left(i, j\right)}{3*M*N},$$
where WI(i, j) is the watermarked image and I(i, j) is the host image.
SSIM metric consists of luminance, contrast, and structural information of the digital image. The human visual system (HVS) perception is good at observing structures that are used in SSIM, so the metric SSIM is more related to the subjective quality score of human beings. Structural Similarity Index [26] is given in Eq. 11:
$$\mathrm{SSIM}= {\left[l\left(I,\mathrm{WI}\right)\right]}^{\alpha }* {\left[c\left(I, \mathrm{WI}\right)\right]}^{\beta }* {\left[s\left(I, \mathrm{WI}\right)\right]}^{\gamma }.$$
SSIM is based on the computation of three terms; they are luminance, contrast, and structural. They are computed using Eqs. (12), (13), and (14):
$$l\left(I, \mathrm{WI}\right)= \frac{2{\mu }_{I}{\mu }_{\mathrm{WI}}+{C}_{1}}{{\mu }_{I}^{2}+{\mu }_{\mathrm{WI}}^{2}+{C}_{1}},$$
$$c\left(I, \mathrm{WI}\right)= \frac{2{\sigma }_{I}{\sigma }_{\mathrm{WI}}+{C}_{2}}{{\sigma }_{I}^{2}+{\sigma }_{\mathrm{WI}}^{2}+{C}_{2}},$$
$$s\left(I, \mathrm{WI}\right)=\frac{{\sigma }_{I\mathrm{WI}}+{C}_{3}}{{\sigma }_{I}{\sigma }_{\mathrm{WI}}+{C}_{3}},$$
where \({\mu }_{I}, {\mu }_{\mathrm{WI}},{\sigma }_{I}, {\sigma }_{\mathrm{WI}, }\mathrm{and} {\sigma }_{I\mathrm{WI}}\) are the local means, standard derivations, and cross-covariance for images I (host image) and WI (Watermarked image).
If \(\alpha = \beta = \gamma =1 (\mathrm{default})\)
$${C}_{3}=\left. {C}_{2} \right/2$$
Substituting Eqs. 12, 13, 14, and 15 in 10, we obtain equation from [32]:
$$\mathrm{SSIM}\left(I, \mathrm{WI}\right)=\frac{\left(2{\mu }_{I}{\mu }_{WI}+{C}_{1}\right)\left({2\sigma }_{IWI}+{C}_{2}\right)}{\left({\mu }_{I}^{2}+{\mu }_{WI}^{2}+{C}_{1}\right) \left({\sigma }_{I}^{2}+{\sigma }_{WI}^{2}+{C}_{2}\right)}.$$
The formula to calculate the correlation coefficient is given in Eq. (17):
$$\mathrm{CC}= \frac{\sum_{M}\sum_{N}\left({I}_{M, N}-\overline{I } \right)\left({WI}_{M, N}-\overline{W }\right)}{\sqrt{\left(\sum_{M}\sum_{N}{\left({I}_{M, N}-\overline{I } \right)}^{2}\right)\left(\sum_{M}\sum_{N}{\left({WI}_{M, N}-\overline{W }\right)}^{2}\right)}},$$
where \(\overline{I }\) is the mean of the original image and \(\overline{W }\) is the mean of the watermarked image. The correlation coefficient value ranges from − 1 to 1. 'M' and 'N' are the number of rows and columns of the host image I(M, N) and watermarked image WI(M, N). The bit error rate is the ratio of no. of bits in error to that of total no. of bits.
The watermark Logo1 or Logo2 is embedded in all the bit planes of Fig. 4 images (Table4 in Supplementary material) at the specific ESP of I(M, N). Full Reference Quality Metric is measured, it is observed that using the proposed ESP algorithm and embedding methodology in the spatial domain gives the average PSNR of 46.21356 dB and 40.3028 dB between I and WI at the 5th-bit plane or position due to the logical binary logo of size 32 × 32 and 64 × 64, respectively. At the 4th bit plane, the PSNR is 52.53392 and 46.4826, respectively. The PSNR is above 45 dB even if the watermark binary logo is embedded in the 4th-bit plane. In all the 166 raw grayscale images (*.tiff and *.png) the 64 × 64 size binary watermark logo 'Logo2' is embedded in the 4th bit plane of the host image at the specific ESP, and whereas in the 44 *.jpg images, the watermark is embedded in the 6th-bit plane instead of 4th plane because in compressed images *.jpg, the embedded watermark in the 4th-bit plane could not be recovered properly.
Table 4 PSNR comparison of the proposed method with the other methods on Lena (512 × 512) image
No-reference quality metrics
In the No-reference quality metrics evaluation method, there are two algorithms, the first algorithm is Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and the second algorithm is Natural Image Quality Evaluator (NIQE) from [1]. BRISQUE model is trained with the set of images with the known distortion, whereas NIQE can evaluate the quality of the images with arbitrary distortion. BRISQUE is opinion-aware that is the subjective quality score that correlates with HVS but NIQE is opinion-unaware. Figure 5 shows the relationship between the average SSIM and the average BRISQUE score of all the 10 standard grayscale images.
Average SSIM vs. average BRISQUE score
The BRISQUE value is low for good images and high for the distorted images. The proposed ESP algorithm and embedding methodology satisfy the relationship between BRISQUE and SSIM. Table 1 gives the values of the Full-reference quality metric and No-reference quality metrics of all the standard sets of images.
In all the raw images, that is *.tiff, *.tif, and *.png the watermark Logo2 is embedded in the 4th bit plane of a specific ESP location. In the compressed images (*.jpg), Logo2 is embedded in every 4th, 5th, and 6th-bit plane to check the blind recovery of Logo2 at the receiver end. The average Full-reference quality metrics and average No-reference quality metrics of all the images between the 'I' and 'WI' are measured. According to the measurement in Table 1, as the MSE (mean square error) more the PSNR (peak signal-to-noise ratio) is less. Similarly, the BER (bit error rate) is high for high MSE.
Watermarked image analysis via Email, WhatApp, and Facebook
All the raw watermarked images (WIs) are zipped, attached in a folder, and sent to the personal email. The WIs from email (E_WI) are downloaded and extracted through the compressed folder. Performance metric analysis on the images WI and E_WI is done and it is found that the PSNR is infinite dB, SSIM is unity, BER is zero, and NC is unity. All the embedded logos are perfectly recovered from all the sets of images. Similarly, all the WIs are sent individually in WhatsApp and downloaded from WhatsApp the metric analysis between WI and W_WI is done the result values are as same as email.
In the same way, the WIs are uploaded to the authors' Facebook (FB) book account and downloaded individually. It is shown in Fig. 6. The observations made in the analysis are the following:
The input data set raw images are in the format *.tiff, *.tif, *.png, the output image downloaded from FB is *.jpg.
If the size of the WI to the FB is M x N (less than 1024 X 1024 pixels), occupied space say XX KB (below 1 MB) of memory in the drive, the F_WI size is as same as input image with fewer data storage (approximately XX/4 KB). That is the memory compression is carried in FB. The embedded logo is blindly recovered.
If the size of the WI to the FB is greater than say 1024 X 1024 pixels and occupying more than 1 MB of storage, then the F_WI size is different from that of the WI, and memory compression takes place it is approximately equal to XX/2.
In this proposed work, if the size of the image is altered to a greater extent, the logo could not be perfectly recovered.
Authentication is tested from the F_WI images, it is observed that the binary watermark logo embedded 4th bit of host image could be partially recoverable with very low SSIM and PSNR. Then logo is embedded in the 5th and 6th-bits, found that the logo from F_WI could be recovered blindly.
Analysis of watermarked image via E-mail, WhatsApp and Facebook
So for this proposed algorithm, 5th-bit logo embedding is advisable for the Facebook image authentication, because the 6th-bit offers very low PSNR (less than 40dB) between I and F_WI from Table 2.
It is analyzed by comparing the histograms of WI and F_WI. The histogram difference between the WI and F_WI is similar to exponential noise. It is submitted in Annexure (Table 5) since the numbers of tables are more in this work:
All the Logos from WI are recovered blindly. So the proposed work falls in the category of blind image watermarking.
Table 5 Confusion matrix to test the algorithm and watermarking scheme
Attacks and tests for image authentication
To test the proposed scheme for image authentication, the WI tampers with various attacks. The watermark logo is obtained from the tampered watermarked image without the reference of the host image (Blind Watermarking) and compared with the original logo as shown in Fig. 7. The tampered watermarked image obtained at the receiver section is processed with the ESP algorithm and the watermark logo is extracted. The tampered image can be recovered using the steps followed in Fig. 7.
Image authentication and recovery from tampered watermarked image
The tampered image is recovered using the host image as a reference, at first the tampered image and the host image are subtracted to get the residual tampered region. Next, the residual tampered image region coordinates are collected and mapped with the coordinates of the host image. The intensity values of the coordinates from the host image are collected and the intensity values are replaced in the residual region to recover the tampered region. The PSNR between the recovered watermarked image and the watermarked image is infinity. In the proposed watermarking scheme even if the watermarked image tampers more than 90%, detection of the watermark is done as shown in Fig. 8h. Figure 8a–o shows the extraction of the watermark logo from the tampered watermarked images with different percentages of tampering. Tampering on the watermarked image was tested on the standard Lena image of size 512 × 512 with the tampering rate from 0.59 to 95% the proposed work provides the PSNR of the watermark logo in the range of [81.2441 to 52.9996]. This shows that the proposed ESP algorithm, the watermarking embedding and recovering scheme offers very good image authentication.
Recovery of watermark from tampered watermarked images for image authentication
Imperceptibility in color images
RGB color image model analysis: Lena.png image is processed by embedding Logo2.jpg in individual color frames and the PSNR of the watermarked image and RGB host image is calculated and shown in Table 3. The proposed work performance is very excellent compared with the other existing methods. Table 3 is evident for its performance. The BRISQUE score for the good quality image should be a low value, the ESP algorithm and the watermarking scheme provide the best watermarking, and the relationship between the PSNR, SSIM, and BRISQUE also satisfies the relation. Another observation is that the embedding can be carried out up to the 4th-bit position and 5th-bit position in the POI where the watermark would be still imperceptible.
The restoration process of RGB color images
In the practical view, when the digital images are transmitted in the channel, images may lead to different attacks by the intruders it may be an intentional or unintentional attack. The proposed work needs to offer good robustness and security to the intruders. So to check the performance of the proposed algorithm 10 standard sets of raw images from (ImageProcessingPlace.com) are taken. Different types of attacks are performed on the watermarked image and the performance of the proposed ESP algorithm and the watermarking scheme was tested using the performance metrics SSIM, cross-correlation function (CC), mean square error (MSE), and bit error rate (BER). Figures 9, 10, 11, 12, 13 show the visual quality metrics on the recovered watermark before and after restoration. The Logo2 is embedded in all three color frames Red, Green, and Blue. The average SSIM of Logo2 before restoration that is after all attacks on the RGB color watermarked image is 0.979294 and after restoration is 0.995141. The restoration process enhances the correlation coefficient value to a greater extent. The restoration process was done as follows:
Attacks vs. visual metrics—correlation coefficient
Attacks vs. visual metrics—SSIM
Attacks vs. bit error rate
Attacks vs. mean square error
Attacks vs. accuracy of logo obtained after restoration
Input Host image: RGB Color Model image.
Step 1: Store all the three color frames separately as 'R', 'G', and 'B' of the host image.
Step 2: After watermarking and transmitting from the transmitter section in Fig. 1, store the watermarked image in the receiver section.
Step 3: Let 'R_A', 'G_A', and 'B_A' be the three color frames after an attack.
Step 4: Restoration process: Let 'Res_A', 'Res_G', and 'Res_B' be the restored frames.
Calculate Res_A = R – R_A; Res_G = G – G_A; and Res_B = Res_B – B.
Step 5: Using the restored frames Logo2 can be obtained with acceptable visual quality.
Using the above restoration process, the average correlation coefficient on all the attacks was increased by 0.450106 times.
The average mean square error and average bit error rate on all the attacks were decreased by 0.232843 and 953.7059.
The accuracy is calculated by using the formula from [18] after the restoration process, it is given in Eq. (18):
$$\mathrm{Accuracy}= \frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}},$$
where TP: true positive, TN: true negative, FP: false positive, and FN: false negative.
Figure 12 shows that after the restoration process, the watermark Logo2 can be obtained with an accuracy of more than 67%. So the proposed ESP algorithm, watermarking scheme, and restoration scheme work more accurately.
Watermark restoration
The watermark Logo2 was better restored after the restoration process.
In Fig. 14, watermark Logo2 after the attack and restored watermark Logo2 from the watermarked image is shown with all different kinds of attacks. The proposed watermarking scheme and the algorithm are immune to salt-and-pepper noise and sharpening filter that is before restoration.
Attacks and recovery of Logo2 before and after restoration
Comparison with the existing methods
Comparison of average PSNR values of 10 standard grayscale images with the other existing methods is presented in Table 4. The proposed ESP algorithm and watermarking methodology were analyzed by embedding a logical binary logo individually in every bit position of the host image. In Fig. 2 of Chuan Qin et al. in [3] at page 236, the watermark bits are used to recover MSB layers of the tampered image but watermark bits can authenticate the tampered images without recovering them since the purpose of watermarking is to provide image authentication.
In [3] tampering of Lena 512 × 512—the grayscale image at 6.84% leads to a PSNR of 44.16 dB and the recovered image has the PSNR of 46.37 dB which at the embedding mode (6, 2). In Table 4 the proposed method is compared with the particular references [3, 22, 31], and [32] because all these existing methods are purely based on spatial domain image watermarking technique as the proposed work, in which the PSNR metric is calculated depending upon the different significant bits i.e. from ISB to LSB. In a similar manner, the existing techniques calculated PSNR metric after restoration technique from tampering attack.
Comparison of statistical performance with the existing methods
In [33], the confusion matrix and its analysis were carried out to find the TPR (true positive rate) and FPR (false positive rate) to check the quality of the proposed work. Similarly, for the proposed work statistical performance was carried out. The confusion matrix was constructed and is shown in Table 5.
The formula for TPR and FPR is calculated from [18] and it is given in Eqs. (19) and (20):
$$\mathrm{True positive rate} \left(\mathrm{TPR}\right)= \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}},$$
$$\mathrm{False positive rate} \left(\mathrm{FPR}\right)= \frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}.$$
Using the above confusion matrix, Eqs. (19) and (20), Table 6 values are calculated for the proposed work and compared with the existing works.
Table 6 Comparison of statistical parameters with the existing methodologies
In Table 6 the statistical parameters TP, TN, FP FN, TPR and FPR are calculated for the proposed work to check the performance of the proposed watermark embedding and blind recovery using ESP algorithm with the other existing methods in [4], [33], [34] and [35]. In the existing methodologies similar statistical parameters were used to analyze the watermarking system, so these reference were chosen for comparison. From Table 6 it is evident that TPR is higher and FPR is lower for the proposed methodology than all other existing methods in [4], [33], [34] and [35]. So the proposed methodology is better than the other existing methodologies.
The proposed system is compared with Chun-Chi Lo et al. [4], and Zhaoxia Yin et al. [35]. It is found that the TPR value is high and the FPR value is very less compared with all the other existing methodologies, which confirms that the proposed ESP algorithm and the watermarking scheme is better than the other existing schemes. The receiver operating characteristics (ROC) curve is drawn in Fig. 15 for the 10 standard grayscale images and it is shown that the area under the ROC is nearly equal to unity. Similarly, raw RGB color images (17.png images and 10 raw.tif) were also analyzed with the ESP algorithm and the proposed watermarking methodology. The comparison of the proposed work is carried out with the standard Lena.tif (512 × 512 × 3) image and is shown in Table 7.
ROC curve using proposed system
Table 7 Comparison of color images with the existing methods
In Table 7, the proposed method is compared with [11, 13, 16, 22] and [34] existing methods because in these existing methods spatial domain watermarking on color images are focused. The proposed work has excellent image subjective quality than the other mentioned existing methods.
Jobin Abraham and Varghese Paul in [13] used masks to adjust the color variation after watermark embedding and the same Logo2 was used in embedding but the PSNR was very low compared with the proposed work and in this work, there is no need for such mask. The work is done in [11] and [16] also does not require such a mask, the proposed ESP algorithm and watermark embedding scheme provides very high PSNR at the LSB level. The PSNR 69.1941 dB is achieved when the Logo2 is embedded in the 4th bit plane of Lena image of size (512 × 512).
Blind and semi-fragile spatial domain watermarking is proposed for image authentication. The ESP algorithm is developed with fixed initial values and a secret key. The points of intercept, of GS and AS make the algorithm to be more secured where the confusion probability will be more. It is difficult for the intruder to identify the watermark presence in the host image since the logo is embedded appending the Diffie–Hellman key exchange protocol with the ESP algorithm. According to the proposed methodology embedding the logo is better in the 4th bit plane for the raw images and the 5th-bit plane for compressed images, where the authentication could be done perfectly. The proposed ESP spreads over the entire region of the host image, so the image authentication could be done from any corner of the image. The image authentication is tested by sending the WI in the E-mail, WhatsApp, and Facebook. It is observed that using this proposed method, image authentication in E-mail and WhatsApp can be done perfectly with 100% accuracy the SSIM of the original Logo and recovered Logo is unity. In FB, to obtain better image authentication, the embedding is done in the 5th-bit plane of the host image, because after downloading the WI from the FB the WI is in the compressed format. So the image authentication is done with more accuracy. Image tampering attacks are done on the WI with the tampering attack ranging from 0.59 to 95%, the proposed work provides the PSNR of the watermark logo in the range [81.2441 to 52.9996] dB. The proposed work is robust to JPEG compression from it can withstand up to 7:1 compression ratio as given in Table 2. Similarly, the work is robust to tampering, image cropping, salt-and-pepper noise, sharpening filter, semi-robust to Gaussian filtering, rotation (1°), and image resizing, and fragile to other geometrical attacks. Logo detection is 100% after averaging filter attack, tampering WI with 32 × 32 grayscale image attack, and sharpening filter attack. The average accuracy of logo detection is more than 65% for all the attacks. The proposed ESP algorithm and the watermark embedding scheme provide higher imperceptibility, security, and robustness to the raw and compressed grayscale images. Similar to the RGB images, it offers higher imperceptibility with a higher payload and the logo recovery rate is very high. Overall, the proposed methods provide the best image authentication compared with the other existing methods.
Future scope
The proposed work is handcrafted over different image data set of various sizes and it is working well for the grayscale images and also with the color images, but the downloaded colored watermarked images from the FB give poor authentication. After embedding the watermark logo in all three color frames separately, the PSNR between the host image and FB_WI is 45 dB which is acceptable. But the authentication is not satisfactory for the author, where the SSIM is very low. The downloaded watermarked image from FB is checked with the original WI image, it is observed that about 48.19% of pixels are affected in FB color image. So another spatial watermarking method is to be developed for color images to maintain authentication in FB. Deep Neural Networks (DNN)-based spatial domain watermarking with the handcrafted ESP algorithm is the future scope of this proposed work.
MALTAB Coding – Individually for Every Image Set.
Image Data Set (*.tiff, *.tif, *.png, *.jpg).
Example of the Proposed ESP Algorithm and Complete Quality Metric Analysis of images.
*.jpeg:
Joint Photo Experts Group—Image Format
*.jpg:
Joint Photo Graph—Image Format
*.png:
Portable Network Graphics—Image Format
*.tif:
Tagged Image File
*.tiff:
Tagged Image File Format
Arithmetic sequence
BER:
BRISQUE:
Blind/referenceless image spatial quality evaluator
Cross correlation
DHKEP:
Diffie–Hellman key exchange protocol
Euclidean space points—algorithm developed for the proposed work
FN:
False negative
FP:
GS:
Geometric sequence
I(M, N):
Input image, of size M—rows and N—columns
ISB:
Intermediate significant bit
Joint Photograph Experts Group
KB:
Kilo byte
LSB:
Least significant bit
MSB:
Most significant bit
MSE:
Mean square error
NIQE:
Natural image quality evaluator
POI:
Point of intercept
PSNR:
Peak signal-to-noise ratio
Red green and blue frames—from a color image
ROC:
Receiver operating characteristics
SSIM:
True negative
TP:
True positive
WI(M, N):
Watermarked image of size M, N
Anish M, Anush KM, Alan CB (2011) Blind/Referenceless Image Spatial quality Evaluator", a Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems, and Computers (ASILOMAR – 2011) – IEEE Xplore–2012, P 723–727. https://doi.org/10.1109/ACSSC.2011.6190099.
I.P. Christine, J.D. Edward, Digital Watermarking: Algorithm and Application. IEEE Signal Process. Magaz. (2001). https://doi.org/10.1109/79.939835
C. Qin, H. Wang, X. Zhang, X. Sun, Self-embedding fragile watermarking based on reference-data interleaving and adaptive selection of embedding mode. Inf. Sci. 373(2016), 233–250 (2016). https://doi.org/10.1016/j.ins.2016.09.001
C.-C. Lo, Hu. Yu-Chen, A novel reversible image authentication scheme for digital images. Signal Process. 98, 174–185 (2014). https://doi.org/10.1016/J.SIGPRO.2013.11.028
P.M. Dipti, M. Subhamoy, T.A. Scott, Spatial domain Watermarking of Multimedia object for Buyer Authentication. IEEE Trans. Multimed. 6(1), 1590–9210 (2004). https://doi.org/10.1109/TMM.2003.819759
Discrete Mathematics: An Open Introduction: http://discrete.openmathbooks.org/dmoi2/sec_seq-arithgeom.html
G. Voyatzis, I. Pitas, Protecting Digital-Image Copyrights: A Framework. January/February 19, 18–24 (1999). https://doi.org/10.1109/38.736465
C.L. Gerhurd, S. Iwan, L.L. Reginaid, Watermarking digital image, and video data – a state of the art overview. IEEE Signal Process. Magaz. (2000). https://doi.org/10.1109/79.879337
W. Hong, T.S. Chen, A novel data embedding method using adaptive pixel pair matching. IEEE Trans. Inform. Foren. Secur. 7(1), 176–184 (2012). https://doi.org/10.1109/TIFS.2011.2155062
H.-C. Chen, Y.-W. Chang, R.-C. Hwang, A watermarking algorithm. J. Inf. Optim. Sci. 32(3), 697–707 (2011). https://doi.org/10.1080/02522667.2011.10700081
T. Huynh-The, O. Banos, S. Lee, Y. Yoon, T. Le-Tien, Improving digital image watermarking by means of optimal channel selection. Expert Syst. Appl. 62(2016), 177–189 (2016). https://doi.org/10.1016/J.ESWA.2016.06.015
ImageProcessingPlace.com. http://www.imageprocessingplace.com/root_files_V3/image_databases.htm.
A. Jobin, P. Varghese (2019) An imperceptible spatial domain color image watermarking scheme. J. King Saud Univ. Computer Inf. Sci. 31(1), 125-133. DOI https://doi.org/10.1016/j.jksuci.2016.12.004.
S. Kostopoulos, A.M. Gilani, A.N. Skodras. Color image authentication based on a self-embedding technique. In: 14th International Conference on Digital Signal Processing Proceedings. (2002) DOI: https://doi.org/10.1109/ICDSP.2002.1028195.
L. Lei-Doa, G. Bao-Long, Localized image watermarking in spatial domain resistant to geometric attacks. Int. J. Electr. Commun. 63(2), 123–131 (2009). https://doi.org/10.1016/J.AEUE.2007.11.007
Liu, K.C., (2012). Color image watermarking for tamper-proofing and pattern-based recovery. IET Image Proc. 6 (5), 2012, pp. 445–454. DOI: https://doi.org/10.1049/IET-IPR.2011.0574
M. Kutter, F.A.P. Petitcolas, A fair benchmark for image watermarking system, Electronic Imaging '99, Security and Watermarking of Multimedia Contents, vol. 3657, Sans Jose, CA, USA, 25–27. Int. Soc. Opt. Eng. 1999, 1–14 (1999)
Machine Learning Crash Course. https://developers.google.com/machine-learning/crash-course/classification/accuracy
MathWorks. https://in.mathworks.com/help/images/ref/psnr.html#bt5uhgi-2_1
N. Nikolaidis, I. Pitas, Robust image watermarking in the spatial domain. Signal Process. 66(1998), 385–403 (1998)
Su. Qingtang, Y. Niu, A blind color image watermarking based on DC component in the spatial domain. Optik Int. J. Light Elect. Optics 124(23), 6255–6260 (2013). https://doi.org/10.1016/J.IJLEO.2013.05.013
R. Sinhal, I.A. Ansari, C.W. Ahn, Blind Image Watermarking for Localization and Restoration of Color Images. IEEE Access 8, 200157–200169 (2020). https://doi.org/10.1109/ACCESS.2020.3035428
S. Dadkhah, A.A. Manaf, Y. Hori, A.E. Hassanien, S. Sadeghi, An effective SVD – based image tampering detection and self-recovery using active watermarking. Signal Process. 1197–1210, 0923–5965 (2014). https://doi.org/10.1016/j.image.2014.09.001
B. Schneier, Applied Cryptography: Protocols, Algorithms, and Source Code in C, 2nd edn. (Wiley, New York, 2016), pp.513–516
S.A. Parah, J.A. Sheikh, U.I. Assad, G.M. Bhat, Realisation and robustness evaluation of a blind spatial domain watermarking technique. Int. J. Electron. 104(4), 659–672 (2017). https://doi.org/10.1080/00207217.2016.1242162
The Lab Book Pa. http://www.labbookpages.co.uk/software/imgProc/otsuThreshold.html
T. Kalker, Considerations on Watermarking security, IEEE Fourth Workshop on Multimedia Signal Processing, IEEE Xplore: 07th August 2002. IEEE 2001, 201–206 (2002). https://doi.org/10.1109/MMSP.2001.962734
UChicago REU 2013 Apprentice Program: Prof. Babai, Lecture 16, July 23rd, 2013, Axioms of Euclidean Space. https://home.ttic.edu/~madhurt/courses/reu2013/class723.pdf
H. Wang, Q. Su, A color image watermarking method combined QR decomposition and spatial domain. Multimed. Tools Appl. (2022). https://doi.org/10.1007/s11042-022-13064-y
Wolfram Math World. https://mathworld.wolfram.com/Diffie-HellmanProtocol.html
X. Zhang, S. Wang, Fragile watermarking with error-free restoration capability. IEEE Trans. Multimedia 10(8), 1490–1499 (2008). https://doi.org/10.1109/TMM.2008.2007334
X. Zhang, S. Wang, Z. Qjan, G. Feng, Reference Sharing Mechanism for watermark self-embedding. IEEE Trans. Image Process. 20(2), 485–495 (2011). https://doi.org/10.1109/tip.2010.2066981
Y. Peng, X. Niu, Fu. Lei, Z. Yin, Image authentication scheme based on reversible fragile watermarking with two images. J. Inf. Secur. Appl. 40, 236–246 (2018). https://doi.org/10.1016/J.JISA.2018.04.007
Z. Yuan, Q. Su, D. Liu et al., A blind image watermarking scheme combining spatial domain and frequency domain. Vis. Comput. 37, 1867–1881 (2021). https://doi.org/10.1007/s00371-020-01945-y
Y. Zhaxoia, N. Xuejing, Z. Zhili, T. Jin, L. Bin, Improved reversible image authentication scheme. Cogn. Comput. 8, 890–899 (2016). https://doi.org/10.1007/S12559-016-9408-6
W. Zhou, C.B. Alan, R.S. Hamid, P.S. Eero, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP
I extend my sincere thanks and gratitude to Dr. P. Suhail Parvaze, Ph. D, IIT Madras, and Research Consultant for Philips India, for his valuable suggestions and to improve the quality of the paper.
The proposed work was not applied to any of the organization, so there is no funding applicable for this work.
Department of Electronics and Communication Engineering, R.M.K. College of Engineering and Technology, Tamil Nadu, Chennai, India
Shaik Hedayath Basha
Department of Computer Science and Engineering, R.M.K. Engineering College, Tamil Nadu, Chennai, India
Jaison B
Contribution of the first author: designed, developed and MATLAB implementation of the three modules they are (i) new ESP algorithm (ii) DHKEP, and the watermarking system. (iii) Testing and analysis. Contribution of the second author: (i) discussed the literature survey and its outcomes (ii) suggested the test to send in Email, WhatsApp and Facebook. (iii) Watermark restoration process. Both authors read and approved the final manuscript.
Correspondence to Shaik Hedayath Basha.
Image authentication scheme [35] is proposed using Hilbert curve mapping which is complex instead of that the authors proposed simple ESP algorithm using geometric and arithmetic sequences with added security.
Compared with [35] our system is simple, semi-robust and secure.
In RGB watermarking, Jobin Abraham and Varghese Paul [13] used spatial mask to recover the distorted Red and Green Frames, In this work No such Spatial Masks are required.
The False Positive Rate of the proposed system is less compared with [3, 33] and [35] works.
Basha, S.H., B, J. A novel secured Euclidean space points algorithm for blind spatial image watermarking. J Image Video Proc. 2022, 21 (2022). https://doi.org/10.1186/s13640-022-00590-w
Euclidean space points (ESP)
Tampering
Watermarking | CommonCrawl |
\begin{document}
\global\long\def\spacingset#1{\global\long \global\long\def\baselinestretch{ } \small\normalsize}
\spacingset{1}
\title{\textbf{Structural nested mean models with irregularly spaced longitudinal observations}} \author{Shu Yang\thanks{Department of Statistics, North Carolina State University, North Carolina 27695, U.S.A. Email: [email protected]}}
\maketitle
{}
\begin{abstract} Structural Nested Mean Models (SNMMs) are useful for causal inference of treatment effects in longitudinal observational studies. Most existing works assume that the data are collected at pre-fixed time points for all subjects, which, however, is restrictive in practice. To deal with irregularly spaced observations, we assume a class of continuous-time SNMMs and a martingale condition of no unmeasured confounding (NUC) to identify the causal parameters. We develop the first semiparametric efficiency theory and locally efficient estimators for continuous-time SNMMs. This task is non-trivial due to the restrictions from the NUC assumption imposed on the SNMM parameter. In the presence of dependent censoring, we propose an inverse probability of censoring weighting estimator, which achieves a multiple robustness feature in that it is unbiased if either the model for the treatment process or the potential outcome mean function is correctly specified, regardless whether the censoring model is correctly specified. The new framework allows us to conduct causal analysis respecting the underlying continuous-time nature of the data processes. We estimate the effect of time to initiate highly active antiretroviral therapy on the CD4 count at year 2 from the observational Acute Infection and Early Disease Research Program database. \end{abstract} \noindent \textit{Keywords:} Causality; Counting process; Discretization; Multiple robustness; Martingale.
\noindent
{}
\spacingset{1.45}
\section{Introduction\label{sec:Introduction}}
\subsection{Causal inference methods with time-varying confounding}
The gold standard to draw causal inference of treatment effects is designing randomized experiments. However, randomized experiments are not always feasible due to practical constraints or ethical issues. Moreover, randomized experiments often have restrictive inclusion and exclusion criteria for patient enrollment, which limits the experiment results to be generalized to a larger real-world patient population. In these cases, observational studies are useful. In observational studies, confounding by indication poses a unique challenge to drawing valid causal inference of treatment effects. For example, sicker patients are more likely to take the active treatment, whereas healthier patients are more likely to take the control treatment. Consequently, it is not fair to compare the outcome from the treated group and the control group directly. Moreover, in longitudinal observational studies, confounding is likely to be time-dependent, in the sense that time-varying prognostic factors of the outcome affect the treatment assignment at each time, and thereby distort the association between treatment and outcome over time. In these cases, the traditional regression methods are biased even adjusting for the time-varying confounders \citep{robins1992g,hernan2000marginal,hernan2005structural,robins2009estimation,orellana2010dynamic_a}.
\subsection{A motivating application}
\textbf{}HAART (highly active antiretroviral therapy) is the standard of care as initial treatment for HIV. Our interest is motivated by the observational AIEDRP (Acute Infection and Early Disease Research Program) Core 01 study. This study established a cohort of HIV infected patients who have chosen to defer therapy but agree to be followed by this study. Deferring therapy may have an increased risk of permanent immune system damage but also a decreased risk of developing drug resistance. We aim to determine the effect of time to initiate HAART on disease progression for those patients who were diagnosed during acute or early HIV infection.
The outcome variable $Y$ is the CD4 count measured by the end of year $2$, for which lower counts indicate worse immunological function and disease progression. The inter-quantile range of the observed outcome in the AIEDRP database is from $443$ cells/mm$^{3}$ to $794$ cells/mm$^{3}$. In this database, $45\%$ of patients dropped out of the study before year 2, rendering $969$ patients with complete observations. Treatment initiation can only occur at follow-up visits and be determined by the discretion of physicians. By protocol, follow-up visits occur at weeks 2, 4, and 12, and then every 12 weeks thereafter, through week 96. However, as shown in Figure \ref{fig:irregular visit}, both the number and the timings of visits differ from one patient to the next. Among all patients, $36\%$ of patients did not initiate the treatment before year 2. The observed time to treatment initiation ranges continuously from $12$ days to $282$ days. The covariates include age at infection, gender, race, injection drug ever/never, and measured CD4 count and log viral load at follow-up visits.
To answer the question of interest using the AIEDRP database, two major concerns arise: first, the association between the treatment and outcome processes, i.e., time-varying confounding, that would obscure the causal effect of time to treatment initiation on the CD4 outcome at year 2; second, the observations are irregularly spaced.
\subsection{Structural nested mean models}
Structural Nested Models (SNMs; \citealp{robins1992g,robins1994correcting}) have been proposed to overcome the challenges for causal inference with time-varying confounding. We focus on a class of SNMs for continuous outcomes, namely, structural nested mean models (SNMMs). We discuss the extension to accommodate the binary outcome and the survival outcome in Section \ref{sec:Discussion}. Most existing works on SNMMs assume discrete-time data generating processes and require all subjects to be followed at the same pre-fixed time points, such as months. The literature of discrete-time SNMMs is fruitful; see, e.g., \citet{robins1998structural,robins2000sensitivity,almirall2010structural,chakraborty2013statistical,lok2012impact,lok2014opt,yang2015gof,yang2017sensitivity}. However, as in the AIEDRP database, observational data are often collected by user-initiated visits to clinics, hospitals and pharmacies, and data are more likely to be measured at irregularly spaced time points, which are not necessarily the same for all subjects. \textcolor{black}{Such data sources are now commonplace}, such as electronic health records, claims databases, disease data registries, and so on \citep{chatterjee2016constrained}.
The existing causal framework does not directly apply in such situations, requiring some (possibly arbitrary) discretization of the timeline \citep{neugebauer2010observational}. Such data pre-processing is quite standard and routine to practitioners, but leads to many unresolved problems: the treatment process depends transparently on the discretization, and therefore the interpretation of SNMMs depends on the definition of time interval \citep{robins1998correction}. \textcolor{black}{Moreover, after discretization, the data may need to be recreated at certain time points. Consider monthly data for example. If a subject had multiple visits within the same month, a common strategy is to take the average of the multiple measures as the observation for a given variable at that month. If a subject had no visit for a given month, one may need to impute the missing observation. Because of such distortions, the resulting data may not satisfy the standard causal consistency or no unmeasured confounding (NUC) assumptions. Consequently, model parameters may not have a causal interpretation.}
With irregularly spaced observations, it is more reasonable to assume that the data are generated from continuous-time processes. The work for causal models in continuous-time processes is somewhat sparse; exceptions include, e.g., \citet{robins1998correction,lok2004estimating,lok2008statistical,zhang2011causal,lok2017mimicking}. Extending the existing causal models with discrete-time processes to continuous-time processes is not trivial. An important challenge lies in time-dependent selection bias or confounding; e.g., in a health-related study, sicker patients may visit the doctor more frequently and are more likely to initiate the treatment. To overcome this challenge, following \citet{lok2008statistical}, we treat the observed treatment assignment process as a counting process $N_{T}(t)$ and assume a martingale condition of NUC on $N_{T}(t)$ to identify the SNMM parameters. Specifically, the NUC assumption entails that the jumping rate of $N_{T}(t)$ at $t$ does not depend on future potential outcomes, given the past treatment and covariate history up to $t$. A practical implication is that the covariate set should be rich enough to include all predictors of outcome and treatment, so that we can distinguish the treatment effect and the confounding effect. This assumption was also adopted in \citet{zhang2011causal} and \citet{yang2018modeling} to the settings where the effect of a treatment varies in continuous time. \citet{lok2017mimicking} provided a strategy of constructing unbiased estimating equations exploiting the relationship between the mimicking potential outcome process and the treatment process, which leads to a large class of estimators. While this strategy provides unbiased estimators, there is no guidance on how to choose an efficient estimator, and a naive choice can lead to inaccurate estimation.
\subsection{Semiparametric efficiency theory for continuous-time SNMMs}
We establish the new semiparametric efficiency theory for continuous-time SNMMs with irregularly spaced observations. Toward this end, we follow the geometric approach of \citet{bickel1993efficient} for the semiparametric model by characterizing the nuisance tangent space, its orthogonal complementary space, and lastly the semiparametric efficiency score for the SNMM parameter.
In our problem, the SNMM and the NUC assumption constitute the semiparametric model for the data. Given the close relationship of causal inference and missing data theory, it is worthwhile to discuss the connection of the semiparametric efficiency development in our paper and that in the missing data literature \citep{ding2017causal}. The NUC assumption for the treatment process plays the same role of the ignorability assumption for the missing data mechanism; therefore, our characterization of the nuisance tangent space for the treatment process follows the same as that for the continuous-time missing data process; see Section 5.2 of \citet{tsiatis2007semiparametric}. Besides this analogy, our theoretical task is \textcolor{black}{considerably more complicated}. Although the NUC assumption does not have any testable implications on the observed-data likelihood \citep{van2003unified,tan2006regression}, it imposes conditional independence restrictions on the treatment process and the counterfactual outcomes, given the past history, and hence restrictions for the SNMM parameter; see equation (\ref{eq:UNC2}). To circumvent this complication, we use the variable transformation technique and translate the restrictions into the new variables, which leads to the unconstrained observed data-likelihood. This step allows us to characterize the semiparametric efficiency score for the SNMM parameter and construct locally efficient estimators which achieve the semiparametric efficiency bound.
In the AIEDRP database, a large portion of patients dropped out of the study before year 2. To accommodate possible dependent censoring due to drop-out, we propose the inverse probability of censoring weighting (IPCW) estimator. We show that the proposed estimator is multiply robust in that it is consistent if either the potential outcome mean model is correctly specified or the model for the treatment process is correctly specified, regardless whether the censoring model is correctly specified. This amounts to six scenarios specified in Table \ref{tab:Multiply-Robustness} that guarantee consistent estimation, allowing some components in the union of the three models to be misspecified \citep{molina2017multiple,wang2018bounded}. Moreover, using the empirical process theory \citep{van1996weak}, we characterize the asymptotic property of the proposed estimator of the SNMM parameter under a parametric outcome mean model, and proportional hazards models for the treatment and censoring processes, allowing for multiply robust inference.
It is important to note that for regularly spaced observations, i.e. the data process can only take values at pre-fixed time points, the proposed estimator simplifies to the existing estimator with discrete-time data. For irregularly spaced observations, the new model and estimation framework allows us to deal with irregularly spaced observations directly and respects the nature of the underlying data generating mechanism. In contrast, the existing g-estimator requires data pre-processing and may introduce bias as demonstrated by simulation in Section \ref{sec:simulation}.
The rest of the article is organized as follows. In Section \ref{sec:discrete-SNMM}, we describe the SNMM with discrete-time processes, which serves as a building block to establishing the semiparametric efficiency theory for continuous-time processes and also enables us to establish their connection. In Section \ref{sec:continuous SNMM}, we present the semiparametric efficiency theory and locally efficient estimators for the continuous-time SNMM under the NUC assumption. Moreover, we propose an IPCW estimator to deal with dependent censoring due to premature dropout. In Section \ref{sec:Asymptotic-property}, we establish the asymptotic property of the estimator allowing for multiply robust inference. In Section \ref{sec:simulation}, we present simulation studies to investigate the performance of the proposed estimator compared to the existing competitor in finite samples. In Section \ref{sec:Application}, we apply the proposed estimator to estimate the effect of the time between HIV infection and initiation of HAART on the CD4 count at year 2 after infection in HIV-positive patients with early and acute infection. We conclude the article with discussions in Section \ref{sec:Discussion}.
\section{Structural nested mean models in discrete-time processes\label{sec:discrete-SNMM}}
\subsection{Setup, models, and assumptions}
\textcolor{black}{We first describe the SNMM in discrete-time processes. }We assume that $n$ subjects are followed at pre-fixed discrete times $t_{0}<\cdots<t_{K+1}$ with \textcolor{black}{$t_{0}=0$ and $t_{K+1}=\tau$.} We assume that the subjects are simple random samples from a larger population\textcolor{black}{{} \citep{rubin1978bayesian}. For simplicity, we suppress the subscript $i$ for subjects.} Let $L_{m}$ be a vector of covariates at time $t_{m}$. Let $A_{m}$ be the treatment indicator at $t_{m}$; i.e., $A_{m}=1$ if the subject was on treatment at $t_{m}$ and $A_{m}=0$ otherwise. We use the overline notation to denote a variable's history; e.g., $\overline{A}_{m}=(A_{0},\ldots,A_{m})$. We assume that once treatment is initiated, it is never discontinued, so each treatment regime corresponds to one treatment initiation time. Let $T$ be the time to treatment initiation, and let $T=\infty$ if the subject never initiated the treatment during the follow up. Let $\Gamma$ be the indicator that the treatment initiation time is less than $\tau$; i.e., $\Gamma=1$ if the subject initiated the treatment before $\tau$ and $\Gamma=0$ otherwise. \textcolor{black}{Let $Y^{(m)}$ be the potential outcome at the end of study $\tau$, had the subject initiated the treatment at $t_{m}$, }and let $Y^{(\infty)}$ \textcolor{black}{be the potential outcome at $\tau$ had the subject never initiated the treatment during the study follow up. }Let $V_{m}=(A_{m-1},L_{m})$ be the vector of treatment and covariate. Let $Y$ be the continuous outcome measured at $\tau$. Finally, the subject's full record is $F=(\overline{A}_{K},\overline{L}_{K},Y)$.
Following \citet{lok2012impact}, we describe the discrete-time SNMM for the treatment effect as follows.
\begin{assumption}[Discrete-time SNMM]\label{asump:disc-SNMMs}For $0\leq m\leq K$, the discrete-time SNMM is \begin{equation} \gamma_{m}(\overline{L}_{m})=\mathbb{E}\left\{ Y^{(m)}-Y^{(\infty)}\mid\overline{A}_{m-1}=\overline{0},\overline{L}_{m}\right\} =\gamma_{m}(\overline{L}_{m};\psi^{*});\label{eq:disc-SNMM} \end{equation} i.e., $\gamma_{m}(\overline{L}_{m};\psi)$ with $\psi\in$$\mathbb{R}^{p}$ is a correctly specified model for $\gamma_{m}(\overline{L}_{m})$ with the true parameter value $\psi^{*}.$
\end{assumption}
This model specifies the conditional expectation of the treatment contrasts $Y^{(m)}-Y^{(\infty)}$, given subject's observed treatment and covariates history $(\overline{A}_{m-1}=\overline{0},\overline{L}_{m})$. Intuitively, it states that the conditional mean of the outcome is shifted by $\gamma_{m}(\overline{L}_{m};\psi^{*})$ had the subject initiated the treatment at $t_{m}$ comparing to never starting. Therefore, the parameter $\psi^{*}$ has a causal interpretation. To help understand the model, consider $\gamma_{m}(\overline{L}_{m};\psi^{*})=(\psi_{1}^{*}+\psi_{2}^{*}t_{m})(\tau-t_{m})$, where $\psi^{*}=(\psi_{1}^{*},\psi_{2}^{*})$. This model entails that on average, the treatment would increase the mean of the outcome had the subject initiated the treatment at $t_{m}$ by $(\psi_{1}^{*}+\psi_{2}^{*}t_{m})(\tau-t_{m})$, and the magnitude of the increase depends on the duration of the treatment and the treatment initiation time. If $\psi_{1}^{*}+\psi_{2}^{*}t_{m}>0$ and $\psi_{2}^{*}<0$, it indicates the treatment is beneficial and earlier initiation is better.
We make the consistency assumption to link the observed data to the potential outcomes.
\begin{assumption}[Consistency]\label{asump:(Consistency)}The observed outcome is equal to the potential outcome under the actual treatment received; i.e., $Y=Y^{(T)}$.
\end{assumption}
If all potential outcomes were observed for each subject, we can directly compare these outcomes to infer the treatment effect; however, the fundamental problem in causal inference is that we can not observe all potential outcomes for a particular subject \citep{holland1986statistics}. In particular, we can observe $Y^{(\infty)}$ only for the subjects who did not initiate the treatment during the follow up. To overcome this issue, we define \begin{equation} H(\psi^{*})=Y-\gamma_{T}(\overline{L}_{T};\psi^{*}).\label{eq:def of H} \end{equation} Intuitively, $H(\psi^{*})$ subtracts the treatment effect $\gamma_{T}(\overline{L}_{T};\psi^{*})$ from the observed outcome $Y$, so it mimics the potential outcome $Y^{(\infty)}$ had the treatment never been initiated. We provide the formal statement as proved in \citet{lok2012impact}.
\begin{proposition}[Mimicking $Y^{(\infty)}$]\label{(Mimicking-counterfactual-outcomes)}Under Assumption \ref{asump:(Consistency)}, $H(\psi^{*})$ mimics $Y^{(\infty)}$, in the sense that \[ \mathbb{E}\left\{ H(\psi^{*})\mid\overline{A}_{m-1}=\overline{0},A_{m},\overline{L}_{m}\right\} =\mathbb{E}\left\{ Y^{(\infty)}\mid\overline{A}_{m-1}=\overline{0},A_{m},\overline{L}_{m}\right\} , \] for $0\leq m\leq K,$ where by convention, $\mathbb{E}\left(\cdot\mid\overline{A}_{-1}=\overline{0},A_{0},\overline{L}_{0}\right)=\mathbb{E}\left(\cdot\mid A_{0},\overline{L}_{0}\right)$.
\end{proposition}
We can not fit the SNMM by a regression model pooled over time, because the model involves the unobserved potential outcomes. Parameter identification requires the NUC assumption \citep{robins1992g}.
\begin{assumption}[No unmeasured confounding]\label{asump:(No-unmeasured-confounding)}$A_{m}\indep Y^{(\infty)}\mid(\overline{A}_{m-1},\overline{L}_{m})$ for $0\leq m\leq K$, where $\indep$ means ``is (conditionally) independent of'' \citep{dawid1979conditional}.
\end{assumption}
Assumption \ref{asump:(No-unmeasured-confounding)} holds if $(\overline{A}_{m-1},\overline{L}_{m})$ contains all prognostic factors for $Y^{(\infty)}$ that affect the treatment decision at $t_{m}$ for $0\leq m\leq K$. Under this assumption, the observational study can be conceptualized as a sequentially randomized experiment.
Proposition \ref{(Mimicking-counterfactual-outcomes)} implies that under Assumption \ref{asump:(No-unmeasured-confounding)}, for $0\leq m\leq K$, \textit{ \begin{equation} \mathbb{E}\left\{ H(\psi^{*})\mid\overline{A}_{m-1}=\overline{0},A_{m},\overline{L}_{m}\right\} =\mathbb{E}\left\{ H(\psi^{*})\mid\overline{A}_{m-1}=\overline{0},\overline{L}_{m}\right\} ;\label{eq:model-part1} \end{equation} }see, e.g., \citet{robins1992g,lok2004estimating,lok2012impact}. Equation (\ref{eq:model-part1}) also poses restrictions for $\psi^{*}$.
\subsection{Semiparametric efficiency theory}
The semiparametric model is characterized by the discrete-time SNMM (\ref{eq:disc-SNMM}) and restriction (\ref{eq:model-part1}), where the parameter of primary interest is $\psi^{*}$.
We first present the general semiparametric efficiency theory. Suppose the data consist of $n$ independent and identically distributed random variables $F_{1},\ldots,F_{n}$. We consider regular asymptotically linear (RAL) estimators $\widehat{\psi}_{n}$ for $\psi^{*}$ as \begin{equation} n^{1/2}(\widehat{\psi}_{n}-\psi^{*})=n^{1/2}\mathbb{P}_{n}\Phi(F)+o_{p}(1),\label{eq:linear} \end{equation} where $\mathbb{P}_{n}$ denotes the empirical mean; i.e., $\mathbb{P}_{n}\Phi(F)=n^{-1}\sum_{i=1}^{n}\Phi(F_{i})$, $\Phi(F)$ is called the influence function of $\widehat{\psi}_{n}$, with mean zero and finite and non-singular variance. Because $\psi^{*}$ is $p$-dimensional, $\Phi(F)$ is also $p$-dimensional. From (\ref{eq:linear}), the asymptotic variance of $n^{1/2}(\widehat{\psi}_{n}-\psi^{*})$ is equal to the variance of its influence function. As a result, to construct the efficient RAL estimator, it suffices to find the influence function with the smallest variance.
To do this, we take a geometric approach of \citet{bickel1993efficient}. Consider the Hilbert space $\mathcal{H}$ of all $p$-dimensional, mean-zero finite variance measurable functions of $F$, denoted by $h(F)$, equipped with the covariance inner product $<h_{1},h_{2}>=\mathbb{E}\left\{ h_{1}(F)^{\mathrm{\scriptscriptstyle T}}h_{2}(F)\right\} $
and the norm $||h||=\mathbb{E}\left\{ h(F)^{\mathrm{\scriptscriptstyle T}}h(F)\right\} ^{1/2}<\infty$. \citet{bickel1993efficient} stated that influence functions for RAL estimators lie in the orthogonal complement of the nuisance tangent space in $\mathcal{H}$. To motive the concept of the nuisance tangent space for a semiparametric model, we first consider a fully parametric model $f(F;\psi,\theta)$, where $\psi$ is a $p$-dimensional parameter of interest, and $\theta$ is an $q$-dimensional nuisance parameter. The score vectors of $\psi$ and $\theta$ are $S_{\psi}(F)=\partial\log f(F;\psi,\theta^{*})/\partial\psi$ and $S_{\theta}(F)=\partial\log f(F;\psi^{*},\theta)/\partial\theta$, both evaluated at the true values $(\psi^{*},\theta^{*})$, respectively. For a parametric model, the nuisance tangent space $\Lambda$ is the linear space in $\mathcal{H}$ spanned by the $q$-dimensional nuisance score vector $S_{\theta}(F)$. For semiparametric models, where the nuisance parameter is infinite-dimensional, the nuisance tangent space $\Lambda$ is defined as the mean squared closure of all parametric sub-model nuisance tangent spaces. The efficient score $S_{\mathrm{eff}}(F)$ for the semiparametric model is the projection of $S_{\psi}$ onto the orthogonal complementary space of the nuisance tangent space $\Lambda^{\bot}$; i.e., $S_{\mathrm{eff}}(F)=\prod\left(S_{\psi}\mid\Lambda^{\bot}\right)$, where $\prod$ is the projection operator in the Hilbert space. The efficient influence function is $\Phi_{\mathrm{eff}}(F)=\left[\mathbb{E}\left\{ S_{\mathrm{eff}}(F)S_{\mathrm{eff}}(F)^{\mathrm{\scriptscriptstyle T}}\right\} \right]^{-1}S_{\mathrm{eff}}(F)$, with the variance $\left[\mathbb{E}\left\{ S_{\mathrm{eff}}(F)S_{\mathrm{eff}}(F)^{\mathrm{\scriptscriptstyle T}}\right\} \right]^{-1}$, which achieves the semiparametric efficiency bound \citep{bickel1993efficient}. From this geometric point of view, to derive efficient semiparametric estimators for $\psi^{*}$, it suffices to find the efficient score $S_{\mathrm{eff}}(F)$.
\subsection{Influence functions}
The key step is to characterize the space where the influence functions of RAL estimators belong to, i.e., the orthogonal complementary space of the nuisance tangent space $\Lambda^{\bot}$. Following \citet{robins1994correcting}, Proposition \ref{Prop: discr-nuisance} characterizes all influence functions of RAL estimators for $\psi^{*}$.
\begin{proposition}\label{Prop: discr-nuisance}For the semiparametric model characterized by the discrete-time SNMM (\ref{eq:disc-SNMM}) and restriction (\ref{eq:model-part1}), the influence function space for $\psi^{*}$ is \begin{equation} \Lambda^{\bot}=\left\{ G(\psi^{*};F,c):\ \text{for all }c(\overline{V}_{m})\in\mathbb{\mathbb{R}}^{p}\right\} ,\label{eq:Lambda prop} \end{equation} where $\overline{V}_{m}=(\overline{A}_{m-1},\overline{L}_{m})$ and \[ G(\psi;F,c)=\sum_{m=1}^{K}c(\overline{V}_{m})\{A_{m}-{P}(A_{m}=1\mid\overline{V}_{m})\}[H(\psi)-\mathbb{E}\{H(\psi)\mid\overline{V}_{m}\}], \] indexed by $c$. To make the notation accurate, the abbreviation $c$ in $G(\psi;F,c)$ means $c(\overline{V}_{m})$.
\end{proposition}
Although \citet{robins1994correcting} provided this result, the technical proofs were dense and less accessible to general readers. In the future, we will write a technical report that provides details to guide general readers in deriving the semiparametric efficiency theory in similar contexts.
The semiparametric efficiency score, i.e. the most efficient one among the class in (\ref{eq:Lambda prop}), often does not have a closed-form expression. We now make a working assumption, which extends restriction (\ref{eq:model-part1}) and allows us to derive an analytical expression of the semiparametric efficient score of $\psi^{*}.$
\begin{assumption}[Homoscedasticity]\label{assump: Homo }For $0\leq m\leq K$, \textit{ }$\mathrm{var}\{H(\psi^{*})\mid\overline{A}_{m},\overline{L}_{m}\}=\mathrm{var}\left\{ H(\psi^{*})\mid\overline{V}_{m}\right\} $.
\end{assumption}
\begin{proposition}[Discrete-time semiparametric efficient score]\label{Thm: Theorem9}Consider $\gamma_{m}(\overline{L}_{m};\psi^{*})=(\psi_{1}^{*}+\psi_{2}^{*}t_{m})(\tau-t_{m})$. Suppose Assumptions \ref{asump:(Consistency)}\textendash \ref{assump: Homo } holds. The semiparametric efficient score of $\psi^{*}$ is \begin{equation} S_{\mathrm{eff}}(\psi^{*};F)=G(\psi^{*};F,c_{\mathrm{eff}}),\label{eq:semipar ee 1} \end{equation} where \[ c_{\mathrm{eff}}(\overline{V}_{m})=\left(\begin{array}{c} (\tau-t_{m})-\mathbb{E}\left\{ \mathrm{dur}(t_{m})\mid\overline{A}_{m}=\overline{0},\overline{L}_{m}\right\} \\ t_{m}(\tau-t_{m})-\mathbb{E}\left\{ T\times\mathrm{dur}(t_{m})\mid\overline{A}_{m}=\overline{0},\overline{L}_{m}\right\} \end{array}\right)\left[\mathrm{var}\left\{ H(\psi^{*})\mid\overline{V}_{m}\right\} \right]^{-1}, \]
and $\mathrm{dur}(t_{m})=\sum_{l=m}^{K-1}A_{l}(t_{l+1}-t_{l})$ is the observed treatment duration from $t_{m}$ to $\tau$.
\end{proposition}
\section{SNMMs in continuous-time processes\label{sec:continuous SNMM}}
\subsection{Setup, models, and assumptions\label{subsec:Setup-cont}}
We now extend the discrete-time SNMM in Section \ref{sec:discrete-SNMM} to the continuous-time SNMM. We assume that the variables can change their values at any real time between $0$ and $\tau$. We assume that all subjects are followed until $\tau$ and consider censoring in Section \ref{sec:Censoring}.
Each subject has multiple visit times. Let $N(t)$ be the counting process for the visit times. Let $L_{t}$ be the multidimensional covariate process. In contrast to the setting with discrete-time data processes, $L_{t}$ is a vector of covariates at $t$ and additional information of the past visit times up to but not including $t$. This is because the past visit pattern, e.g., the number and frequency of the visit times may be important confounders for the treatment and outcome processes. Let $A_{t}$ be the binary treatment process. In our motivating application, the treatment can only be initiated at the follow-up visits; i.e., if $A_{t}=1$, then $N(t)=1$. We will model the treatment process directly, although one can model first the visit time process and then treatment assignment at the visit times. Define $Y^{(t)}$ as the potential outcome at $\tau$ had the subject initiated the treatment at $t$, and define $Y^{(\infty)}$ as the potential outcome at $\tau$ had the subject never initiated the treatment before $\tau$. Let $Y$ be the continuous outcome measured at $\tau$. For the regularization purpose, we assume that the processes are Càdlàg processes, i.e., the processes are right continuous with left limits. Let $V_{t}=(A_{t-},L_{t})$ be the combined treatment and covariate process, where $A_{t-}$ is the available treatment information right before $t$. We use the overline notation to denote a variable's observed history; e.g., $\overline{A}_{t}=\{A_{u}:0\leq u\leq t,\mathrm{d} N(u)=1\}$. The subject's full record is $F=\{\overline{V}_{\tau},(Y^{(t)}:0\leq t\leq\tau)\}.$ The observed data for a subject through $\tau$ is $O=(\overline{V}_{\tau},Y)$.
We assume the continuous-time SNMM as follows.
\begin{assumption}[Continuous-time SNMM]\label{cont-SNMMs}For $0\leq t\leq\tau$, the continuous-time SNMM is \begin{equation} \gamma_{t}(\overline{L}_{t})=\mathbb{E}\left\{ Y^{(t)}-Y^{(\infty)}\mid\overline{L}_{t},T\geq t\right\} =\gamma_{t}(\overline{L}_{t};\psi^{*});\label{eq:cont-SNMM} \end{equation} i.e., $\gamma_{t}(\overline{L}_{t};\psi)$ with $\psi\in$$\mathbb{R}^{p}$ is a correctly specified model for $\gamma_{t}(\overline{L}_{t})$ with the true parameter value $\psi^{*}.$ Moreover, $Y^{(t)}\sim Y^{(\infty)}+\gamma_{t}(\overline{L}_{t};\psi^{*})$ given $(\overline{L}_{t},T\geq t)$, where $\sim$ means ``is (conditionally) distributed as''.
\end{assumption}
In the continuous-time SNMM (\ref{eq:cont-SNMM}), $\psi^{*}$ can be interpreted as the treatment effect rate for the outcome. For the continuous-time SNMM, we assume that given $(\overline{L}_{t},T\geq t)$, the treatment effect only changes the location of the distribution of the outcome but not on other aspects of the distribution such as the variance. This assumption is stronger than the discrete-time SNMM in Assumption \ref{asump:disc-SNMMs}. But this assumption is weaker than the rank-preserving assumption of $Y^{(t)}=Y^{(\infty)}+\gamma_{t}(\overline{L}_{t};\psi^{*})$ considered in \citet{zhang2011causal}. It has been argued that by mapping the potential outcomes directly rather than between distributions, rank preserving models are easier to understand and communicate \citep{vansteelandt2014structural}. However, the rank preservation may be restrictive in practice, because it implies that for two subjects $i$ and $j$ with the same treatment and covariate history, $Y_{i}>Y_{j}$ must imply $Y_{i}^{(\infty)}>Y_{j}^{(\infty)}$. We relax this restriction by imposing a distributional assumption.
The continuous-time SNMM (\ref{eq:cont-SNMM}) can model the treatment effect flexibly. For example, the two-parameter model $\gamma_{t}(\overline{L}_{t};\psi^{*})=(\psi_{1}^{*}+\psi_{2}^{*}t)(\tau-t)I(t\leq\tau)$ entails that the treatment effect depends on the treatment initiation time and the duration of the treatment. To allow for treatment effect modifiers, we can specify an elaborated treatment effect model including time-varying covariates, such as viral load in the blood. For example, one can consider $\gamma_{t}(\overline{L}_{t};\psi^{*})=(\psi_{1}^{*}+\psi_{2}^{*}t+\psi_{3}^{*}\text{lvl}_{t})(\tau-t)I(t\leq\tau)$, where lvl$_{t}$ is the log viral load at $t$. We discuss effect modification and model selection in Section \ref{sec:Discussion}.
To link the observed outcome to the potential outcomes, we assume that $Y=Y^{(T)}$. Define the mimicking outcome for $Y^{(\infty)}$ as $H(\psi^{*})=Y-\gamma_{T}(\overline{L}_{T};\psi^{*})$. By Assumption \ref{cont-SNMMs}, $H(\psi^{*})\sim Y^{(\infty)},$ given $(\overline{L}_{t},T\geq t)$.
An important issue with data from user-initiated visits and treatment initiation is the potential selection bias and confounding, e.g., sicker patients may visit the doctor more frequently and are likely to initiate treatment earlier. To overcome this issue, we impose the NUC assumption on the treatment process \citep{yang2018modeling}.
\begin{assumption}[No unmeasured confounding]\label{asumption:CT-UNC}The hazard of treatment initiation is \begin{eqnarray} \lambda_{T}(t\mid F) & = & \lim_{h\rightarrow0}h^{-1}P(t\leq T<t+h,\Gamma=1\mid\overline{V}_{t},T\geq t,Y^{(\infty)})\nonumber \\
& = & \lim_{h\rightarrow0}h^{-1}P(t\leq T<t+h,\Gamma=1\mid\overline{V}_{t},T\geq t)=\lambda_{T}\left(t\mid\overline{V}_{t}\right).\label{eq:UNC} \end{eqnarray}
\end{assumption}
Assumption \ref{asumption:CT-UNC} implies that the hazard of treatment initiation at $t$ depends only on the observed treatment and covariate history $\overline{V}_{t}$ but not on the future observations and potential outcomes. This assumption holds if the set of historical covariates contains all prognostic factors for the outcome that affect the decision of patient visiting the doctor and initiating treatment. As an example, in the motivating application, time-invariant characteristics such as age at infection, gender, race and whether ever used injection drugs are important confounders for the treatment and outcome processes. Moreover, time-varying CD4 and viral load are important confounders. Often, poor disease progression necessitates more frequent follow-up visits and earlier treatment initiation.
The treatment process $A_{t}$ can also be represented in terms of the counting process $N_{T}(t)$ and the at-risk process $Y_{T}(t)$ of observing treatment initiation. Let $\sigma(V_{t})$ be the $\sigma$-field generated by $V_{t}$, and let $\sigma(\overline{V}_{t})$ be the $\sigma$-field generated by $\cup_{u\leq t}\sigma(V_{u})$. Under the standard regularity conditions for the counting process, $M_{T}(t)=N_{T}(t)-\int_{0}^{t}\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u$ is a martingale with respect to the filtration $\sigma(\overline{V}_{t})$. Assumption \ref{asumption:CT-UNC} entails that the jumping rate of $N_{T}(t)$ at $t$ does not depend on $Y^{(\infty)}$, given $\overline{V}_{t}$. Because $H(\psi^{*})$ mimics $Y^{(\infty)}$ in the sense that it has the same distribution as $Y^{(\infty)}$ given $\overline{V}_{t}$, Assumption \ref{asumption:CT-UNC} also implies that the jumping rate of $N_{T}(t)$ at $t$ does not depend on $H(\psi^{*})$, given $\overline{V}_{t}$. To be formal, we show in the supplementary material that \begin{equation} \lambda_{T}\{t\mid\overline{V}_{t},H(\psi^{*})\}=\lambda_{T}(t\mid\overline{V}_{t}).\label{eq:UNC2} \end{equation} Therefore, under the standard regularity conditions, $M_{T}(t)$ is a martingale with respect to the filtration $\sigma\{\overline{V}_{t},H(\psi^{*})\}$. \citet{lok2008statistical} imposed this martingale condition to formulate the NUC assumption for the treatment process.
\subsection{Semiparametric efficiency score}
To estimate the causal parameter precisely, we establish the new semiparametric efficiency theory for the continuous-time SNMMs. We defer all proofs to the supplementary material.
\begin{theorem}\label{Thm: cont-nuisance}For the semiparametric model characterized by the continuous-time SNMM (\ref{eq:cont-SNMM}) and Assumption \ref{asumption:CT-UNC}, the influence function space for $\psi^{*}$ is \[ \Lambda^{\bot}=\left\{ G(\psi^{*};F,c):\ \text{for all }c(\overline{V}_{u})\in\mathbb{\mathbb{R}}^{p}\right\} , \] where \begin{equation} G(\psi;F,c)=\int_{0}^{\tau}c(\overline{V}_{u})\left[H(\psi)-\mathbb{E}\left\{ H(\psi)\mid\overline{V}_{u},T\geq u\right\} \right]Y_{T}(u)\mathrm{d} M_{T}(u).\label{eq:G} \end{equation}
\end{theorem}
The semiparametric efficiency score for $\psi^{*}$ is $S_{\mathrm{eff}}(\psi^{*};F)=\prod\{S(\psi^{*};F)\mid\Lambda^{\bot}\}$. To derive $S_{\mathrm{eff}}(\psi^{*};F)$, we calculate the projection of any $B=B(F)$ onto $\Lambda^{\bot}.$
\begin{theorem}\label{Thm:projection}For any $B=B(F)$, the projection of $B$ onto $\Lambda^{\bot}$ is \begin{multline} \prod\left(B\mid\Lambda^{\bot}\right)=\int_{0}^{\tau}\left[\mathbb{E}\left\{ B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\right\} -\mathbb{E}\left\{ B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \right]\\ \times\left[\mathrm{var}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \right]^{-1}\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \right]\mathrm{d} M_{T}(u),\label{eq:projection} \end{multline} where $\dot{H}_{u}(\psi^{*})=H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$.
\end{theorem}
Considering $B=S(\psi^{*};F)$ in Theorem \ref{Thm:projection}, we can derive the semiparametric efficient score for $\psi^{*}$.
\begin{theorem}[Continuous-time semiparametric efficient score]\label{Thm: efficient score}For the semiparametric model characterized by the continuous-time SNMM (\ref{eq:cont-SNMM}) and Assumption \ref{asumption:CT-UNC}, the semiparametric efficient score of $\psi^{*}$ is \begin{equation} S_{\mathrm{eff}}(\psi^{*};F)=G(\psi^{*};F,c_{\mathrm{eff}}),\label{eq:semipar score} \end{equation} where $G(\psi;F,c)$ is defined in (\ref{eq:G}), and \begin{multline} c_{\mathrm{eff}}(\overline{V}_{u})=[\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T=u\}\\ -\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T\geq u\}]\times[\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]^{-1}.\label{eq:c_eff} \end{multline}
\end{theorem}
To illustrate the theorem, we provide the explicit expression of the semiparametric efficient score using an example.
\begin{example}Consider $\gamma_{t}(\overline{L}_{t};\psi)=(\psi_{1}+\psi_{2}t)(\tau-t)I(t\leq\tau)$. Suppose Assumption \ref{asumption:CT-UNC} holds. The semiparametric efficient score of $\psi^{*}$ is $S_{\mathrm{eff}}(\psi^{*};F)=G(\psi^{*};F,c_{\mathrm{eff}})$, where \begin{multline} c_{\mathrm{eff}}(\overline{V}_{u})=\left(\begin{array}{c} (\tau-u)I(u\leq\tau)-\mathbb{E}\{(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\}\\ u(\tau-u)I(u\leq\tau)-\mathbb{E}\{T(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\} \end{array}\right)\\ \times[\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]^{-1}.\label{eq:c_eff_eg} \end{multline}
\end{example}
\begin{remark}
The proposed continuous-time semiparametric efficient score contains the discrete-time semiparametric efficient score as a special case. If the processes take observations at discrete times $\{t_{0},\ldots,t_{K}\}$, then (i) the conditioning event $(\overline{V}_{u},T\geq u)$ at $t_{m}$ is the same as $(\overline{A}_{m}=\overline{0},\overline{L}_{m})$, (ii) $M_{T}(t)=N_{T}(t)-\int_{0}^{t}\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u$ at $t=t_{m}$ becomes $A_{m}-{P}(A_{m}=1\mid\overline{A}_{m-1}=\overline{0},\overline{L}_{m})$, and $\mathbb{E}\{\partial\dot{H}_{t}(\psi^{*})/\partial\psi\mid\overline{V}_{t},T=t\}$ at $t=t_{m}$ becomes \[ \mathbb{E}\{\partial\dot{H}_{m}(\psi^{*})/\partial\psi\mid\overline{V}_{m},T=t_{m}\}=-\left(\begin{array}{c} (\tau-t_{m})-\mathbb{E}\left\{ \mathrm{dur}(t_{m})\mid\overline{A}_{m}=\overline{0},\overline{L}_{m}\right\} \\ t_{m}(\tau-t_{m})-\mathbb{E}\left\{ T\times\mathrm{dur}(t_{m})\mid\overline{A}_{m}=\overline{0},\overline{L}_{m}\right\} \end{array}\right). \] Therefore, the continuous-time semiparametric efficient score (\ref{eq:semipar score}) reduces to the discrete-time semiparametric efficient score (\ref{eq:semipar ee 1}).
\end{remark}
\subsection{Doubly robust and locally efficient estimators}
We now construct a general class of estimators based on the estimating function $G(\psi^{*};F,c)$. Because $\mathbb{E}\{G(\psi^{*};F,c)\}=0$, we obtain the estimator of $\psi^{*}$ by solving \begin{equation} \mathbb{P}_{n}\left\{ G(\psi;F,c)\right\} =0.\label{eq:ee4} \end{equation} In particular, the estimating equation (\ref{eq:ee4}) with $c_{\mathrm{eff}}$ provides the semiparametric efficient estimator of $\psi^{*}$.
In (\ref{eq:ee4}), we assume that the model for the treatment process and $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} $ are known. In practice, they are often unknown and must be modeled and estimated from the data. We posit a proportional hazards model with time-dependent covariates for the treatment process; i.e., \begin{eqnarray} \lambda_{T}\left(t\mid\overline{V}_{t};\alpha\right) & = & \lambda_{T,0}(t)\exp\left\{ \alpha^{\mathrm{\scriptscriptstyle T}}W_{T}(t,\overline{V}_{t})\right\} ,\label{eq:ph-V} \end{eqnarray} where $\lambda_{T,0}(t)$ is an unknown baseline hazard function, $W_{T}(t,\overline{V}_{t})$ is a pre-specified function of $t$ and $\overline{V}_{t}$, and $\alpha$ is a vector of unknown parameters. Under Assumption \ref{asumption:CT-UNC}, we can estimate $\lambda_{T,0}(t)$ and $\alpha$ from the standard software such as ``coxph'' in R (R Development Core Team, 2012) \nocite{R:2010}. To estimate $\alpha$, fit the time-dependent proportional hazards model to the data $\{(\overline{V}_{T_{i},i},T_{i},\Gamma_{i}):i=1,\ldots,n\}$ treating the treatment initiation as the failure event. Once we obtain $\widehat{\alpha},$ we can estimate the cumulative baseline hazard, $\lambda_{T,0}(t)\mathrm{d} t$ by \[ \widehat{\lambda}_{T,0}(t)\mathrm{d} t=\frac{\sum_{i=1}^{n}\mathrm{d} N_{T,i}(t)}{\sum_{i=1}^{n}\exp\left\{ \widehat{\alpha}^{\mathrm{\scriptscriptstyle T}}W_{T}(t,\overline{V}_{t,i})\right\} Y_{T_{i}}(t)}. \] Then, we obtain $\widehat{\lambda}_{T}(u\mid\overline{V}_{u})=\exp\left\{ \widehat{\alpha}^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} \widehat{\lambda}_{T,0}(u)$ and $\widehat{M}_{T}(t)=N_{T}(t)-\int_{0}^{t}\widehat{\lambda}_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u$.
We also posit a working model $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta\right\} $, such as a linear regression model, where $\beta$ is a vector of unknown parameters.
The estimating equation for $\psi^{*}$ achieves the double robustness or double protection \citep{rotnitzky2015double}.
\begin{theorem}[Double robustness]\label{Thm:2-dr}Under the continuous-time SNMM (\ref{eq:cont-SNMM}) and Assumption \ref{asumption:CT-UNC}, the proposed estimator $\widehat{\psi}$ solving the estimating equation (\ref{eq:ee4}) is doubly robust in that it is unbiased if either the model for the treatment process is correctly specified, or the potential outcome mean model $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta\right\} $ is correctly specified, but not necessarily both.
\end{theorem}
The choice of $c$ does not affect the double robustness but the efficiency of the resulting estimator. For efficiency consideration, we consider $c_{\mathrm{eff}}$ in (\ref{eq:c_eff}). The resulting estimator solving the estimating equation (\ref{eq:ee4}) with $c_{\mathrm{eff}}$ is locally efficient, in the sense that it achieves the semiparametric efficiency bound if the working models for the treatment process and the potential outcome mean are correctly specified. Because $c_{\mathrm{eff}}$ depends on the unknown distribution, we require additional models for $\mathbb{E}\{(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\}$ and $\mathbb{E}\{T(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\}$ to approximate $c_{\mathrm{eff}}.$ For example, we can approximate $\mathbb{E}\{(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\}$ by $P(T\leq\tau\mid\overline{V}_{u},T\geq u)\times\mathbb{E}\{\tau-T\mid\overline{V}_{u},u\leq T\leq\tau\}$ and each approximated by (logistic) linear models. For $\mathrm{\mathrm{var}}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$, we consider the following options: (i) assume $\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$ to be a constant, and (ii) approximate $\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$ by the sample variance of $H(\widehat{\psi}_{p})$ among subjects with $T\geq u$, where $\widehat{\psi}_{p}$ is a preliminary estimator. We compare the two options via simulation. Although option (ii) provides a slight efficiency gain in estimation, for ease of implementation we recommend option (i). Option (i) is common in the generalized estimating equation framework. From here on, we use this option for $c$ and suppress the dependence on $c$ for estimating functions.
\subsection{Censoring\label{sec:Censoring}}
As in the AIEDRP study, in most longitudinal observational studies, subjects may drop out the study prematurely before the end of study, which renders the data censored at the time of dropout. If the censoring mechanism depends on time-varying prognostic factors, e.g. sicker patients drop out of the study with a higher probability than healthier patients, the patients remaining in the study is a biased sample of the full population. We now introduce $C$ to be the time to censoring. Let $X=\min(C,\tau)$ be time to censoring or the end of the study, whichever came first. Let $\delta_{C}=I(C\geq\tau)$ be the indicator of not censoring before $\tau$. The observed data is $O=(X,\overline{V}_{X},\delta_{C},\delta_{C}Y)$.
In the presence of censoring, the estimating equation (\ref{eq:ee4}) is not feasible. We consider inverse probability of censoring weighting (IPCW; \citealp{robins1993information}). We assume a dependent censoring mechanism as follows.
\begin{assumption}[Dependent censoring]\label{asp:NUC-1}The hazard of censoring is \begin{eqnarray} \lambda_{C}(t\mid F,T>t) & = & \lim_{h\rightarrow0}h^{-1}P(t\leq C<t+h\mid F,T>t,C\geq t)\nonumber \\
& = & \lim_{h\rightarrow0}h^{-1}P(t\leq C<t+h\mid\overline{V}_{t},T>t,C\geq t)=\lambda_{C}\left(t\mid\overline{V}_{t}\right).\label{eq:censoring} \end{eqnarray}
\end{assumption}
Assumption \ref{asp:NUC-1} states that $\lambda_{C}(t\mid F,T>t)$ depends only on the past treatment and covariate history until $t$, but not on the future variables and potential outcomes. This assumption holds if the set of historical covariates contains all prognostic factors for the outcome that affect the possibility of loss to follow up at $t$. Under this assumption, the missing data due to censoring are missing at random \citep{rubin1976inference}.
We discuss the implication of Assumption \ref{asp:NUC-1} on estimation of the treatment process model. Under Assumption \ref{asp:NUC-1}, the hazard of treatment initiation in (\ref{eq:UNC}) is equal to $\lim_{h\rightarrow0}h^{-1}P(t\leq T<t+h,\Gamma=1\mid\overline{V}_{t},T>t,C\geq t)$. Redefining $T$ to be the time to treatment initiation, or censoring, or the end of the study, whichever came first, (\ref{eq:UNC}) can be estimated by conditioning on $T\geq t$ with the new definition of $T.$
From $\lambda_{C}\left(t\mid\overline{V}_{t}\right)$, we define $K_{C}\left(t\mid\overline{V}_{t}\right)=\exp\left\{ -\int_{0}^{t}\lambda_{C}\left(u\mid\overline{V}_{u}\right)\mathrm{d} u\right\} ,$ which is the probability of the subject not being censored before $t$. For regularity, we impose a positivity condition for $K_{C}\left(t\mid\overline{V}_{t}\right)$.
\begin{assumption}[Positivity]\label{asp:positivity}There exists a constant $\delta$ such that with probability one, $K_{C}\left(t\mid\overline{V}_{t}\right)\geq\delta>0$ for $t$ in the support of $T$.
\end{assumption}
Following \citet{rotnitzky2007analysis}, we obtain the IPCW estimator $\widehat{\psi}$ as the solution to the following equation: \begin{equation} \mathbb{P}_{n}\left\{ \frac{\delta_{C}}{K_{C}(\tau\mid\overline{V}_{\tau})}G(\psi;F)\right\} =0.\label{eq:IPCW ee} \end{equation}
In (\ref{eq:IPCW ee}), we assume that $K_{C}(t\mid\overline{V}_{t})$ is known. In practice, $K_{C}(t\mid\overline{V}_{t})$ is often unknown and must be modeled and estimated from the data. To facilitate estimation, we posit a proportional hazards model for the censoring process with time-dependent covariates; i.e., \begin{equation} \lambda_{C}(t\mid\overline{V}_{t})=\lambda_{C,0}(t)\exp\{\eta{}^{\mathrm{\scriptscriptstyle T}}W_{C}(t,\overline{V}_{t})\},\label{eq:ph-C} \end{equation} where $\lambda_{C,0}(t)$ is an unknown baseline hazard function for censoring, $W_{C}(t,\overline{V}_{t})$ is a pre-specified function of $t$ and $\overline{V}_{t}$, and $\eta$ is a vector of unknown parameters. Under Assumption \ref{asp:NUC-1}, we can estimate $\lambda_{C,0}(t)$ and $\alpha$ from the standard software such as ``coxph'' in R. To estimate $\eta$, fit the time-dependent proportional hazards model to the data $\{(\overline{V}_{X_{i},i},X_{i},\delta_{C,i}):i=1,\ldots,n\}$ treating the censoring as the failure event. Once we obtain $\widehat{\eta},$ we can estimate $\lambda_{C,0}(t)\mathrm{d} t$ by \[ \widehat{\lambda}_{C,0}(t)\mathrm{d} t=\frac{\sum_{i=1}^{n}\mathrm{d} N_{C,i}(t)}{\sum_{i=1}^{n}\exp\left\{ \widehat{\eta}^{\mathrm{\scriptscriptstyle T}}W_{C}(t,\overline{V}_{t,i})\right\} Y_{C_{i}}(t)}, \] where $N_{C}(t)=I(C\leq t,\delta_{C}=0)$ and $Y_{C}(t)=I(C\geq t)$ are the counting process and the at-risk process of observing censoring. Then, we estimate $K_{C}\left(t\mid\overline{V}_{t}\right)$ by \begin{eqnarray*} \widehat{K}_{C}\left(t\mid\overline{V}_{t}\right) & = & \exp\left[-\int_{0}^{t}\exp\{\widehat{\eta}^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\}\widehat{\lambda}_{C,0}(u)\mathrm{d} u\right]\\
& = & \prod_{0\leq u\leq t}\left[1-\exp\left\{ \widehat{\eta}^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} \widehat{\lambda}_{C,0}\left(u\right)\mathrm{d} u\right]. \end{eqnarray*} Then, we obtain the estimator $\widehat{\psi}$ of $\psi$ by solving (\ref{eq:IPCW ee}) with unknown quantities replaced by their estimates.
In the literature, augmented IPCW estimators have been developed to improve efficiency and robustness over IPCW estimators; see, e.g., \citet{rotnitzky2007analysis,rotnitzky2009analysis} for survival data and \citet{lok2016cumincidentfunction} for competing risks data. However, the typical efficiency gain is little in practice at the expense of additional complexity in computation. More importantly, we show in the next section that the proposed IPCW estimator already has the multiple robustness property against possible model misspecification.
\section{Multiple robustness and asymptotic distribution\label{sec:Asymptotic-property}}
Because the proposed estimator depends on nuisance parameter estimation, we summarize the following nuisance models: (i) $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta\}$ indexed by $\beta$; (ii) the proportional hazards model for the treatment process (\ref{eq:ph-V}), denoted by $M_{T}$; and (iii) the proportional hazards model for the censoring process (\ref{eq:ph-C}), denoted by $K_{C}$. Let $\widehat{\beta}$, $\widehat{M}_{T}$, and $\widehat{K}_{C}$ be the estimates of $\beta$, $M_{T}$, and $K_{C}$ under the specified parametric and semiparametric models. Denote the probability limits of $\widehat{\beta}$, $\widehat{M}_{T}$, and $\widehat{K}_{C}$ as $\beta^{*}$, $M_{T}^{*}$, and $K_{C}^{*}$, respectively. If the outcome model is correctly specified, $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\}=\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$; if the model for the treatment process is correctly specified, $M_{T}^{*}=M_{T}$; and if the model for the censoring process is correctly specified, $K_{C}^{*}=K_{C}$. To reflect that the estimating function depends on the nuisance parameters, we denote \begin{eqnarray*} G(\psi,\beta,M_{T};F) & = & \int c(\overline{V}_{u})\left[H(\psi)-\mathbb{E}\left\{ H(\psi)\mid\overline{V}_{u},T\geq u;\beta\right\} \right]\mathrm{d} M_{T}(u),\\ \Phi(\psi,\beta,M_{T},K_{C};F) & = & \frac{\delta_{C}G(\psi,\beta,\lambda_{T};F)}{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}. \end{eqnarray*} Then, the proposed estimator $\widehat{\psi}$ solves \begin{equation} \mathbb{P}_{n}\left\{ \Phi(\psi,\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\right\} =0,\label{eq:IPCW2-1} \end{equation} for $\psi$, which achieves the multiple robustness or multiple protection \citep{molina2017multiple}.
\begin{theorem}[Multiple robustness]\label{Thm:3-mr}Under the continuous-time SNMM (\ref{eq:cont-SNMM}) and Assumption \ref{asumption:CT-UNC}, the proposed estimator $\widehat{\psi}$ solving estimating equation (\ref{eq:IPCW2-1}) is multiply robust in that it is unbiased under all scenarios specified in Table \ref{tab:Multiply-Robustness}.
\begin{table}[h] \protect\protect\protect\protect\protect\caption{\label{tab:Multiply-Robustness}Multiply Robustness of the Proposed Estimator}
\centering{} \begin{tabular}{llccccccccccc} \hline \multicolumn{13}{l}{The proposed estimator $\widehat{\psi}$ is unbiased if}\tabularnewline \hline (i) Model for $H(\psi^{*})$ & & $\checked$ & & $\checked$ & & $\times$ & & $\checked$ & & $\checked$ & & $\times$\tabularnewline (ii) Model for the treatment process $M_{T}$ & & $\checked$ & & $\times$ & & $\checked$ & & $\checked$ & & $\times$ & & $\checked$\tabularnewline (iii) Model for the censoring process $K_{C}$ & & $\checked$ & & $\checked$ & & $\checked$ & & $\times$ & & $\times$ & & $\times$\tabularnewline \hline \end{tabular}
$\checked$ (is correctly specified), $\times$ (is misspecified) \end{table}
\end{theorem}
It is important to establish the asymptotic property of $\widehat{\psi}$ under the multiple robustness condition, which allows for multiply robust inference of $\psi^{*}$. Let $P$ denote the true data generating distribution of $F$, and for any $g(F)$, let $\mathbb{P}\{g(F)\}=\int g(f)\mathrm{d} P(f)$ and let $\mathbb{G}_{n}=n^{1/2}(\mathbb{P}_{n}-\mathbb{P})$. We define \begin{eqnarray*} J_{1}(\beta) & = & \mathbb{P}\left\{ \Phi(\psi^{*},\beta,M_{T}^{*},K_{C}^{*};F)\right\} ,\\ J_{2}(M_{T}) & = & \mathbb{P}\left\{ \Phi(\psi^{*},\beta^{*},M_{T},K_{C}^{*};F)\right\} ,\\ J_{3}(K_{C}) & = & \mathbb{P}\left\{ \Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C};F)\right\} , \end{eqnarray*} and \[ J(\beta,M_{T},K_{C})=\mathbb{P}\left\{ \Phi(\psi^{*},\beta,M_{T},K_{C};F)\right\} . \]
Similar to \citet{yang2015gof}, we impose the regularity conditions from the empirical process literature \citep{van1996weak}.
\begin{assumption}\label{asump:donsker} \begin{description} \item [{(i)}] $\Phi(\psi,\beta,M_{T},K_{C};F)$ and $\partial\Phi(\psi,\beta,M_{T},K_{C};F)/\partial\psi$ are $P$-Donsker classes; i.e., \begin{eqnarray*} \mathbb{G}_{n}\{\Phi(\widehat{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\} & = & \mathbb{G}_{n}\{\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\}+o_{p}(1),\\ \mathbb{G}_{n}\left\{ \frac{\partial\Phi(\widehat{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)}{\partial\psi}\right\} & = & \mathbb{G}_{n}\left\{ \frac{\partial\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)}{\partial\psi}\right\} +o_{p}(1). \end{eqnarray*} \item [{(ii)}] Assume that \begin{eqnarray*}
\mathbb{P}\left\{ ||\Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)-\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)||\right\} & = & o_{p}(1),\\
\mathbb{P}\left\{ ||\frac{\partial}{\partial\psi}\Phi(\widehat{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)-\frac{\partial}{\partial\psi}\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)||\right\} & = & o_{p}(1). \end{eqnarray*} \item [{(iii)}] $A(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*})=\mathbb{P}\left\{ \partial\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)/\partial\psi\right\} $ is invertible. \item [{(iv)}] Assume that \begin{multline*} J(\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C})-J(\beta^{*},M_{T}^{*},K_{C}^{*})=J_{1}(\widehat{\beta})-J_{1}(\beta^{*})+J_{2}(\widehat{M}_{T})-J_{2}(M_{T}^{*})\\ +J_{3}(\widehat{K}_{C})-J_{3}(K_{C}^{*})+o_{p}(n^{-1/2}), \end{multline*} and that $J_{1}(\widehat{\beta})$, $J_{2}(\widehat{M}_{T})$, and $J_{3}(\widehat{K}_{C})$ are regular asymptotically linear with influence functions $\Phi_{1}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)$, $\Phi_{2}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)$, and $\Phi_{3}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)$, respectively. \end{description} \end{assumption}
We discuss the implications of these conditions. First, the $P$-Donsker class condition requires that the nuisance models should not be too complex. Under Assumption \ref{asp:positivity} for the censoring process, Assumption \ref{asump:donsker} (i) is a standard condition for the empirical processes. We refer the interested readers to Section 4.2 of \citet{kennedy2016semiparametric} for a thorough discussion of Donsker classes of functions. Second, Assumption \ref{asump:donsker} (ii) states that $\widehat{\beta}$, $\widehat{M}_{T}$, and $\widehat{K}_{C}$ have probability limits $\beta^{*}$, $M_{T}^{*}$, and $K_{C}^{*}$, and that the multiple robustness condition in Theorem \ref{Thm:3-mr} holds. Third, Assumption \ref{asump:donsker} (iv) holds for smooth functionals of parametric or semiparametric efficient estimators under specified models. Therefore, this assumption would hold under mild regularity conditions if $\widehat{\beta}$, $\widehat{M}_{T}$, and $\widehat{K}_{C}$ are the parametric and semiparametric maximum likelihood estimators under specified models.
We present the asymptotic property of the proposed estimator $\widehat{\psi}$ solving equation (\ref{eq:IPCW2-1}).
\begin{theorem}\label{thm:4}Under the continuous-time SNMM (\ref{eq:cont-SNMM}) and Assumptions \ref{asumption:CT-UNC}, \ref{asp:positivity} and \ref{asump:donsker}, $\widehat{\psi}$ is consistent for $\psi^{*}$ and is asymptotically linear with the influence function \[ \widetilde{\Phi}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)=\left\{ A(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*})\right\} ^{-1}\widetilde{B}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F), \] where $A(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*})$ is defined in Assumption \ref{asump:donsker} (iii), and \begin{eqnarray} \widetilde{B}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F) & = & \Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+\Phi_{1}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\nonumber \\
& & +\Phi_{2}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+\Phi_{3}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F).\label{eq:influence fctn} \end{eqnarray}
\end{theorem}
Theorem \ref{thm:4} allows for variance estimation of $\widehat{\psi}$. If the nuisance models are correctly specified, we have \begin{eqnarray} \widetilde{B}(\psi^{*},\beta^{*},M_{T},K_{C};F) & = & \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)-\mathbb{E}\left\{ \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)S_{\alpha}^{\mathrm{\scriptscriptstyle T}}\right\} \mathbb{E}\left(S_{\alpha}S_{\alpha}^{\mathrm{\scriptscriptstyle T}}\right)^{-1}S_{\alpha}\nonumber \\
& & -\mathbb{E}\left\{ \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)S_{\eta}^{\mathrm{\scriptscriptstyle T}}\right\} \mathbb{E}\left(S_{\eta}S_{\eta}^{\mathrm{\scriptscriptstyle T}}\right)^{-1}S_{\eta}\nonumber \\
& & +\int\frac{\mathbb{E}\left[G(\psi^{*},\beta^{*},M_{T};F)\exp\left\{ \alpha^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} \delta_{C}/K_{C}(\tau\mid\overline{V}_{\tau})\right]}{\mathbb{E}\left[\exp\left\{ \alpha^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} Y_{T}(u)\right]}\mathrm{d} M_{C}(u)\nonumber \\
& & +\int\frac{\mathbb{E}\left[G(\psi^{*},\beta^{*},M_{T};F)\exp\left\{ \eta^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} \delta_{C}/K_{C}(\tau\mid\overline{V}_{\tau})\right]}{\mathbb{E}\left[\exp\left\{ \eta^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} Y_{C}(u)\right]}\mathrm{d} M_{T}(u),\label{eq:tilde-J-1} \end{eqnarray} where $S_{\alpha}$ and $S_{\eta}$ are the scores of the partial likelihood functions of $\alpha$ and $\eta$, respectively; see (\ref{eq:S_alpha}) and (\ref{eq:S_eta}) in the supplementary material.
Then, we obtain the variance estimator of $\widehat{\psi}$ as the empirical variance of the individual influence function with the unknown parameters replaced by their estimates. Under the multiple robustness condition if some nuisance models are misspecified, it is difficult to characterize the influence function $\widetilde{\Phi}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)$. We suggest estimating the asymptotic variance of $\widehat{\psi}$ by nonparametric bootstrap \citep{efron1979}. The consistency of the bootstrap is guaranteed by the regularity and asymptotic properties of $\widehat{\psi}$ in Theorem \ref{thm:4}.
\section{Simulation study\label{sec:simulation}}
We now evaluate the finite-sample performance of the proposed estimator on simulated datasets with two objectives. First, we assess the double robustness and efficiency of the proposed estimator based on the semiparametric efficiency score, compared to some preliminary estimator. Second, to demonstrate the impact of data discretization as commonly done in practice, we include the g-estimator applied to the pre-processed data.
We simulate $1,000$ datasets under two settings with and without censoring. In Setting I, we generate two covariates, one time-independent
($L_{TI}$) and one time-dependent ($L_{TD}$). The time-independent covariate $L_{TI}$ is generated from a Bernoulli distribution with mean equal to $0.55$. The time-dependent covariate is $L_{TD,t}=l_{1}\times I(0\leq t<0.5)+l_{2}\times I(0.5\leq t<1)+l_{3}\times I(1\leq t<1.5)+l_{4}\times I(1.5\leq t\leq2)$, where $(l_{1},l_{2},l_{3},l_{4})^{\mathrm{\scriptscriptstyle T}}$ is a $1\times4$ row vector generated from a multivariate normal distribution with mean equal to $(0,0,0,0)$ and covariance equal to $0.7^{|i-j|}$ for$i,j=1,\ldots,4$. We assume that the time-dependent variable remains constant between measurements. The maximum follow up time is $\tau=2$ (in year). We generate the time to treatment initiation $T$ with the hazard rate $\lambda_{T}(t\mid\overline{V}_{t})=\lambda_{T,0}(t)\exp($$\alpha_{1}$ $\times L_{TI}+\alpha_{2}L_{TD,t})$ with $\lambda_{T,0}(t)=\lambda_{T,0}=0.4$, $\alpha_{1}=0.15$, and $\alpha_{2}=0.8$. We generate $T$ according to the time-dependent model sequentially. This is because the hazard of treatment initiation in the time interval from $t_{1}=0$ to $t_{2}=0.5$ differs from the hazard of treatment initiation in the next interval and so on; see the supplementary material for details. We let $Y^{(\infty)}=L_{TD,\tau}$ be the potential outcome had the subject never initiated the treatment before $\tau$. The observed outcome is $Y=Y^{(\infty)}+\gamma_{T}(\overline{V}_{T};\psi^{*})$, where $\gamma_{t}(\overline{V}_{t};\psi^{*})=(\psi_{1}^{*}+\psi_{2}^{*}t)(\tau-t)I(t\leq\tau)$ with $\psi_{1}^{*}=15$ and $\psi_{2}^{*}=-1$. \begin{table} \caption{\label{tab:results1}Simulation results in Setting I without censoring based on $1,000$ simulated datasets: the Monte Carlo bias, standard error, root mean square error of the estimators, and coverage rate of $95\%$ confidence intervals.}
\centering{} \begin{tabular}{cclcccccccc} \hline
& & & \multicolumn{2}{c}{Bias ($\times10^{2}$)} & \multicolumn{2}{c}{SE ($\times10^{2}$)} & \multicolumn{2}{c}{rMSE ($\times10^{2}$)} & \multicolumn{2}{c}{CR ($\times10^{2}$)}\tabularnewline $n$ & & Method & $\psi_{1}^{*}$ & $\psi_{2}^{*}$ & $\psi_{1}^{*}$ & $\psi_{2}^{*}$ & $\psi_{1}^{*}$ & $\psi_{2}^{*}$ & $\psi_{1}^{*}$ & $\psi_{2}^{*}$\tabularnewline \hline \multicolumn{11}{c}{Scenario (i) with $M_{T}$ ($\checked$)}\tabularnewline \hline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 0.3 & -0.1 & 5.3 & 9.6 & 5.3 & 9.6 & 95.0 & 94.0\tabularnewline \multirow{2}{*}{$1000$} & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & 0.2 & 0.1 & 5.0 & 8.9 & 5.0 & 8.9 & 95.4 & 94.0\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & 0.2 & 0.1 & 4.9 & 8.7 & 4.9 & 8.7 & 95.3 & 94.4\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 28.6 & 34.5 & 6.0 & 10.5 & 29.3 & 36.1 & 0.0 & 7.2\tabularnewline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 0.2 & -0.1 & 3.4 & 6.2 & 3.4 & 6.2 & 95.9 & 96.0\tabularnewline $2000$ & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & 0.1 & 0.1 & 3.3 & 5.8 & 3.3 & 5.8 & 95.2 & 95.4\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & 0.1 & 0.1 & 3.2 & 5.6 & 3.2 & 5.6 & 95.1 & 95.6\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.8 & 37.1 & 3.9 & 6.7 & 28.1 & 37.7 & 0.0 & 0.0\tabularnewline \hline \multicolumn{11}{c}{Scenario (ii) with $M_{T}$ ($\times$)}\tabularnewline \hline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 7.4 & 20.2 & 5.2 & 9.9 & 9.1 & 22.5 & 68.8 & 44.6\tabularnewline \multirow{2}{*}{$1000$} & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & 0.5 & 0.5 & 5.1 & 9.1 & 5.1 & 9.1 & 95.4 & 94.0\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & 0.5 & 0.4 & 5.1 & 9.0 & 5.1 & 9.0 & 95.0 & 95.4\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.7 & 38.6 & 5.9 & 10.2 & 28.4 & 40.0 & 0.2 & 3.4\tabularnewline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 7.4 & 20.1 & 3.5 & 6.4 & 8.1 & 21.1 & 46.2 & 17.2\tabularnewline $2000$ & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & 0.4 & 0.3 & 3.4 & 5.9 & 3.4 & 5.9 & 95.0 & 95.4\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & 0.3 & 0.3 & 3.4 & 5.8 & 3.4 & 5.8 & 95.3 & 95.6\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.3 & 39.5 & 3.9 & 6.7 & 27.6 & 40.0 & 0.0 & 0.0\tabularnewline \hline \end{tabular}$\checked$ (is correctly specified), $\times$ (is misspecified) \end{table}
We consider the following estimators with details for the nuisance models and their estimation presented in the supplementary material: \begin{description} \item [{(a)}] A preliminary estimator $\widehat{\psi}_{p}$ solves the estimating equation (\ref{eq:eq4}) with $E\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}\equiv0$ and $c(\overline{V}_{u})=(1,u)^{\mathrm{\scriptscriptstyle T}}(\tau-u)I(u\leq\tau)-\mathbb{E}\{(1,T)^{\mathrm{\scriptscriptstyle T}}(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\}.$ Therefore, $\widehat{\psi}_{p}$ corresponds to the proposed estimator with a misspecified model for $E\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$. \item [{(b)}] The proposed estimator $\widehat{\psi}_{\mathrm{cont},1}$ solves the estimating equation (\ref{eq:eq4}), where we replace $\mathrm{var}\{H(\psi)\mid\overline{V}_{u},T\geq u\}$ by a constant. \item [{(c)}] The proposed estimator $\widehat{\psi}_{\mathrm{cont},2}$ solves the estimating equation (\ref{eq:eq4}), where we obtain $\widehat{\mathrm{var}}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$ by the empirical variance of $H(\widehat{\psi}_{p})-\mathbb{E}\{H(\widehat{\psi}_{p})\mid\overline{V}_{u},T\geq u;\widehat{\beta}\}$, restricted to subjects with $T\geq u$. \item [{(d)}] The g-estimator $\widehat{\psi}_{\mathrm{disc},g}$ in Section \ref{sec:discrete-SNMM} applies to the monthly data after discretization with $24$ equally-spaced time points from $0$ to $\tau$. For $m\geq1$, at the $m$th time point $t_{m}$, $L_{m}$ is the the average of $L_{t}$ from $t_{m-1}\leq t\leq t_{m}$, $A_{m}$ is the indicator of whether the treatment is initiated before $t_{m}$, and the time to treatment initiation $T$ is $t_{m}$ if $A_{m}=1$ and $\overline{A}_{m-1}=\overline{0}$. The g-estimator solves the estimating equation based on (\ref{eq:semipar ee 1}), where the nuisance models are estimated similar to what are used for $\widehat{\psi}_{\mathrm{cont},1}$ but with the re-shaped data. \end{description} To investigate the double robustness in Theorem \ref{Thm:2-dr}, we consider two models for estimating $M_{T}$: the correctly specified proportional hazards model with both time-independent and time-dependent covariates; and the misspecified proportional hazards model with only time-independent covariate. For all estimators, we use the bootstrap for variance estimation with the bootstrap size $100$.
Table \ref{tab:results1} shows the simulation results in Setting I. Under Scenario (i) when the model for the treatment process is correctly specified, $\widehat{\psi}_{p}$, $\widehat{\psi}_{\mathrm{cont},1}$ and $\widehat{\psi}_{\mathrm{cont},2}$ show small biases. As a result, the coverage rates are close to the nominal level. Under Scenario (ii) when the model for the treatment process is misspecified, $\widehat{\psi}_{p}$ shows large biases, but $\widehat{\psi}_{\mathrm{cont},1}$ and $\widehat{\psi}_{\mathrm{cont},2}$ still show small biases. Moreover, the root mean squared errors of $\widehat{\psi}_{\mathrm{cont},1}$ and $\widehat{\psi}_{\mathrm{cont},2}$ decrease as the sample size increases. This confirms the double robustness of the proposed estimators. The proposed estimator $\widehat{\psi}_{\mathrm{cont},2}$ with $\widehat{\mathrm{var}}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$ produces slightly smaller standard errors; however, this reduction is not large. In practice, we recommend $\widehat{\psi}_{\mathrm{cont},1}$ because of its simpler implementation than $\widehat{\psi}_{\mathrm{cont},2}$. We note large biases in the g-estimator, which illustrates the consequence of data pre-processing for the subsequent analysis.
\begin{table} \caption{\label{tab:results2}Simulation results in Setting II with censoring based on $1,000$ simulated datasets: the Monte Carlo bias, standard error, root mean square error of the estimators, and coverage rate of $95\%$ confidence intervals.}
\centering{} \begin{tabular}{cclcccccccc} \hline
& & & \multicolumn{2}{c}{Bias ($\times10^{2}$)} & \multicolumn{2}{c}{SE ($\times10^{2}$)} & \multicolumn{2}{c}{rMSE ($\times10^{2}$)} & \multicolumn{2}{c}{CR ($\times10^{2}$)}\tabularnewline $n$ & & Method & $\psi_{1}^{*}$ & $\psi_{2}^{*}$ & $\psi_{1}^{*}$ & $\psi_{2}^{*}$ & $\psi_{1}^{*}$ & $\psi_{2}^{*}$ & $\psi_{1}^{*}$ & $\psi_{2}^{*}$\tabularnewline \hline \multicolumn{11}{c}{Scenario (i) with $M_{T}$ ($\checked$) and $K_{C}$ $(\checked)$}\tabularnewline \hline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & -0.1 & 0.2 & 5.8 & 10.9 & 5.8 & 10.9 & 95.2 & 94.8\tabularnewline \multirow{2}{*}{$1000$} & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.1 & 0.5 & 5.7 & 10.3 & 5.7 & 10.3 & 95.4 & 95.4\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.1 & 0.5 & 5.6 & 10.2 & 5.6 & 10.2 & 94.5 & 95.5\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.7 & 32.5 & 6.7 & 12.0 & 28.5 & 34.7 & 2.4 & 24.6\tabularnewline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & -0.3 & 0.3 & 4.2 & 7.9 & 4.2 & 7.9 & 94.6 & 94.8\tabularnewline $2000$ & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.3 & 0.4 & 4.2 & 7.5 & 4.2 & 7.5 & 95.0 & 94.8\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.3 & 0.4 & 4.2 & 7.4 & 4.2 & 7.4 & 95.1 & 95.0\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.5 & 32.9 & 4.7 & 8.2 & 27.9 & 33.9 & 0.0 & 1.6\tabularnewline \hline \multicolumn{11}{c}{Scenario (ii) with $M_{T}$ $(\times)$ and $K_{C}$ $(\checked)$}\tabularnewline \hline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 7.0 & 21.0 & 6.0 & 11.4 & 9.2 & 23.9 & 82.2 & 57.8\tabularnewline \multirow{2}{*}{$1000$} & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.1 & 1.1 & 5.7 & 10.3 & 5.7 & 10.4 & 95.0 & 95.2\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.1 & 1.1 & 5.5 & 10.1 & 5.5 & 10.2 & 95.2 & 95.3\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.4 & 33.4 & 6.7 & 12.0 & 28.2 & 35.5 & 3.2 & 22.2\tabularnewline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 7.0 & 21.2 & 4.2 & 8.2 & 8.2 & 22.8 & 63.6 & 29.2\tabularnewline $2000$ & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.3 & 1.0 & 4.1 & 7.5 & 4.2 & 7.6 & 94.4 & 95.2\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.3 & 1.1 & 4.0 & 7.4 & 4.1 & 7.5 & 94.7 & 95.4\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.2 & 33.7 & 4.7 & 8.1 & 27.6 & 34.7 & 0.0 & 1.2\tabularnewline \hline \multicolumn{11}{c}{Scenario (iii) with $M_{T}$ $(\checked)$ and $K_{C}$ $(\times)$}\tabularnewline \hline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & -0.1 & 0.2 & 5.8 & 11.0 & 5.8 & 11.0 & 95.0 & 95.0\tabularnewline \multirow{2}{*}{$1000$} & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.1 & 0.4 & 5.7 & 10.4 & 5.7 & 10.4 & 95.2 & 95.6\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.1 & 0.3 & 5.7 & 10.4 & 5.7 & 10.4 & 95.0 & 95.3\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.7 & 32.3 & 6.7 & 12.1 & 28.5 & 34.5 & 1.8 & 26.2\tabularnewline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & -0.3 & 0.4 & 4.2 & 7.9 & 4.3 & 7.9 & 95.0 & 94.8\tabularnewline $2000$ & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.3 & 0.4 & 4.2 & 7.5 & 4.2 & 7.6 & 95.2 & 95.4\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.3 & 0.4 & 4.1 & 7.2 & 4.1 & 7.2 & 95.4 & 95.2\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.4 & 32.6 & 4.7 & 8.2 & 27.8 & 33.7 & 0.0 & 1.8\tabularnewline \hline \multicolumn{11}{c}{Scenario (iv) with $M_{T}$ ($\times$) and $K_{C}$ $(\times)$}\tabularnewline \hline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 6.9 & 20.5 & 5.9 & 11.3 & 9.1 & 23.5 & 81.0 & 58.6\tabularnewline \multirow{2}{*}{$1000$} & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.0 & 1.0 & 5.7 & 10.4 & 5.7 & 10.4 & 94.8 & 95.0\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.0 & 1.0 & 5.5 & 10.3 & 5.5 & 10.3 & 95.0 & 95.2\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.5 & 33.1 & 6.8 & 12.1 & 28.3 & 35.2 & 3.0 & 24.0\tabularnewline
& Model for $H(\psi^{*})$ $(\times)$ & $\widehat{\psi}_{p}$ & 6.9 & 20.8 & 4.1 & 8.1 & 8.1 & 22.3 & 63.4 & 30.2\tabularnewline $2000$ & \multirow{2}{*}{Model for $H(\psi^{*})$ $(\checked)$} & $\widehat{\psi}_{\mathrm{cont},1}$ & -0.2 & 0.9 & 4.2 & 7.5 & 4.2 & 7.6 & 94.2 & 95.4\tabularnewline
& & $\widehat{\psi}_{\mathrm{cont},2}$ & -0.2 & 0.8 & 4.1 & 7.4 & 4.1 & 7.4 & 94.6 & 95.6\tabularnewline
& \textendash{} & $\widehat{\psi}_{\mathrm{disc},g}$ & 27.2 & 33.4 & 4.7 & 8.1 & 27.6 & 34.4 & 0.0 & 1.6\tabularnewline \hline \end{tabular}
$\checked$ (is correctly specified), $\times$ (is misspecified) \end{table} In Setting II, we further generate the time to censoring $C$ with the hazard rate $\lambda_{C}(t\mid\overline{V}_{t})=\lambda_{C,0}(t)\exp(\eta_{1}L_{TI}+\eta_{2}L_{TD,t})$, with $\lambda_{C,0}(t)=0.2$, and $\eta_{1}=\eta_{2}=0.2$. In the presence of censoring, we consider the four estimators (a)\textendash (d) considered in Setting I with weighting; i.e., the corresponding estimating functions are now weighted by $\delta_{C}/\widehat{K}_{C}(\tau\mid\overline{V}_{\tau})$. To investigate the multiple robustness in Theorem \ref{Thm:3-mr}, we additionally consider two models for estimating $K_{C}$: the correctly specified proportional hazards model with both time-independent and time-dependent covariates; and the misspecified proportional hazards model without covariate.
Table \ref{tab:results2} shows the simulation results in Setting II. Under Scenarios (i) and (iii) when the model for the treatment process is correctly specified, $\widehat{\psi}_{p}$, $\widehat{\psi}_{\mathrm{cont},1}$ and $\widehat{\psi}_{\mathrm{cont},2}$ show small biases, regardless whether the models for $H(\psi^{*})$ and the censoring process are correctly specified or not. Moreover, under Scenarios (ii) and (iv) when the model for the treatment process is misspecified, $\widehat{\psi}_{p}$ shows large biases, but as predicted by the multiple robustness, $\widehat{\psi}_{\mathrm{cont},1}$ and $\widehat{\psi}_{\mathrm{cont},2}$ still show small biases. Again, the discretized g-estimator shows large biases across all scenarios.
\section{Estimating the effect of time to initiating HAART \label{sec:Application}}
\subsection{Acute infection and early disease research program}
We apply our method to the observational AIEDRP database consisting of $1762$ HIV-positive patients diagnosed during acute and early infection. \citet{lok2012impact} investigated how the time to initiation of HAART after HIV infection predicts the effect of one year of treatment based on this database. \citet{yang2015gof,yang2017sensitivity} developed a goodness-of-fit procedure to assess the treatment effect model and a sensitivity analysis to the departure of the NUC assumption. All these methods were based on the monthly data after discretization. However, the observations from the original data are collected by user-initiated visits and are irregularly spaced \citep{hecht2006multicenter}. Figure \ref{fig:irregular visit} shows the visit times for $5$ random patients. As can be seen, we have irregular visits, and the number and frequency of visits vary from patients to patients.
\begin{figure}
\caption{CD4 count and log viral load for $5$ random patients measured at irregularly spaced time points, which are colored by patients.}
\label{fig:irregular visit}
\end{figure}
\subsection{Objective}
We aim to estimate the averaged causal effect of the time to HAART initiation on the mean CD4 count at year 2 after HIV infection directly on the basis of the original data without discretization. We assume a continuous-time SNMM $\gamma(\overline{V}_{u};\psi^{*})=(\psi_{1}^{*}+\psi_{2}^{*}t)(\tau-t)I(t\leq\tau)$. As discussed before, $\psi_{2}^{*}$ quantifies the impact of time to treatment initiation. The rationale for this modeling choice is because the duration of treatment may well be predictive of its effect.
\subsection{Estimator and nuisance models}
We consider the proposed estimators $\widehat{\psi}_{\mathrm{cont},1}$ and $\widehat{\psi}_{\mathrm{cont},2}$ specified in Section \ref{sec:simulation}. The estimation procedure requires specifying and fitting nuisance models, which we now consider.
\textit{Model for the treatment process.} The model for the treatment process $(M_{T})$ is a time-dependent proportional hazards model adjusting for gender, age (age at infection), race (white non-Hispanic race), injdrug (injection drug ever/never), CD4$_{u}^{1/2}$ (square root of current CD4 count ), lvl$_{u}$ (log viral load), days from last visit$_{u}$ (number of days since the last visit), first visit$_{u}$ (whether the visit is the first visit), second visit$_{u}$ (whether the visit is the second visit). Table \ref{tab:AIEDRP2} (the left portion) reports the point and standard error estimates of coefficients in the treatment process model. Male and injection drug user are negatively associated with the hazard of treatment initiation, which are significant at the $0.05$ level. Moreover, higher CD4 count and viral load, more days from the last visit, and whether the visit is the first visit are associated with a decreased hazard of treatment initiation.
\textit{Model for the censoring process.} The model for the censoring process $(K_{C})$ is a time-dependent proportional hazards model adjusting for gender, age, white non-Hispanic race, injdrug, CD4$_{u}^{1/2}$, lvl$_{u}$, and Treated$_{u}$ (whether a patient had initiated HAART). Table \ref{tab:AIEDRP2} (the right portion) reports the point and standard error estimates of coefficients in the censoring model. Age is negatively associated with the hazard of censoring, while being an injection drug user is positively associated with the hazard of censoring, both are highly significant. Moreover, higher CD4 count, more days from the last visit, and whether the visit is the first visit are highly associated with a decreased hazard of censoring.
\begin{table} \caption{\label{tab:AIEDRP2}Fitted time-dependent proportional hazards models for time to treatment initiation and time to censoring}
\centering{} \begin{tabular}{ccccccccc} \hline
& \multicolumn{4}{c}{time to treatment initiation} & \multicolumn{3}{c}{time to censoring} & \tabularnewline \hline
& Est & SE & p-val & & Est & SE & p-val & \tabularnewline \hline male & -0.35 & 0.161 & 0.03 & {*} & 0.21 & 0.159 & 0.19 & \tabularnewline age & 0.01 & 0.003 & 0.08 & . & -0.02 & 0.004 & 0.00 & {*}{*}{*}\tabularnewline white non-hispanic & 0.12 & 0.066 & 0.07 & . & 0.02 & 0.077 & 0.77 & \tabularnewline injdrug & -0.50 & 0.180 & 0.01 & {*}{*} & 0.74 & 0.156 & 0.00 & {*}{*}{*}\tabularnewline CD4$_{u}^{1/2}$ & -0.06 & 0.007 & 0.00 & {*}{*}{*} & -0.03 & 0.007 & 0.00 & {*}{*}{*}\tabularnewline lvl$_{u}$ & -0.14 & 0.013 & 0.00 & {*}{*}{*} & 0.04 & 0.016 & 0.02 & {*}\tabularnewline days from last visit$_{u}$ & -0.03 & 0.002 & 0.00 & {*}{*}{*} & -0.01 & 0.001 & 0.00 & {*}{*}{*}\tabularnewline first visit$_{u}$ & -3.06 & 0.111 & 0.00 & {*}{*}{*} & -1.24 & 0.231 & 0.00 & {*}{*}{*}\tabularnewline second visit$_{u}$ & -0.04 & 0.081 & 0.61 & & 0.68 & 0.178 & 0.00 & {*}{*}{*}\tabularnewline Treated$_{u}$ & \textendash{} & \textendash{} & \textendash{} & & -0.15 & 0.102 & 0.15 & \tabularnewline \hline \end{tabular}
Signif. codes: 0 '{*}{*}{*}' 0.001 '{*}{*}' 0.01 '{*}' 0.05 '.' \end{table}
\textit{Model for the potential outcome mean function.} The outcome model $E\{H(\widehat{\psi}_{p})\mid\overline{V}_{u},T\geq u;\beta\}$ is a linear regression model where the covariates include age, male, race, injdrug, CD4$_{u}$, lvl$_{u}$, CD4$_{u}^{3/4}(\tau-u)$, CD4$_{u}^{3/4}\times(\tau-u)\times$age, CD4$_{u}^{3/4}\times(\tau-u)\times$male, CD4$_{u}^{3/4}\times(\tau-u)\times$race, CD4$_{u}^{3/4}\times(\tau-u)\times$injdrug, CD4$_{u}^{3/4}\times(\tau-u)\times$lvl$_{u}$, CD4slope$_{u}$ measured, CD4slope$_{u}\times(\tau-u)^{1/2}$ $I(u\leq6)(6-u)$, and $I(u\leq6)(36-u^{2})$. This model specification is motivation based on the substantive literature including \citet{taylor1994stochastic,taylor1998does,rodriguez2006predictive,may2009cd4}.\textbf{}
\textit{Other nuisance models}. $E(\tau-T\mid\overline{L}_{u},T\geq u)$ and $E\{T(\tau-T)\mid\overline{L}_{u},T\geq u)\}$ are linear regression models where the covariates include $u$, $(\tau-u)$, male$\times(\tau-u)$, age$\times(\tau-u)$, race$\times(\tau-u)$, injdrug$\times(\tau-u)$, CD4$_{u}^{1/2}$$\times(\tau-u)$, lvl$_{u}$$\times(\tau-u)$, days from last visit$_{u}\times(\tau-u)$, first visit$_{u}\times(\tau-u)$, second visit$_{u}\times(\tau-u)$.
The confounding variables and nuisance models are chosen on the basis of the substantive knowledge and the established literature, and therefore the NUC assumption is plausible in this application. We use bootstrap for variance estimation with the bootstrap size $100$ and compute the $95\%$ Wald confidence interval.
\subsection{Results}
Table \ref{tab:Results-2} shows the results for the effect of time to HAART initiation on the CD4 count at year 2. We note only slight differences in the point estimates between our estimators. Based on our results, on average, initiation of HAART at the time of infection ($t=0$) can increase CD4 counts at year 2 by $14.1\text{cells/mm\ensuremath{^{3}} per month}\times24\text{ months}\approx338$ cells/mm$^{3}$; while initiation of HAART $3$ months after the time of infection can increase CD4 counts at year 2 by $(14.1-1.00\times3)\times(24-3)\approx233$ cells/mm$^{3}$.
\begin{table} \caption{\label{tab:Results-2}Results of the effect of time to HAART initiation on the CD4 count at year 2}
\centering{} \begin{tabular}{lccccc} \hline Method & Est & SE & lower .95 & upper .95 & p-val \tabularnewline \hline
& \multicolumn{5}{c}{$\psi_{1}^{*}$ cells/mm$^{3}$ per month}\tabularnewline Proposed 1: $\widehat{\psi}_{\mathrm{cont},1}$ & 14.1 & 1.1 & 12.0 & 16.3 & 0.000 \tabularnewline Proposed 2: $\widehat{\psi}_{\mathrm{cont},2}$ & 14.3 & 1.1 & 12.2 & 16.6 & 0.000 \tabularnewline \hline
& \multicolumn{5}{c}{$\psi_{2}^{*}$ cells/mm$^{3}$ per month$^{2}$}\tabularnewline Proposed 1: $\widehat{\psi}_{\mathrm{cont},1}$ & -1.00 & 0.23 & -1.42 & -0.50 & 0.000\tabularnewline Proposed 2: $\widehat{\psi}_{\mathrm{cont},2}$ & -1.01 & 0.23 & -1.43 & -0.52 & 0.000\tabularnewline \hline \end{tabular} \end{table}
\section{Discussion \label{sec:Discussion}}
In this article, we have developed a new semiparametric estimation framework for continuous-time SNMMs to evaluate treatment effects with irregularly spaced longitudinal observations. Our approach does not require specifying the joint distribution of the covariate, treatment, outcome and censoring processes. Moreover, our method achieves a multiple robustness property requiring the correct specification of either the model for the potential outcome mean function or the model for the treatment process, regardless whether the censoring process model is correctly specified. This robustness property will be useful when there is little prior or substantive knowledge about the data processes. Below, we discuss several directions for future work.
\subsection{Other types of outcome}
To accommodate different types of outcome, we consider a general specification of the continuous-time SNMM as \begin{equation} \gamma_{t}(\overline{L}_{t})=g\left[\mathbb{E}\left\{ Y^{(t)}\mid\overline{L}_{t},T\geq t\right\} \right]-g\left[\mathbb{E}\left\{ Y^{(\infty)}\mid\overline{L}_{t},T\geq t\right\} \right]=\gamma_{t}(\overline{L}_{t};\psi^{*}),\label{eq:g-SNMM-1} \end{equation} where $g(\cdot)$ is a pre-specified link function. For the continuous outcome, $g(\cdot)$ can be an identity link, i.e. $g(x)=x$, as we adopt in this article. For the binary outcome, $g(\cdot)$ can be a logit link, i.e. $g(x)=\text{logit}(x)\coloneqq\log\{x/(1-x)\}$. Then, (\ref{eq:g-SNMM-1}) specifies the treatment effect on the odds ratio scale, i.e. $\text{odds}\left\{ Y^{(t)}\mid\overline{L}_{t},T\geq t\right\} /\text{odds}\left\{ Y^{(\infty)}\mid\overline{L}_{t},T\geq t\right\} $, where $\text{odds}(Y\mid X)=P(Y=1\mid X)/P(Y=0\mid X)$. In this case, $H(\psi^{*})$ can be constructed as $H(\psi^{*})=\text{expit}\left[\text{logit}\{\mathbb{E}(Y\mid\overline{L}_{t},T\geq t)\}-\gamma_{t}(\overline{L}_{t};\psi^{*})\right]$. We can develop the corresponding semiparametric efficiency theory for $\psi^{*}$ similarly. For a time to event outcome, we can consider the structural nested failure time models \citep{robins1991correcting,robins1992estimation,yang2018semiparametric}.
\subsection{Effect modification and model selection\label{subsec:Effect-modification}}
Effect modification occurs when the magnitude of the treatment effect varies as a function of observed covariates. To allow for time-varying treatment effect modifiers, assume $\gamma_{t}(\overline{V}_{t};\psi^{*})=\{\psi_{1}^{*}+\psi_{2}^{*}t+\psi_{3}^{*\mathrm{\scriptscriptstyle T}}W(t,\overline{V}_{t})\}(\tau-t)I(t\leq\tau)$, when $W(t,\overline{V}_{t})$ is a pre-specified and possibly high-dimensional function of $t$ and $\overline{V}_{t}$. It is important to identify the true treatment effect modifiers, which can facilitate development of optimal treatment strategies in personalized medicine \citep{murphy2003optimal}. We will develop a variable selection procedure for identifying effect modifiers. The insight is that we have a larger number of estimating functions than the number of parameters. The problem for effect modifiers selection falls into the recent work of \citet{chang2017new} on high-dimensional statistical inferences with over-identification.
\subsection{Sensitivity analysis to the NUC assumption}
The key assumption to identify the causal parameters in the continuous-time SNMM is the NUC assumption. However, this assumption is not verifiable based on the observed data. In future studies, it is desirable that the follow-up visits and treatment assignment be determined by study protocol. By formalizing the visit process and treatment assignment, one knows by design which covariates contribute to the treatment process to ensure the NUC assumption holds with all the relevant covariates. In the absence of study protocol, we then recommend conducting sensitivity analysis to assess the impact of possible uncontrolled confounding. For the discrete-time SNMMs, \citet{yang2017sensitivity} assumed a bias function $b(\overline{L}_{m})=\mathbb{E}\{Y^{(\infty)}\mid\overline{A}_{m-1}=\overline{0},A_{m}=1,\overline{L}_{m}\}-\mathbb{E}\{Y^{(\infty)}\mid\overline{A}_{m-1}=\overline{0},A_{m}=0,\overline{L}_{m}\}$ that quantifies the impact of unmeasured confounding and developed a modified g-estimator. For the continuous-time SNMMs, it would also be important to develop a sensitivity analysis methodology, along the lines of \citet{robins1999sensitivity} or \citet{yang2017sensitivity}, to evaluate the sensitivity of causal inference to departures from the NUC assumption.
\section*{Acknowledgment}
The author would like to thank Anastasio A. Tsiatis for insightful and fruitful discussions. Dr. Yang is partially supported by NSF DMS 1811245 and NCI P01 CA142538.
\section*{Supplementary Material}
Supplementary material online includes proofs, technical and simulation details.
{}
\appendix
\global\long\defS\arabic{equation}{S\arabic{equation}}
\setcounter{equation}{0}
\global\long\defS\arabic{lemma}{S\arabic{lemma}}
\setcounter{lemma}{0}
\global\long\defS\arabic{example}{S\arabic{example}}
\setcounter{equation}{0}
\global\long\defS\arabic{section}{S\arabic{section}}
\setcounter{section}{0}
\global\long\defS\arabic{theorem}{S\arabic{theorem}}
\setcounter{equation}{0}
\global\long\defS\arabic{condition}{S\arabic{condition}}
\setcounter{equation}{0}
\global\long\defS\arabic{remark}{S\arabic{remark}}
\setcounter{equation}{0}
\global\long\defS\arabic{step}{S\arabic{step}}
\setcounter{equation}{0}
\global\long\defS\arabic{assumption}{S\arabic{assumption}}
\setcounter{assumption}{0}
\global\long\defS\arabic{proof}{S\arabic{proof}}
\setcounter{equation}{0}
\global\long\defS{proposition}{S{proposition}}
\setcounter{equation}{0}
\textbf{\large{}Supplementary material for ``Structural nested mean models with irregularly spaced observations''}{\large\par}
\section{Proofs}
\subsection{Proof of (\ref{eq:UNC2})}
First, we express \begin{eqnarray*}
& & \lambda_{T}\{t\mid\overline{V}_{t},Y^{(\infty)}\}=\lim_{h\rightarrow0}h^{-1}P\{t\leq T<t+h\mid\overline{V}_{t},Y^{(\infty)},T\geq t\}\\
& = & \lim_{h\rightarrow0}h^{-1}\frac{f\{Y^{(\infty)}\mid\overline{V}_{t},t\leq T<t+h\}P\{t\leq T<t+h\mid\overline{V}_{t},T\geq t\}}{f\{Y^{(\infty)}\mid\overline{V}_{t},T\geq t\}}\\
& = & \lim_{h\rightarrow0}h^{-1}\frac{f\{H(\psi^{*})\mid\overline{V}_{t},t\leq T<t+h\}P\{t\leq T<t+h\mid\overline{V}_{t},T\geq t\}}{f\{H(\psi^{*})\mid\overline{V}_{t},T\geq t\}}\\
& = & \lim_{h\rightarrow0}h^{-1}P\{t\leq T<t+h\mid\overline{V}_{t},H(\psi^{*}),T\geq t\}\\
& = & \lambda_{T}\{t\mid\overline{V}_{t},H(\psi^{*})\}, \end{eqnarray*} where the second equality follows by the Bayes rule, and the third equality follows by Model (\ref{eq:cont-SNMM}) which implies that the distribution of $\{\overline{V}_{t},Y^{(\infty)}\}$ is the same as the distribution of $\{\overline{V}_{t},H(\psi^{*})\}$.
Second, by Assumption \ref{asumption:CT-UNC}, $\lambda_{T}\{t\mid\overline{V}_{t},Y^{(\infty)}\}=\lambda_{T}(t\mid\overline{V}_{t})$. Therefore, $\lambda_{T}\{t\mid\overline{V}_{t},H(\psi^{*})\}=\lambda_{T}\{t\mid\overline{V}_{t},Y^{(\infty)}\}=\lambda_{T}(t\mid\overline{V}_{t})$.
\subsection{Proof of Theorem \ref{Thm: cont-nuisance}}
First, we characterize the semiparametric likelihood function of $\psi^{*}$ based on a single variable $O=(\overline{V}_{\tau},Y)$. The semiparametric likelihood is \begin{equation} f_{O}\left(\overline{V}_{\tau},Y\right)=\left\{ \frac{\mathrm{d} H(\psi^{*})}{\mathrm{d} Y}\right\} f_{\{\overline{V}_{\tau},H(\psi^{*})\}}\{\overline{V}_{\tau},H(\psi^{*})\}=f_{\{\overline{V}_{\tau},H(\psi^{*})\}}\{\overline{V}_{\tau},H(\psi^{*})\},\label{eq:slik} \end{equation} where the first equality follows by the transformation of $O$ to $\{\overline{V}_{\tau},H(\psi^{*})\}$, and the second equality follows because $\mathrm{d} H(\psi^{*})/\mathrm{d} Y=1$. To express (\ref{eq:slik}) further, we let the observed times to treatment initiation among the $n$ subjects be $v_{0}=0<v_{1}<\cdots<v_{M}$. By Assumption \ref{asumption:CT-UNC} and (\ref{eq:UNC2}), we express \begin{eqnarray} f_{O}\left(\overline{V}_{\tau},Y;\psi^{*},\theta\right) & = & f\left\{ H(\psi^{*});\theta_{1}\right\} \prod_{k=1}^{M}f\left\{ L_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k-1}},H(\psi^{*});\theta_{2}\right\} \nonumber \\
& & \times\prod_{v=v_{1}}^{v_{M}}f\left\{ A_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k}},H(\psi^{*});\theta_{3}\right\} ,\nonumber \\
& = & f\left\{ H(\psi^{*});\theta_{1}\right\} \prod_{k=1}^{M}f\left\{ L_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k-1}},H(\psi^{*});\theta_{2}\right\} \nonumber \\
& & \times\prod_{v=v_{1}}^{v_{M}}f\left\{ A_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k}};\theta_{3}\right\} \nonumber \\
& = & f\left\{ H(\psi^{*});\theta_{1}\right\} \prod_{k=1}^{M}f\left\{ L_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k-1}},H(\psi^{*});\theta_{2}\right\} \nonumber \\
& & \times f(T,\Gamma\mid\overline{V}_{T};\theta_{3}),\label{eq:slik2} \end{eqnarray} where $\theta=(\theta_{1},\theta_{2},\theta_{3})$ is a vector of the infinite-dimensional nuisance parameters given the nonparametric models, and the third equality follows because $\prod_{k=1}^{M}f\left(A_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k}};\theta_{3}\right)$ can be equivalently expressed as the likelihood based on the data $(T,\Gamma)$ given $\overline{V}_{T}$.
Second, we characterize $\Lambda_{k}$, the nuisance tangent space for $\theta_{k},$ for $k=1,2,3$. Assuming $f\left\{ H(\psi^{*});\theta_{1}\right\} $ and $\prod_{k=1}^{M}f\left\{ L_{v_{k}}\mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k-1}},H(\psi^{*});\theta_{2}\right\} $ are nonparametric, it follows from Section 4.4 of \citet{tsiatis2007semiparametric} that the tangent space regarding $\theta_{1}$ is \[ \Lambda_{1}=\left\{ s\left\{ H(\psi^{*})\right\} \in\mathbb{\mathbb{R}}^{p}:\mathbb{E}\left[s\left\{ H(\psi^{*})\right\} \right]=0\right\} , \] and the tangent space of $\theta_{2}$ is \begin{multline*} \Lambda_{2}=\sum_{k=1}^{M}\left\{ S\left\{ \overline{V}_{v_{k}-1},L_{v_{k}},H(\psi^{*})\right\} \in\mathbb{R}^{p}:\right.\\ \left.\mathbb{E}\left[S\left\{ \overline{V}_{v_{k}-1},L_{v_{k}},H(\psi^{*})\right\} \mid\overline{A}_{v_{k-1}}=\overline{0},\overline{L}_{v_{k-1}},H(\psi^{*})\right]=0\right\} . \end{multline*} By writing \begin{multline*} f_{(T,\Gamma\mid\overline{V}_{T})}(T,\Gamma\mid\overline{V}_{T})=\lambda_{T}(T\mid\overline{V}_{T})^{\Gamma}\exp\left\{ -\int_{0}^{T}\lambda_{T}(u\mid\overline{V}_{u})\mathrm{d} u\right\} \\ \times\left\{ f_{T\mid\overline{V}_{T}}(T\mid\overline{V}_{T})\right\} ^{1-\Gamma}\left\{ \int_{T}^{\infty}f_{T\mid\overline{V}_{T}}(u\mid\overline{V}_{u})\mathrm{d} u\right\} ^{\Gamma}, \end{multline*} it follows from \citet{tsiatis2007semiparametric} that the tangent space of $\theta_{3}$ is \[ \Lambda_{3}=\left\{ \int h_{u}(\overline{V}_{u})\mathrm{d} M_{T}(u):\text{for all }h_{u}(\overline{V}_{u})\in\mathbb{\mathbb{R}}^{p}\right\} . \] Then, the nuisance tangent space becomes $\Lambda=\Lambda_{1}\oplus\Lambda_{2}\oplus\Lambda_{3}$, where $\oplus$ denotes a direct sum. This is because $\theta_{1},$ $\theta_{2},$ and $\theta_{3}$ separate out in the likelihood function and therefore $\Lambda_{1}$, $\Lambda_{2}$ and $\Lambda_{3}$ are mutually orthogonal.
Third, we characterize $\Lambda^{\bot}$ using the following technical trick. Define \[ \Lambda_{3}^{*}=\left\{ \int h_{u}\{\overline{V}_{u},H(\psi^{*})\}\mathrm{d} M_{T}(u):\ h_{u}\{\overline{V}_{u},H(\psi^{*})\}\in\mathbb{R}^{p}\right\} . \] Because the tangent space $\Lambda_{1}\oplus\Lambda_{2}\oplus\Lambda_{3}^{*}$ is that for a nonparametric model; i.e., a model that allows for all densities of $O$, and because the tangent space for a nonparametric model is the entire Hilbert space, we obtain that $\mathcal{H}=\Lambda_{1}\oplus\Lambda_{2}\oplus\Lambda_{3}^{*}.$ Because $\Lambda^{\bot}$ must be orthogonal to $\Lambda_{1}\oplus\Lambda_{2}$, $\Lambda^{\bot}$ consists of all elements of $\Lambda_{3}^{*}$ that are orthogonal to $\Lambda_{3}$. It then suffices to find the projection of all elements of $\Lambda_{3}^{*}$, $\int h_{u}\{\overline{V}_{u},H(\psi^{*})\}\mathrm{d} M_{T}(u)$, onto $\Lambda_{3}^{\bot}$. To find the projection, we derive $h_{u}^{*}(\overline{V}_{u})$ such that \[ \left[\int h_{u}\{\overline{V}_{u},H(\psi^{*})\}\mathrm{d} M_{T}(u)-\int h_{u}^{*}(\overline{V}_{u})\mathrm{d} M_{T}(u)\right]\in\Lambda_{3}^{\bot}. \] Therefore, we have \begin{equation} \mathbb{E}\left(\int\left[h_{u}\{\overline{V}_{u},H(\psi^{*})\}-h_{u}^{*}(\overline{V}_{u})\right]\mathrm{d} M_{T}(u)\times\int h_{u}(\overline{V}_{u})\mathrm{d} M_{T}(u)\right)=0,\label{eq:eq1} \end{equation} for any $h_{u}(\overline{V}_{u})$. It is important to note that by Assumption \ref{asumption:CT-UNC}, $M_{T}(t)$ is a martingale with respect to the filtration $\sigma\{\overline{V}_{t},H(\psi^{*})\}$. If $P_{1}(u)$ and $P_{2}(u)$ are locally bounded $\sigma\{\overline{V}_{t},H(\psi^{*})\}$-predictable processes, then we have the following useful result: \begin{equation} \mathbb{E}\left\{ \int_{0}^{t}P_{1}(u)\mathrm{d} M_{T}(u)\int_{0}^{t}P_{2}(u)\mathrm{d} M_{T}(u)\right\} =\int_{0}^{t}P_{1}(u)P_{2}(u)\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u.\label{eq:lemma1} \end{equation} By (\ref{eq:lemma1}), (\ref{eq:eq1}) becomes \begin{multline*} \mathbb{E}\left(\int\left[h_{u}\{\overline{V}_{u},H(\psi^{*})\}-h_{u}^{*}(\overline{V}_{u})\right]h_{u}(\overline{V}_{u})\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\right)\\ =\mathbb{E}\left(\int\mathbb{E}\left(\left[h_{u}\{\overline{V}_{u},H(\psi^{*})\}-h_{u}^{*}(\overline{V}_{u})\right]Y_{T}(u)\mid\overline{V}_{u}\right)h_{u}(\overline{V}_{u})\lambda_{T}(u\mid\overline{V}_{u})\mathrm{d} u\right)=0, \end{multline*} for any $h_{u}(\overline{V}_{u})$. Because $h_{u}(\overline{V}_{u})$ is arbitrary, we obtain \begin{equation} \mathbb{E}\left(\left[h_{u}\{\overline{V}_{u},H(\psi^{*})\}-h_{u}^{*}(\overline{V}_{u})\right]Y_{T}(u)\mid\overline{V}_{u}\right)=0.\label{eq:eq2} \end{equation} Solving (\ref{eq:eq2}) for $h_{u}^{*}(\overline{V}_{u})$, we obtain \[ h_{u}^{*}(\overline{V}_{u})=\mathbb{E}\left[h_{u}\{\overline{V}_{u},H(\psi^{*})\}\mid\overline{V}_{u},T\geq u\right]. \] This completes the proof.
\subsection{Proof of Theorem \ref{Thm:projection}}
For any $B=B(F)$, let \begin{multline*} G=G(F)=\int_{0}^{\tau}\left[\mathbb{E}\left\{ B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\right\} -\mathbb{E}\left\{ B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \right]\\ \times\left[\mathrm{var}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \right]^{-1}\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \right]\mathrm{d} M_{T}(u). \end{multline*} To show $\prod\left(B\mid\Lambda^{\bot}\right)=G$, it is easy to see that $G\in\Lambda^{\bot},$ so the remaining is to show that $B-G\in\Lambda$. Toward this end, we show that for any $\tilde{G}=\tilde{G}(F)=\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]Y_{T}(u)\mathrm{d} M_{T}(u)\in\Lambda^{\bot}$, $(B-G)\indep\tilde{G}$ or $\mathbb{E}\{(B-G)\tilde{G}\}=0$. We now verify $\mathbb{E}(B\tilde{G})=\mathbb{E}(G\tilde{G})$ by the following calculation.
First, by (\ref{eq:lemma1}), we calculate \begin{eqnarray} \mathbb{E}\left(G\tilde{G}\right) & = & \mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})[\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}-\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\nonumber \\
& & \times[\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]^{-1}[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]{}^{2}\nonumber \\
& & \times\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\nonumber \\
& = & \mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})[\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}-\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\nonumber \\
& & \times\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u.\label{eq:right} \end{eqnarray} Second, we calculate \begin{eqnarray} \mathbb{E}\left(B\tilde{G}\right) & = & \mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})B[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{L}_{u},T\geq u\}]\mathrm{d} M_{T}(u)\nonumber \\
& = & \mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})B\dot{H}_{u}(\psi^{*})\mathrm{d} N_{T}(u)\nonumber \\
& & -\mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})B\dot{H}_{u}(\psi^{*})\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\nonumber \\
& = & \mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})[\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}-\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\nonumber \\
& & \times\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u,\label{eq:left} \end{eqnarray} where the last equality follows because \begin{multline*} \mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})B\dot{H}_{u}(\psi^{*})\mathrm{d} N_{T}(u)=\mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\}\mathrm{d} N_{T}(u)\\ =\mathbb{E}\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u, \end{multline*} and \begin{eqnarray*}
& & \mathbb{E}\left\{ \int_{0}^{\tau}\tilde{c}(\overline{V}_{u})B\dot{H}_{u}(\psi^{*})\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\right\} \\
& = & \mathbb{E}\left[\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},Y_{T}(u)\}\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\right]\\
& = & \mathbb{E}\left[\int_{0}^{\tau}\tilde{c}(\overline{V}_{u})\mathbb{E}\{B\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\right]. \end{eqnarray*} Therefore, by (\ref{eq:right}) and (\ref{eq:left}), $\mathbb{E}(B\tilde{G})=\mathbb{E}(G\tilde{G})$ for any $\tilde{G}\in\Lambda^{\bot}$, proving (\ref{eq:projection}).
\subsection{Proof of Theorem \ref{Thm: efficient score}}
The semiparametric efficient score is $S_{\mathrm{eff}}^{*}(\psi^{*})=\prod\left(S_{\psi}\mid\Lambda^{\bot}\right)$. By Theorem \ref{Thm:projection}, we have \begin{eqnarray*} S_{\mathrm{eff}}^{*}(\psi^{*}) & = & \int_{0}^{\tau}[\mathbb{E}\{S_{\psi}\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}-\mathbb{E}\{S_{\psi}\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\\
& & \times[\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]{}^{-1}[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\mathrm{d} M_{T}(u)\\
& = & -\int_{0}^{\tau}[\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T=u\}-\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T\geq u\}]\\
& & \times[\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]{}^{-1}[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\mathrm{d} M_{T}(u)\\
& = & -\int_{0}^{\tau}\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T=u\}[\mathrm{var}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]^{-1}\\
& & \times H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\mathrm{d} M_{T}(u), \end{eqnarray*} where the last equality follows by using the generalized information equality: because $\dot{H}_{u}(\psi^{*})=H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$, we have $\mathbb{E}\{\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}=0$. Take the derivative of $\psi$ at both sides, we have $\mathbb{E}\{S_{\psi}\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}+\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T\geq u\}=0$, or equivalently $\mathbb{E}\{S_{\psi}\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T\geq u\}=-\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T\geq u\}$. Similarly, noticing $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}=\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T=u\}$, we have $\mathbb{E}\{\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}=0$. Take the derivative of $\psi$ at both sides, we have $\mathbb{E}\{S_{\psi}\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}+\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T=u\}=0$, or equivalently $\mathbb{E}\{S_{\psi}\dot{H}_{u}(\psi^{*})\mid\overline{V}_{u},T=u\}=-\mathbb{E}\{\partial\dot{H}_{u}(\psi^{*})/\partial\psi\mid\overline{V}_{u},T=u\}$. Ignoring the negative sign, the result in Theorem \ref{Thm: efficient score} follow.
\subsection{Proof of Theorem \ref{Thm:2-dr} \label{subsec:Proof-of-dr}}
We show that $\mathbb{E}\{G(\psi^{*};F,c)\}=0$ in two cases.
First, if $\lambda_{T}(t\mid\overline{V}_{t})$ is correctly specified, under Assumption \ref{asumption:CT-UNC}, $M_{T}(t)$ is a martingale with respect to the filtration $\sigma\{\overline{V}_{t},H(\psi^{*})\}$. Because $c(\overline{V}_{u})[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]$ is a $\sigma\{\overline{V}_{t},H(\psi^{*})\}$-predictable process, $\int_{0}^{t}c(\overline{V}_{u})[H(\psi^{*})-\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}]\mathrm{d} M_{T}(u)$ is a martingale for $t\geq0$. Therefore, $\mathbb{E}\{G(\psi^{*};F,c)\}=0$.
Second, if $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} $ is correctly specified but $\lambda_{T}(t\mid\overline{V}_{t})$ is not necessarily correctly specified, let $\lambda_{T}^{*}(t\mid\overline{V}_{t})$ be the probability limit of the possibly misspecified model. We obtain \begin{eqnarray}
& & \mathbb{E}\int c(\overline{V}_{u})\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\left\{ \mathrm{d} N_{T}(u)-\lambda_{T}^{*}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\right\} \nonumber \\
& = & \mathbb{E}\int c(\overline{V}_{u})\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\left\{ \mathrm{d} N_{T}(u)-\lambda_{T}(u\mid\overline{V}_{u})Y_{T}(u)\mathrm{d} u\right\} \nonumber \\
& & +\mathbb{E}\int c(\overline{V}_{u})\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\left\{ \lambda_{T}(u\mid\overline{V}_{u})-\lambda_{T}^{*}(u\mid\overline{V}_{u})\right\} Y_{T}(u)\mathrm{d} u\nonumber \\
& = & 0+\mathbb{E}\int c(\overline{V}_{u})E\left(\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mid\overline{V}_{u},T\geq u\right)\label{eq:eq3}\\
& & \times\left\{ \lambda_{T}(u\mid\overline{V}_{u})-\lambda_{T}^{*}(u\mid\overline{V}_{u})\right\} Y_{T}(u)\mathrm{d} u\nonumber \\
& = & 0+\mathbb{E}\int c(\overline{V}_{u})\times0\times\left\{ \lambda_{T}(u\mid\overline{V}_{u})-\lambda_{T}^{*}(u\mid\overline{V}_{u})\right\} Y_{T}(u)\mathrm{d} u=0,\label{eq:eq4} \end{eqnarray} where zero in (\ref{eq:eq3}) follows because $\mathrm{d} M_{T}(u)=\mathrm{d} N_{T}(u)-\lambda_{T}(u\mid\overline{V}_{u})\mathrm{d} u$ is a martingale with respect to the filtration $\sigma\{\overline{V}_{t},H(\psi^{*})\}$, and zero in (\ref{eq:eq4}) follows because $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} $ is correctly specified and therefore, $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} =\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} $.
\subsection{Proof of Theorem \ref{Thm:3-mr}}
We show that $\mathbb{E}\left\{ \delta_{C}G(\psi^{*},\beta^{*},M_{T}^{*};F)/K_{C}^{*}(\tau\mid\overline{V}_{\tau})\right\} =0$ in three cases.
First, under Scenarios (a), (b), and (c) listed in Table \ref{tab:Multiply-Robustness} when $K_{C}^{*}$ is correctly specified for $K_{C}$, either $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\}$ is correctly specified or $M_{T}^{*}$ is correctly specified for $M_{T}$, we show that (\ref{eq:IPCW ee}) is an unbiased estimating equation. Under these scenarios, we have $K_{C}^{*}(\tau\mid\overline{V}_{\tau})=K_{C}(\tau\mid\overline{V}_{\tau})$. It suffices to show that \begin{eqnarray*} \mathbb{E}\left\{ \frac{\delta_{C}}{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}G(\psi^{*},\beta^{*},M_{T}^{*};F)\right\} & = & \mathbb{E}\left[\mathbb{E}\left\{ \frac{\delta_{C}}{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}G(\psi^{*},\beta^{*},M_{T}^{*};F)\mid F\right\} \right]\\
& = & \mathbb{E}\left\{ \frac{\mathbb{E}(\delta_{C}\mid F)}{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}G(\psi^{*},\beta^{*},M_{T}^{*};F)\right\} \\
& = & \mathbb{E}\left[\frac{\mathbb{E}(\delta_{C}\mid\overline{V}_{\tau})}{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}\mathbb{E}\{G(\psi^{*},\beta^{*},M_{T}^{*};F)\mid\overline{V}_{\tau}\}\right], \end{eqnarray*} where the third equality follows by Assumption \ref{asp:NUC-1}, and the last equality follows by Theorem \ref{Thm:2-dr}.
Second, under Scenarios (b) and (d) listed in Table \ref{tab:Multiply-Robustness} when $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\}$ is correctly specified, we have $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\}=\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$. Also, under Assumption \ref{asumption:CT-UNC}, $\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} =\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u}\right\} $. Then, we have \begin{eqnarray*}
& & \mathbb{E}\{G(\psi^{*},\beta^{*},M_{T}^{*};F)\mid\overline{V}_{\tau}\}\\
& = & \mathbb{E}\left\{ \int c(\overline{V}_{u})\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}^{*}(u)\mid\overline{V}_{\tau}\right\} \\
& = & \int c(\overline{V}_{u})\mathbb{E}\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \mid\overline{V}_{\tau}\right]\mathbb{E}\left\{ \mathrm{d} M_{T}^{*}(u)\mid\overline{V}_{\tau}\right\} \\
& = & \int c(\overline{V}_{u})\mathbb{E}\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u}\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \mid\overline{V}_{\tau}\right]\mathbb{E}\left\{ \mathrm{d} M_{T}^{*}(u)\mid\overline{V}_{\tau}\right\} \\
& = & \int c(\overline{V}_{u})\mathbb{E}\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} \mid\overline{V}_{\tau}\right]\mathbb{E}\left\{ \mathrm{d} M_{T}^{*}(u)\mid\overline{V}_{\tau}\right\} =0. \end{eqnarray*} It follows that \begin{eqnarray*} \mathbb{E}\left\{ \frac{\delta_{C}}{K_{C}^{*}\left(\tau\mid\overline{V}_{\tau}\right)}G(\psi^{*},\beta^{*},M_{T}^{*};F)\right\} & = & \mathbb{E}\left[\mathbb{E}\left\{ \frac{\delta_{C}}{K_{C}^{*}\left(\tau\mid\overline{V}_{\tau}\right)}G(\psi^{*},\beta^{*},M_{T}^{*};F)\mid F\right\} \right]\\
& = & \mathbb{E}\left[\frac{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}{K_{C}^{*}\left(\tau\mid\overline{V}_{\tau}\right)}\mathbb{E}\{G(\psi^{*},\beta^{*},M_{T}^{*};F)\mid\overline{V}_{\tau}\}\right]\\
& = & \mathbb{E}\left\{ \frac{K_{C}\left(\tau\mid\overline{V}_{\tau}\right)}{K_{C}^{*}\left(\tau\mid\overline{V}_{\tau}\right)}\times0\right\} =0. \end{eqnarray*}
Third, under Scenario (e) listed in Table \ref{tab:Multiply-Robustness} when $M_{T}^{*}$ is correctly specified for $M_{T}$, we have \begin{equation} \mathbb{E}\left\{ \mathrm{d} M_{T}(u)\mid\overline{V}_{u}\right\} =0,\ (u>0).\label{eq:dM} \end{equation} Define $\kappa(\overline{V}_{u})=E\left\{ K_{C}\left(\tau\mid\overline{V}_{\tau}\right)/K_{C}^{*}\left(\tau\mid\overline{V}_{\tau}\right)\mid\overline{V}_{u}\right\} $ for all $u>0$. We show $\mathbb{E}\left\{ \delta_{C}G(\psi^{*},\beta^{*},M_{T};F)/K_{C}^{*}(\tau\mid\overline{V}_{\tau})\right\} =0$ by induction. Let $\Delta>0$ be a small increment. We start with \begin{eqnarray}
& & \mathbb{E}\left\{ \frac{\delta_{C}}{K_{C}^{*}\left(\tau\mid\overline{V}_{\tau}\right)}G(\psi^{*},\beta^{*},M_{T};F)\right\} \nonumber \\
& = & \mathbb{E}\left[\kappa(\overline{V}_{\tau})\int_{0}^{\tau}c(\overline{V}_{u})\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}(u)\right]\label{eq:(tau)}\\
& = & \mathbb{E}\left[\kappa(\overline{V}_{\tau})\mathbb{E}\left\{ \left(\int_{0}^{\tau-\Delta}+\int_{\tau-\Delta}^{\tau}\right)c(\overline{V}_{u})\left[H(\psi^{*})-\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}(u)\mid\overline{V}_{\tau}\right\} \right]\nonumber \\
& = & \mathbb{E}\left[\mathbb{E}\left\{ \kappa(\overline{V}_{\tau})\mid\overline{V}_{\tau-\Delta}\right\} \int_{0}^{\tau-\Delta}c(\overline{V}_{u})\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}(u)\right]\nonumber \\
& & +\mathbb{E}\left\{ \kappa(\overline{V}_{\tau})\int_{\tau-\Delta}^{\tau}c(\overline{V}_{u})\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}(u)\right\} \nonumber \\
& = & \mathbb{E}\left\{ \kappa(\overline{V}_{\tau-\Delta})\int_{0}^{\tau-\Delta}c(\overline{V}_{u})\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}(u)\right\} \nonumber \\
& & +\mathbb{E}\left\{ \kappa(\overline{V}_{\tau})c(\overline{V}_{\tau})\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{\tau},T\geq\tau\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{\tau},T\geq\tau;\beta^{*}\right\} \right]\mathbb{E}\left\{ \mathrm{d} M_{T}(\tau)\mid\overline{V}_{\tau}\right\} \right\} \nonumber \\
& = & \mathbb{E}\left\{ \left[\kappa(\overline{V}_{\tau-\Delta})\int_{0}^{\tau-\Delta}c(\overline{V}_{u})\left[\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u\right\} -\mathbb{E}\left\{ H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\right\} \right]\mathrm{d} M_{T}(u)\right]\right\} +0,\label{eq:(tau-)} \end{eqnarray} where $0$ in the last equality follows by (\ref{eq:dM}). Note that (\ref{eq:(tau-)}) is (\ref{eq:(tau)}) replacing $\tau$ by $\tau-\Delta$. We then repeat the same calculation for (\ref{eq:(tau-)}) until $\tau-\Delta$ reaches zero. The last step is to recognize that (\ref{eq:(tau-)}) with $\tau-\Delta=0$ is zero. This completes the proof.
\subsection{Proof of Theorem \ref{thm:4}}
We assume the multiple robustness condition holds; i.e. either the potential outcome mean model or the model for the treatment process is correctly specified, regardless the model for the censoring process is correctly specified.
Taylor expansion of $\mathbb{P}_{n}\left\{ \Phi(\widehat{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\right\} =0$ around $\psi^{*}$ leads to \begin{eqnarray*} 0 & = & \mathbb{P}_{n}\left\{ \Phi(\widehat{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\right\} \\
& = & \mathbb{P}_{n}\left\{ \Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\right\} +\mathbb{P}_{n}\left\{ \frac{\partial\Phi(\widetilde{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)}{\partial\psi^{\mathrm{\scriptscriptstyle T}}}\right\} (\widehat{\psi}-\psi^{*}), \end{eqnarray*} where $\widetilde{\psi}$ is on the line segment between $\widehat{\psi}$ and $\psi^{*}$.
Under Assumption \ref{asump:donsker} (i) and (ii), \[ (\mathbb{P}_{n}-\mathbb{P})\left\{ \frac{\partial\Phi(\widetilde{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)}{\partial\psi^{\mathrm{\scriptscriptstyle T}}}\right\} =(\mathbb{P}_{n}-\mathbb{P})\left\{ \frac{\partial\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)}{\partial\psi^{\mathrm{\scriptscriptstyle T}}}\right\} =o_{p}(n^{-1/2}), \] and therefore, \begin{eqnarray*} \mathbb{P}_{n}\left\{ \frac{\partial\Phi(\widetilde{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)}{\partial\psi^{\mathrm{\scriptscriptstyle T}}}\right\} & = & \mathbb{P}\left\{ \frac{\partial\Phi(\widetilde{\psi},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)}{\partial\psi^{\mathrm{\scriptscriptstyle T}}}\right\} +o_{p}(n^{-1/2})\\
& = & A(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*})+o_{p}(n^{-1/2}). \end{eqnarray*} We then have \begin{equation} n^{1/2}(\widehat{\psi}-\psi^{*})=\left\{ A(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*})\right\} ^{-1}n^{1/2}\mathbb{P}_{n}\left\{ \Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\right\} +o_{p}(1).\label{eq:1} \end{equation} Based on the multiple robustness, we have \begin{equation} \mathbb{P}\{\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\}=0.\label{eq:2-1} \end{equation} To express (\ref{eq:1}) further, based on (\ref{eq:2-1}), we have \begin{multline} \mathbb{P}_{n}\Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)=(\mathbb{P}_{n}-\mathbb{P})\Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)\\ +\mathbb{P}\left\{ \Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)-\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\right\} +\mathbb{P}\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F).\label{eq:2} \end{multline} By Assumption \ref{asump:donsker} (i) and (ii), the first term in (\ref{eq:eq2}) becomes \begin{eqnarray} (\mathbb{P}_{n}-\mathbb{P})\Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F) & = & (\mathbb{P}_{n}-\mathbb{P})\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+o_{p}(n^{-1/2})\nonumber \\
& = & \mathbb{P}_{n}\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+o_{p}(n^{-1/2}).\label{eq:2-2} \end{eqnarray} By Assumption \ref{asump:donsker} (iv), the second term in (\ref{eq:eq2}) becomes \begin{eqnarray}
& & \mathbb{P}\left\{ \Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)-\Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\right\} \nonumber \\
& = & J(\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C})-J(\beta^{*},M_{T}^{*},K_{C}^{*})+o_{p}(n^{-1/2})\nonumber \\
& = & J_{1}(\widehat{\beta})-J_{1}(\beta^{*})+J_{2}(\widehat{M}_{T})-J_{2}(M_{T}^{*})+J_{3}(\widehat{K}_{C})-J_{3}(M_{C}^{*})+o_{p}(n^{-1/2})\nonumber \\
& = & \mathbb{P}_{n}\Phi_{1}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+\mathbb{P}_{n}\Phi_{2}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\nonumber \\
& & +\mathbb{P}_{n}\Phi_{3}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F).\label{eq:2-3} \end{eqnarray} Combining (\ref{eq:2-1})\textendash (\ref{eq:2-3}), \[ \mathbb{P}_{n}\Phi(\psi^{*},\widehat{\beta},\widehat{M}_{T},\widehat{K}_{C};F)=\mathbb{P}_{n}\{\widetilde{B}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\}, \] where \begin{eqnarray*} \widetilde{B}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F) & = & \Phi(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+\Phi_{1}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)\\
& & +\Phi_{2}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)+\Phi_{3}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F). \end{eqnarray*} As a result, \begin{equation} n^{1/2}(\widehat{\psi}-\psi^{*})=n^{1/2}\mathbb{P}_{n}\widetilde{\Phi}(\psi^{*},\beta^{*},K_{V}^{*},K_{C}^{*};F)+o_{p}(1),\label{eq:(2.3)} \end{equation} where \[ \widetilde{\Phi}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F)=\left\{ A(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*})\right\} ^{-1}\widetilde{B}(\psi^{*},\beta^{*},M_{T}^{*},K_{C}^{*};F). \]
We now consider the case when all nuisance models are correctly specified, i.e., $\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u;\beta^{*}\}=\mathbb{E}\{H(\psi^{*})\mid\overline{V}_{u},T\geq u\}$, $M_{T}^{*}=M_{T},$ and $K_{C}^{*}=K_{C}$.
Define the score functions: $S_{\beta}=S_{\beta}\{H(\psi^{*}),\overline{V}_{u},T\geq u\}$. Then, the tangent space for $\beta$ is $\widetilde{\Lambda}_{1}=\{S_{\beta}\in\mathbb{R}^{p}:\mathbb{E}(S_{\beta}\mid\overline{V}_{u},T\geq u)=0\}$. Following \citet{tsiatis2007semiparametric}, the nuisance tangent space for the proportional hazards model (\ref{eq:ph-V}) is \[ \widetilde{\Lambda}_{2}=\left\{ S_{\alpha}+\int h(u)\mathrm{d} M_{T}(u):\ h(u)\in\mathbb{\mathbb{R}}^{p}\right\} , \] where \begin{equation} S_{\alpha}\coloneqq\int\left\{ W_{T}(u,\overline{V}_{u})-\frac{\mathbb{E}\left[W_{T}(u,\overline{V}_{u})\exp\left\{ \gamma^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} Y_{T}(u)\right]}{\mathbb{E}\left[\exp\left\{ \alpha^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} Y_{T}(u)\right]}\right\} \mathrm{d} M_{T}(u).\label{eq:S_alpha} \end{equation} The nuisance tangent space for the proportional hazards model (\ref{eq:ph-C}) is \[ \widetilde{\Lambda}_{3}=\left\{ S_{\eta}+\int h(u)\mathrm{d} M_{C}(u):\ h(u)\in\mathbb{\mathbb{R}}^{p}\right\} , \] where \begin{equation} S_{\eta}\coloneqq\int\left\{ W_{C}(u,\overline{V}_{u})-\frac{\mathbb{E}\left[W_{C}(u,\overline{V}_{u})\exp\left\{ \eta^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} Y_{C}(u)\right]}{\mathbb{E}\left[\exp\left\{ \eta^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} Y_{C}(u)\right]}\right\} \mathrm{d} M_{C}(u).\label{eq:S_eta} \end{equation} Assuming that the treatment process and the censoring process can not jump at the same time point, $\widetilde{\Lambda}_{1}$, $\widetilde{\Lambda}_{2}$, and $\widetilde{\Lambda}_{3}$ are mutually orthogonal to each other. Therefore, the nuisance tangent space for $\beta$ and the proportional hazards models (\ref{eq:ph-V}) and (\ref{eq:ph-C}) is $\widetilde{\Lambda}=\widetilde{\Lambda}_{1}\oplus\widetilde{\Lambda}_{2}\oplus\widetilde{\Lambda}_{3}$. The influence function for $\widehat{\psi}$ is \begin{eqnarray}
& & \widetilde{B}(\psi^{*},\beta^{*},M_{T},K_{C};F)\nonumber \\
& = & \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)-\prod\left\{ \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)\mid\widetilde{\Lambda}\right\} \nonumber \\
& = & \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)-E\left\{ \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)S_{\alpha}^{\mathrm{\scriptscriptstyle T}}\right\} \mathbb{E}\left(S_{\alpha}S_{\alpha}^{\mathrm{\scriptscriptstyle T}}\right)^{-1}S_{\alpha}\nonumber \\
& & -\mathbb{E}\left\{ \Phi(\psi^{*},\beta^{*},M_{T},K_{C};F)S_{\eta}^{\mathrm{\scriptscriptstyle T}}\right\} \mathbb{E}\left(S_{\eta}S_{\eta}^{\mathrm{\scriptscriptstyle T}}\right)^{-1}S_{\eta}\nonumber \\
& & +\int\frac{\mathbb{E}\left[G(\psi^{*},\beta^{*},M_{T};F)\exp\left\{ \alpha^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} \delta_{C}/K_{C}(\tau\mid\overline{V}_{\tau})\right]}{\mathbb{E}\left[\exp\left\{ \alpha^{\mathrm{\scriptscriptstyle T}}W_{T}(u,\overline{V}_{u})\right\} Y_{T}(u)\right]}\mathrm{d} M_{C}(u)\nonumber \\
& & +\int\frac{\mathbb{E}\left[G(\psi^{*},\beta^{*},M_{T};F)\exp\left\{ \eta^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} \delta_{C}/K_{C}(\tau\mid\overline{V}_{\tau})\right]}{\mathbb{E}\left[\exp\left\{ \eta^{\mathrm{\scriptscriptstyle T}}W_{C}(u,\overline{V}_{u})\right\} Y_{C}(u)\right]}\mathrm{d} M_{T}(u).\label{eq:tilde-J} \end{eqnarray}
\section{Details for the simulation study}
First, Algorithm \ref{tab:Algorithm-for-generating} specifies the steps for generating $T$ according to a time-dependent proportional hazards model. \begin{algorithm} \caption{\label{tab:Algorithm-for-generating}Algorithm 1 for generating $T$ according to a time-dependent proportional hazards model}
\begin{description} \item [{Step$\ 1.$}] Set $k=1$. \item [{Step$\ 2.$}] Generate a temporary time to treatment initiation, $T_{\mathrm{temp},k}$, compatible with the hazard function for the time interval $[t_{k},t_{k+1})$, using the method of \citet{bender2005generating}; i.e., generate $u\sim$Uniform$[0,1]$ and let $T_{\mathrm{temp},k}=$$-\log(1-u)/\{\lambda_{T,0}\exp(\alpha_{1}^{*}L_{TI}+\alpha_{2}^{*}L_{TD,t_{k}})\}$. \begin{description} \item [{If}] $T_{\mathrm{temp},k}$ is contained within the first time interval $[0,t_{k+1}-t_{k})$, then set $T=T_{\mathrm{temp},k}+t_{k};$ \item [{else$\ $if}] $T_{\mathrm{temp},k}$ is not contained within the interval $[0,t_{k+1}-t_{k})$, increase $k$ by $1$ and move to the beginning of Step 2. \end{description} \end{description} \end{algorithm}
Second, we describe the nuisance models and their estimation. For $c(\overline{V}_{u})$, we approximate $\mathbb{E}\{(1,T)^{\mathrm{\scriptscriptstyle T}}(\tau-T)I(T\leq\tau)\mid\overline{V}_{u},T\geq u\}$ by $\widehat{{P}}(T\leq\tau\mid\overline{V}_{u},T\geq u)\times\widehat{\mathbb{E}}\{(1,T)^{\mathrm{\scriptscriptstyle T}}(\tau-T)\mid\overline{V}_{u},u\leq T\leq\tau\}$. We describe the details for fitting below: \begin{description} \item [{(a)}] $\widehat{{P}}(T\leq\tau\mid\overline{V}_{u},T\geq u)$ is the predicted value from a logistic regression model of $I(T\leq\tau)$ against $u$, $L_{TI}$, $L_{TD,u}$, and all interactions of these terms, restricted to subjects with $T\geq u$. \item [{(b)}] $\widehat{\mathbb{E}}(\tau-T\mid\overline{V}_{u},u\leq T\leq\tau)$ is the predicted value from a linear regression model of $\tau-T$ against $u$, $L_{TI}$, $L_{TD,u}$, and all interactions of these terms, restricted to subjects with $u\leq T\leq\tau$. \item [{(c)$\widehat{\mathbb{E}}\{T(\tau-T)\mid\overline{V}_{u},u\leq T\leq\tau\}$}] is the predicted value from a linear regression model of $T(\tau-T)$ against $u$, $L_{TI}$, $L_{TD,u}$, and all interactions of these terms, restricted to subjects with $u\leq T\leq\tau$. \item [{(d)}] $\mathbb{E}\{H(\widehat{\psi}_{p})\mid\overline{V}_{u},T\geq u;\widehat{\beta}\}$ by a linear regression model of $H(\widehat{\psi}_{p})$ against $u$, $L_{TI}$, $L_{TD,u}$, and all interactions of these terms, restricted to subjects with $T\geq u$. \end{description}
\end{document} | arXiv |
Marco Banterle 1,, , Clara Grazian 2, , Anthony Lee 3, and Christian P. Robert 4,
Department of Medical Statistics, London School of Hygiene and Tropical Medicine, Keppel St, Bloomsbury, London WC1E 7HT, UK
Dipartimento di Economia, Università degli Studi "Gabriele D'Annunzio", Viale Pindaro, 42, 65127 Pescara, Italy
School of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, UK
Department of Statistics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK
* Corresponding author: Christian Robert
MCMC algorithms such as Metropolis--Hastings algorithms are slowed down by the computation of complex target distributions as exemplified by huge datasets. We offer a useful generalisation of the Delayed Acceptance approach, devised to reduce such computational costs by a simple and universal divide-and-conquer strategy. The generic acceleration stems from breaking the acceptance step into several parts, aiming at a major gain in computing time that out-ranks a corresponding reduction in acceptance probability. Each component is sequentially compared with a uniform variate, the first rejection terminating this iteration. We develop theoretical bounds for the variance of associated estimators against the standard Metropolis--Hastings and produce results on optimal scaling and general optimisation of the procedure.
Keywords: Large Scale Learning and Big Data, MCMC algorithms, likelihood function, acceptance probability, mixtures of distributions, Jeffreys prior.
Mathematics Subject Classification: Primary: 68U20, 65C40; Secondary: 62C10.
Citation: Marco Banterle, Clara Grazian, Anthony Lee, Christian P. Robert. Accelerating Metropolis-Hastings algorithms by Delayed Acceptance. Foundations of Data Science, 2019, 1 (2) : 103-128. doi: 10.3934/fods.2019005
C. Andrieu, A. Lee and M. Vihola, Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers, Bernoulli, 24 (2018), 842-872. doi: 10.3150/15-BEJ785. Google Scholar
E. Angelino, E. Kohler, A. Waterland, M. Seltzer and R. Adams, Accelerating MCMC via parallel predictive prefetching, UAI'14 Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, (2014), 22-31.Google Scholar
R. Bardenet, A. Doucet and C. Holmes, On Markov chain Monte Carlo methods for tall data, The Journal of Machine Learning Research, 18 (2017), Paper No. 47, 43 pp. Google Scholar
A. Brockwell, Parallel Markov chain Monte Carlo simulation by pre-fetching, J. Comput. Graphical Stat., 15 (2006), 246-261. doi: 10.1198/106186006X100579. Google Scholar
J. Christen and C. Fox, Markov chain Monte Carlo using an approximation, Journal of Computational and Graphical Statistics, 14 (2005), 795-810. doi: 10.1198/106186005X76983. Google Scholar
O. F. Christensen, G. O. Roberts and J. S. Rosenthal, Scaling limits for the transient phase of local Metropolis-Hastings algorithms, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67 (2005), 253-268. doi: 10.1111/j.1467-9868.2005.00500.x. Google Scholar
R. Cornish, P. Vanetti, A. Bouchard-Côté, G. Deligiannidis and A. Doucet, Scalable Metropolis-Hastings for exact Bayesian inference with large datasets, arXiv preprint, arXiv: 1901.09881.Google Scholar
L. Devroye, Nonuniform Random Variate Generation, Springer-Verlag, New York, 1986. doi: 10.1007/978-1-4613-8643-8. Google Scholar
J. Diebolt and C. P. Robert, Estimation of finite mixture distributions by Bayesian sampling, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 56 (1994), 363-375. doi: 10.1111/j.2517-6161.1994.tb01985.x. Google Scholar
C. Fox and G. Nicholls, Sampling conductivity images via MCMC, The Art and Science of Bayesian Image Analysis, (1997), 91-100.Google Scholar
A. Gelfand and S. Sahu, On Markov chain Monte Carlo acceleration, J. Comput. Graph. Statist., 3 (1994), 261-276. doi: 10.2307/1390911. Google Scholar
M. Girolami and B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73 (2011), 123-214. doi: 10.1111/j.1467-9868.2010.00765.x. Google Scholar
A. Golightly, D. Henderson and C. Sherlock, Delayed acceptance particle MCMC for exact inference in stochastic kinetic models, Statistics and Computing, 25 (2015), 1039-1055. doi: 10.1007/s11222-014-9469-x. Google Scholar
[14] H. Jeffreys, Theory of Probability, 1st ed. The Clarendon Press, Oxford, 1939.
A. Korattikara, Y Chen and M. Welling, Austerity in MCMC land: Cutting the Metropolis-Hastings budget, In ICML 2014, International Conference on Machine Learning, (2014), 181-189.Google Scholar
G. MacLachlan and D. Peel, Finite Mixture Models, John Wiley, New York, 2000. doi: 10.1002/0471721182. Google Scholar
K.L. Mengersen and R. Tweedie, Rates of convergence of the Hastings and Metropolis algorithms, Ann. Statist., 24 (1996), 101-121. doi: 10.1214/aos/1033066201. Google Scholar
P. Neal and G. O. Roberts, Optimal scaling of random walk Metropolis algorithms with non-Gaussian proposals, Methodology and Computing in Applied Probability, 13 (2011), 583-601. doi: 10.1007/s11009-010-9176-9. Google Scholar
R. Neal, Markov chain Monte Carlo methods based on 'slicing' the density function, Tech. rep., University of Toronto, 1997.Google Scholar
W. Neiswanger, C. Wang and E. Xing, Asymptotically exact, embarrassingly parallel MCMC, arXiv preprint, 2013, arXiv: 1311.4780.Google Scholar
P. Peskun, Optimum Monte Carlo sampling using Markov chains, Biometrika, 60 (1973), 607-612. doi: 10.1093/biomet/60.3.607. Google Scholar
M. Plummer, N. Best, K. Cowles and K. Vines, CODA: Convergence diagnosis and output analysis for MCMC, R News, 6 (2006), 7-11. Google Scholar
C. P. Robert, The Bayesian Choice, 2nd ed. Springer-Verlag, New York, 2001. Google Scholar
C. P. Robert and G. Casella, Monte Carlo Statistical Methods, 2nd ed. Springer-Verlag, New York, 2004. doi: 10.1007/978-1-4757-4145-2. Google Scholar
C. P. Robert and D. M. Titterington, Reparameterisation strategies for hidden Markov models and Bayesian approaches to maximum likelihood estimation, Statistics and Computing, 8 (1998), 145-158.Google Scholar
G. O. Roberts, A. Gelman and W. R. Gilks, Weak convergence and optimal scaling of random walk Metropolis algorithms, Ann. Appl. Probab., 7 (1997), 110-120. doi: 10.1214/aoap/1034625254. Google Scholar
G. O. Roberts and J. S. Rosenthal, Optimal scaling for various Metropolis-Hastings algorithms, Statist. Science, 16 (2001), 351-367. doi: 10.1214/ss/1015346320. Google Scholar
G. O. Roberts and J. S. Rosenthal, Coupling and ergodicity of adaptive MCMC, J. Applied Proba., 44 (2005), 458-475. doi: 10.1239/jap/1183667414. Google Scholar
G. O. Roberts and O. Stramer, Langevin diffusions and Metropolis-Hastings algorithms, Methodology and Computing in Applied Probability, 4 (2002), 337-357. doi: 10.1023/A:1023562417138. Google Scholar
G. O. Roberts and R. Tweedie, Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms, Biometrika, 83 (1996), 95-110. doi: 10.1093/biomet/83.1.95. Google Scholar
K. Roeder and L. Wasserman, Practical Bayesian density estimation using mixtures of Normals, J. American Statist. Assoc., 92 (1997), 894-902. doi: 10.1080/01621459.1997.10474044. Google Scholar
S. Scott, A. Blocker, F. Bonassi, M. Chipman, E. George and R. McCulloch, Bayes and big data: The consensus Monte Carlo algorithm, EFaBBayes 250 Conference, 16 (2013).Google Scholar
C. Sherlock, A. Golightly and D. A. Henderson, Adaptive, delayed-acceptance MCMC for targets with expensive likelihoods, Journal of Computational and Graphical Statistics, 26 (2017), 434-444. doi: 10.1080/10618600.2016.1231064. Google Scholar
C. Sherlock and G. O. Roberts, Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets, Bernoulli, 15 (2009), 774-798. doi: 10.3150/08-BEJ176. Google Scholar
C. Sherlock, A. Thiery and A. Golightly, Efficiency of delayed-acceptance random walk Metropolis algorithms, arXiv preprint, 2015, arXiv: 1506.08155.Google Scholar
C. Sherlock, A. Thiery, G. O. Roberts and J. S. Rosenthal, On the efficiency of pseudo-marginal random walk Metropolis algorithms, The Annals of Statistics, 43 (2015), 238-275. doi: 10.1214/14-AOS1278. Google Scholar
A. Y. Shestopaloff and R. M. Neal, MCMC for non-linear state space models using ensembles of latent sequences, arXiv preprint, 2013, arXiv: 1305.0320.Google Scholar
M. Stephens, Bayesian Methods for Mixtures of Normal Distributions, Ph.D. thesis, University of Oxford, 1997.Google Scholar
I. Strid, Efficient parallelisation of Metropolis-Hastings algorithms using a prefetching approach, Computational Statistics & Data Analysis, 54 (2010), 2814-2835. doi: 10.1016/j.csda.2009.11.019. Google Scholar
L. Tierney, A note on Metropolis-Hastings kernels for general state spaces, Ann. Appl. Probab., 8 (1998), 1-9. doi: 10.1214/aoap/1027961031. Google Scholar
L. Tierney and A. Mira, Some adaptive Monte Carlo methods for Bayesian inference, Statistics in Medicine, 18 (1998), 2507-2515.Google Scholar
X. Wang and D. Dunson, Parallel MCMC via Weierstrass sampler, arXiv preprint, 2013, arXiv: 1312.4605.Google Scholar
Figure 1. Fit of a two-step Metropolis-Hastings algorithm applied to a normal-normal posterior distribution $ \mu|x\sim N(x/(\{1+\sigma_\mu^{-2}\}, 1/\{1+\sigma_\mu^{-2}\}) $ when $ x = 3 $ and $ \sigma_\mu = 10 $, based on $ T = 10^5 $ iterations and a first acceptance step considering the likelihood ratio and a second acceptance step considering the prior ratio, resulting in an overall acceptance rate of 12%
Figure 2. (left) Fit of a multiple-step Metropolis-Hastings algorithm applied to a Beta-binomial posterior distribution $ p|x\sim Be(x+a, n+b-x) $ when $ N = 100 $, $ x = 32 $, $ a = 7.5 $ and $ b = .5 $. The binomial $ \mathcal{B}(N, p) $ likelihood is replaced with a product of $ 100 $ Bernoulli terms and an acceptance step is considered for the ratio of each term. The histogram is based on $ 10^5 $ iterations, with an overall acceptance rate of 9%; (centre) raw sequence of successive values of $ p $ in the Markov chain simulated in the above experiment; (right) autocorrelogram of the above sequence
Figure 3. Two top panels: behaviour of $\ell^*(\delta)$ and $\alpha^*(\delta)$ as the relative cost varies. Note that for $\delta >> 1$ the optimal values converges towards the values computed for the standard Metropolis--Hastings (dashed in red). Two bottom panels: close--up of the interesting region for $0 < \delta < 1$.
Figure 4. Optimal acceptance rate for the DA-MALA algorithm as a function of $\delta$. In red, the optimal acceptance rate for MALA obtained by [27] is met for $\delta = 1$.
Figure 5. Comparison between geometric MALA (top panels) and geometric MALA with Delayed Acceptance (bottom panels): marginal chains for two arbitrary components (left), estimated marginal posterior density for an arbitrary component (middle), 1D chain trace evaluating mixing (right).
Table 1. Comparison between MH and MH with Delayed Acceptance on a logistic model. ESS is the effective sample size, ESJD the expected square jumping distance, time is the computation time
Algorithm rel. ESS (av.) rel. ESJD (av.) rel. Time (av.) rel. gain (ESS)(av.) rel. gain (ESJD)(av.)
DA-MH over MH 1.1066 12.962 0.098 5.47 56.18
Table 2. Comparison between standard geometric MALA and geometric MALA with Delayed Acceptance, with ESS the effective sample size, ESJD the expected square jumping distance, time the computation time and a the observed acceptance rate
Algorithm ESS (av.) (sd) ESJD (av.) (sd) time (av.) (sd) a(aver.) ESS/time (aver.) ESJD/time (aver.)
MALA 7504.48 107.21 5244.94 983.47 176078 1562.3 0.661 0.04 0.03
DA-MALA 6081.02 121.42 5373.253 2148.76 17342.91 6688.3 0.09 0.35 0.31
Table 3. Comparison using different performance indicators in the example of mixture estimation, based on 100 replicas of the experiments according to model (9) with a sample size $ n = 500 $, $ 10^5 $ MH simulations and $ 500 $ samples for the prior estimation. ("ESS" is the effective sample size, "time" is the computational time). The actual averaged gain ($ \frac{ESS_{DA}/ESS_{MH}}{time_{DA}/time_{MH}} $) is $ 9.58 $, higher than the "double average" that the table above suggests as being around $ 5 $
Algorithm ESS (av.) (sd) ESJD (av.) (sd) time (av.) (sd)
MH 1575.96 245.96 0.226 0.44 513.95 57.81
MH + DA 628.77 87.86 0.215 0.45 42.22 22.95
Xiangmin Zhang. User perceived learning from interactive searching on big medical literature data. Big Data & Information Analytics, 2017, 2 (5) : 1-16. doi: 10.3934/bdia.2017019
Tieliang Gong, Qian Zhao, Deyu Meng, Zongben Xu. Why curriculum learning & self-paced learning work in big/noisy data: A theoretical perspective. Big Data & Information Analytics, 2016, 1 (1) : 111-127. doi: 10.3934/bdia.2016.1.111
Danuta Gaweł, Krzysztof Fujarewicz. On the sensitivity of feature ranked lists for large-scale biological data. Mathematical Biosciences & Engineering, 2013, 10 (3) : 667-690. doi: 10.3934/mbe.2013.10.667
H.T. Banks, Jimena L. Davis. Quantifying uncertainty in the estimation of probability distributions. Mathematical Biosciences & Engineering, 2008, 5 (4) : 647-667. doi: 10.3934/mbe.2008.5.647
Nick Cercone, F'IEEE. What's the big deal about big data?. Big Data & Information Analytics, 2016, 1 (1) : 31-79. doi: 10.3934/bdia.2016.1.31
Richard Boire. Understanding AI in a world of big data. Big Data & Information Analytics, 2017, 2 (5) : 22-42. doi: 10.3934/bdia.2018001
Masataka Kato, Hiroyuki Masuyama, Shoji Kasahara, Yutaka Takahashi. Effect of energy-saving server scheduling on power consumption for large-scale data centers. Journal of Industrial & Management Optimization, 2016, 12 (2) : 667-685. doi: 10.3934/jimo.2016.12.667
Pankaj Sharma, David Baglee, Jaime Campos, Erkki Jantunen. Big data collection and analysis for manufacturing organisations. Big Data & Information Analytics, 2017, 2 (2) : 127-139. doi: 10.3934/bdia.2017002
Enrico Capobianco. Born to be big: Data, graphs, and their entangled complexity. Big Data & Information Analytics, 2016, 1 (2&3) : 163-169. doi: 10.3934/bdia.2016002
Ali Asgary, Jianhong Wu. ADERSIM-IBM partnership in big data. Big Data & Information Analytics, 2016, 1 (4) : 277-278. doi: 10.3934/bdia.2016010
Francis Ribaud. Semilinear parabolic equations with distributions as initial data. Discrete & Continuous Dynamical Systems - A, 1997, 3 (3) : 305-316. doi: 10.3934/dcds.1997.3.305
Weidong Bao, Wenhua Xiao, Haoran Ji, Chao Chen, Xiaomin Zhu, Jianhong Wu. Towards big data processing in clouds: An online cost-minimization approach. Big Data & Information Analytics, 2016, 1 (1) : 15-29. doi: 10.3934/bdia.2016.1.15
Yang Yu. Introduction: Special issue on computational intelligence methods for big data and information analytics. Big Data & Information Analytics, 2017, 2 (1) : i-ii. doi: 10.3934/bdia.201701i
Yaguang Huangfu, Guanqing Liang, Jiannong Cao. MatrixMap: Programming abstraction and implementation of matrix computation for big data analytics. Big Data & Information Analytics, 2016, 1 (4) : 349-376. doi: 10.3934/bdia.2016015
A. Mittal, N. Hemachandra. Learning algorithms for finite horizon constrained Markov decision processes. Journal of Industrial & Management Optimization, 2007, 3 (3) : 429-444. doi: 10.3934/jimo.2007.3.429
Jian Mao, Qixiao Lin, Jingdong Bian. Application of learning algorithms in smart home IoT system security. Mathematical Foundations of Computing, 2018, 1 (1) : 63-76. doi: 10.3934/mfc.2018004
Chengxiang Wang, Li Zeng, Yumeng Guo, Lingli Zhang. Wavelet tight frame and prior image-based image reconstruction from limited-angle projection data. Inverse Problems & Imaging, 2017, 11 (6) : 917-948. doi: 10.3934/ipi.2017043
Melody Alsaker, Jennifer L. Mueller. Use of an optimized spatial prior in D-bar reconstructions of EIT tank data. Inverse Problems & Imaging, 2018, 12 (4) : 883-901. doi: 10.3934/ipi.2018037
Henrik Garde, Kim Knudsen. 3D reconstruction for partial data electrical impedance tomography using a sparsity prior. Conference Publications, 2015, 2015 (special) : 495-504. doi: 10.3934/proc.2015.0495
Marco Banterle Clara Grazian Anthony Lee Christian P. Robert | CommonCrawl |
André Martineau
André Martineau (born 14 May 1930 – 4 May 1972[1]) was a French mathematician, specializing in mathematical analysis.
Martineau studied at the École Normale Supérieure and received there, with Laurent Schwartz as supervisor, his Ph.D. with a thesis on analytic functionals and then worked for several years with Schwartz. Martineau became a professor at the University of Nice Sophia Antipolis. Shortly before his 42nd birthday, he died of cancer.[2]
His research deals with analysis in several complex variables, where he introduced Fourier-Borel transformations for analytic functionals.[3] (For one complex variable this type of functional transformation was introduced by Émile Borel.) Martineau was one of the early advocates of the theory of Sato's hyperfunctions and gave lectures on this topic in Seminar Bourbaki during 1960–1961.[4] According to Pierre Cartier, Martineau played a role in the development of the concept of schemes in algebraic geometry by means of a remark made to Jean-Pierre Serre.[5]
Consider a quotation from the year 2004:
A set in complex Euclidean space is called C-convex if all its intersections with complex lines are contractible, and it is said to be linearly convex if its complement is a union of complex hyperplanes. These notions are intermediate between ordinary geometric convexity and pseudoconvexity. Their importance was first manifested in the pioneering work of André Martineau from about forty years ago. Since then a large number of new results have been obtained by many different mathematicians.[6]
Martineau was an Invited Speaker at the International Congress of Mathematicians in 1962 in Stockholm with talk Croissance d'une fonction entiers de type exponentiel et supports des fonctionelles analytiques and in 1970 in Nice with talk Fonctionelles analytiques.[7] His doctoral students include Henri Skoda.
His son Jacques Martineau (born 1963) is a movie director and screenwriter.
Selected publications
• Oeuvre, Editions du CNRS 1977, 878 pages
• Martineau Sur la topologie des espaces de fonctions holomorphes, Mathematische Annalen, vol. 163, 1966, p. 62 doi:10.1007/BF02052485
See also
• Martineau's edge-of-the-wedge theorem
References
1. according to the reminiscences of Christer Kiselman, Christer Kiselman's mathematical ancestors
2. Schwartz A mathematician grappling with his century, p. 281
3. Springer Online Reference, Fourier-Borel-Transformation. This line of research by Martineau culminates in Equations différentialles d'ordre infini, Bull. Soc. Math. France , Tome 95, 1967, pp. 109–154
4. Schapira, Pierre (February 2007). "Michio Sato, a Visionary of Mathematics" (PDF): 243–245. {{cite journal}}: Cite journal requires |journal= (help)
5. Cartier A mad day's work, Bulletin of the AMS, vol. 38, 2001, p. 398 MR1848254
6. Andersson, Mats; Passare, Mikael; Sigurdsson, Ragnar (2012). Complex convexity and analytic functionals. Vol. 225. Birkhäuser. ISBN 9783034878715.
7. Martineau, André (1970). "Fonctionelles analytiques" (PDF). Actes, Congrès intern. Math. Vol. Tome 2. pp. 635–642.
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Israel
• United States
• Sweden
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
| Wikipedia |
183 articles found
On the Use of Artificial Intelligence to Define Tank Transfer Functions
Pál Schmitt, Charles Gillan, Ciaran Finnegan
Subject: Engineering, Marine Engineering Keywords: tank transfer function; neural networks; machine learning; OpenFOAM; computational fluid dynamics
Experimental test facilities are generally characterised using linear transfer functions to relate the wavemaker forcing amplitude to wave elevation at a probe located in the wavetank. Second and third order correction methods are becoming available but are limited to certain ranges of waves in their applicability. Artificial intelligence has been shown to be a suitable tool to find even highly nonlinear functional relationships. This paper reports on a numerical wavetank implemented using the OpenFOAM software package which is characterised using artificial intelligence. The aim of the research is to train neural networks to represent non-linear transfer functions mapping a desired surface-elevation time-trace at a probe to the wavemaker input required to create it. These first results already demonstrate the viability of the approach and the suitability of a single setup to find solutions over a wide range of sea states and wave characteristics.
Application of Computational Intelligence Methods for the Automated Identification of Paper-Ink Samples Based on LIBS
Krzysztof Rzecki, Tomasz Sośnicki, Mateusz Baran, Michał Niedźwiecki, Małgorzata Król, Tomasz Łojewski, U Rajendra Acharya, Özal Yildirim, Paweł Pławiak
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: classification; computational intelligence methods; discrimination power; LIBS; machine learning; paper-ink analysis
Laser-induced breakdown spectroscopy (LIBS) is an important analysis technique with applications in many industrial branches and fields of scientific research. Nowadays, the advantages of LIBS are impaired by the main drawback in the analysis of collected data. This procedure is essentially based on the comparison of lines present in the spectrum with a literature database. This paper proposes the use of various computational intelligence methods to develop a reliable and fast classification of non-destructively acquired LIBS spectra into a set of predefined classes. We focus on a specific problem of classification of paper-ink samples into 30 separate, predefined classes. For each of 30 classes (10 pens of each of 5 ink types combined with 10 sheets of 5 paper types plus empty pages) 100 LIBS spectra are collected. Four variants of preprocessing, seven classifiers (Decision trees, Random forest, k-Nearest Neighbour, Support Vector Machine, Probabilistic Neural Network, Multi-Layer Perceptron, and Generalized Regression Neural Network), 5-fold stratified cross-validation and test on an independent set (for methods evaluation) scenarios are employed. Our developed system yielded an accuracy of 99.08% with average classification time of about 0.12 s is obtained using the random forest classifier. Our results clearly demonstrates that machine learning methods can be used to identify the paper-ink samples based on LIBS reliably at a faster rate.
A Selective Survey Review of Computational Intelligence Applications in the Primary Subdomains of Civil Engineering Specializations
Konstantinos Demertzis, Stavros Demertzis, Lazaros Iliadis
Subject: Engineering, Civil Engineering Keywords: Computational Intelligence; Machine/Deep Learning; Fuzzy Computing; Data Analysis; Blockchain; Cloud Computing; Internet of Things; Augmented Reality; Civil Engineering
Advanced state-of-the-art technologies, mainly computational intelligence including Machine/Deep Learning and Fuzzy Computing, through applied research, can provide added value to modern science and, in general, to entrepreneurship and the economy. Artificial intelligence is the field of computer science that attempts to model concepts such as learning, adaptability and perception to synthesize intelligent behavior in solving complex problems, utilizing elements of adaptation to the environment and philosophical reasoning. About the science of civil engineering and, in general, the construction industry, which is one of the most important in economic entrepreneurship, both in terms of the size of the workforce employed and the amount of capital invested, the penetration of artificial intelligence is possible to change industry business models, eliminate costly mistakes, reduce Jobsite injuries and generally make large-scale engineering projects more efficient. The purpose of the paper is to present the recent research on artificial intelligence methods (machine-deep learning, computer vision, natural language processing, fuzzy systems, robotics, etc.) and the corresponding related technologies related to it (extensive data analysis, blockchain, cloud computing, internet of things, augmented reality), in the fields of application of civil engineering science, including structural engineering, geotechnical engineering, hydraulics and water resources management, marine and coastal technology, transport and transportation infrastructure, planning and technical project management, critical infrastructure security and disaster mitigation..
Leishmania Proteomics: An in Silico Perspective
Carlos A. Padilla, Maria J. Alvarez, Aldo F. Combariza
Subject: Life Sciences, Biophysics Keywords: computational chemistry; biophysics; proteomics
Online: 24 May 2019 (12:49:58 CEST)
We report on the state of the art of proteins recognized as potential targets for the development of leishmania treatments through the search of biologically active chemical species, either from experimental in vitro, in vivo, or in silico sources. We classify the gathered information, in several ways: vector taxonomy and geographical distribution, leishmania parasite taxonomic and geographical distribution and enzymatic function (oxidoreductases, transferases, hydrolases, lyases, isomerases, ligases and cytokines). Our aim is to provide a much needed reference layout for research efforts aimed to understand the background of ligand-protein activation/inactivation processes, in this specific case, related with enzymes known to be part of biochemical cascade reactions initiated following a leishmania infectious episode.
Developing Computational Thinking to Help Tackle Pandemic Challenges
Roberto Araya, Masami Isoda, Johan van der Molen Morris
Subject: Keywords: COVID-19; Computational Thinking; Computational Modeling; Lesson Study
COVID-19 has been extremely difficult to control. The lack of understanding of key aspects of pandemics has affected virus transmission. On the other hand, there is a demand to incorporate Computational Thinking (CT) in the curricula with applications in STEM. However, there are still no exemplars in the curriculum that apply CT to real-world problems such as controlling a pandemic or other similar global crises. In this paper, we fill this gap by proposing exemplars of CT for modeling the pandemic. We designed exemplars following the three pillars of the APEC InMside framework for CT: algorithmic thinking, computational modeling, and machine learning. For each pillar, we designed a progressive sequence of activities that covers from elementary to high school. In an experimental study with elementary and middle school students from 2 schools of high vulnerability, we found that the computational modeling exemplar can be implemented by teachers and correctly understood by students. We conclude that it is feasible to introduce the exemplars at all grade levels, and that this is a powerful example of STEM integration that helps reflect and tackle real-world and challenging public health problems of great impact for students and their families.
Computational Evaluation of Thermal Barrier Coatings
Kevin Irick, Nima Fathi
Subject: Engineering, Mechanical Engineering Keywords: verification and validation; computational thermal analysis; computational physics
In the power plant industry, the turbine inlet temperature (TIT) plays a key role in the efficiency of the gas turbine and, therefore, the overall—in most cases combined—thermal power cycle efficiency. Gas turbine efficiency increases by increasing TIT. However, an increase of TIT would increase the turbine component temperature which can be critical (e.g., hot gas attack). Thermal barrier coatings (TBCs)—porous media coatings—can avoid this case and protect the surface of the turbine blade. This combination of TBC and film cooling produces a better cooling performance than conventional cooling processes. The effective thermal conductivity of this composite is highly important in design and other thermal/structural assessments. In this article, the effective thermal conductivity of a simplified model of TBC is evaluated. This work details a numerical study on the steady-state thermal response of two-phase porous media in two dimensions using personal finite element analysis (FEA) code. Specifically, the system response quantity (SRQ) under investigation is the dimensionless effective thermal conductivity of the domain. A thermally conductive matrix domain is modeled with a thermally conductive circular pore arranged in a uniform packing configuration. Both the pore size and the pore thermal conductivity are varied over a range of values to investigate the relative effects on the SRQ. In this investigation, an emphasis is placed on using code and solution verification techniques to evaluate the obtained results. The method of manufactured solutions (MMS) was used to perform code verification for the study, showing the FEA code to be second-order accurate. Solution verification was performed using the grid convergence index (GCI) approach with the global deviation uncertainty estimator on a series of five systematically refined meshes for each porosity and thermal conductivity model configuration. A comparison of the SRQs across all domain configurations is made, including uncertainty derived through the GCI analysis.
Application of Kinetic Flux Vector Splitting Scheme for Solving Viscous Quantum Hydrodynamical Model of Semiconductor Devices
Ubaid Ahmed Nisar, Waqas Ashraf, Shamsul Qamar
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: semiconductors; FDM; computational
In this article, one-dimensional viscous quantum hydrodynamical model of semiconductor devices is numerically investigated. The model treats the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid. It plays an important role in predicting the behavior of electron flow in semiconductor devices. The nonlinear viscous quantum hydrodynamic models contain Euler-type equations for density and current, viscous and quantum correction terms, and a Poisson equation for electrostatic potential. Due to high nonlinearity of model equations, numerical solution techniques are applied to obtain their solutions.. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of macroscopic flux functions of the system on the cell interfaces. The second order accuracy of the scheme is achieved by using MUSCL-type initial reconstruction and Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of current scheme are compared with those obtained from the splitting scheme based on the NT central scheme. The effects of various parameters such as device length, viscosities, different doping and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validates its generic applicability to the given model equations.
Non-Monotonic Dc Stark Shifts in the Rapidly Ionizing Orbitals of the Water Molecule
Patrik Pirkola, Marko Horbatsch
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: Computational Atomic; Molecular Physics
We extend a previously developed model for the Stark resonances of the water molecule. The method employs a partial-wave expansion of the single-particle orbitals using spherical harmonics. To find the resonance positions and decay rates, we use the exterior complex scaling approach which involves the analytic continuation of the radial variable into the complex plane and yields a non-hermitian Hamiltonian matrix. The real part of the eigenvalues provides the resonance positions (and thus the Stark shifts), while the imaginary parts $-\Gamma/2$ are related to the decay rates $\Gamma$, i.e., the full-widths at half-maximum of the Breit-Wigner resonances. We focus on the three outermost (valence) orbitals, as they are dominating the ionization process. We find that for forces directed along the three Cartesian co-ordinates, the fastest ionizing orbital always displays a non-monotonic Stark shift. For the case of fields along the molecular axis we show results as a function of the number of spherical harmonics included ($\ell_{\max}=3,4$). Comparison is made with total molecule resonance parameters from the literature obtained with Hartree-Fock and coupled cluster methods.
Proof that P ≠ NP
Jamell Ivan Samuels
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Computational Complexity; Probability; Mathematics
The question does P = NP has confounded mathematicians and computer scientists alike for over 50 years and although there is an almost unanimous agreement that it in fact does not, there still is no absolute proof. In this paper, I attempt to prove to that P does not equal NP.
The Entropy Function for Non Polynomial Problems and Its Applications for Turing Machines
Matheus Santana Lima
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Computational Complexity; Information Theory; Machine Learning; Computational Statistics; Kolmogorov-Chaitin Complexity; Kelly criterion
We present a general process for the halting problem, valid regardless of the time and space computational complexity of the decision problem. It can be interpreted as the maximization of entropy for the utility function of a given Shannon-Kolmogorov-Bernoulli process. Applications to non-polynomials problems are given. The new interpretation of information rate proposed in this work is a method that models the solution space boundaries of any decision problem (and non polynomial problems in general) as a communication channel by means of Information Theory. We described a sort method that order objects using the intrinsic information content distribution for the elements of a constrained solution space - modeled as messages transmitted through any communication systems. The limits of the search space are defined by the Kolmogorov-Chaitin complexity of the sequences encoded as Shannon-Bernoulli strings. We conclude with a discussion about the implications for general decision problems in Turing machines.
Verification and Validation in Computational Mechanics
Shuvodeep De
Subject: Engineering, Mechanical Engineering Keywords: computational mechanics; composite materials; FEA
Decades ago, when computational was expensive and limited, the structural design was mostly performed by hand calculations using simple mathematical models. For example, it was a common practice to design a structure as complex as the wing of an aircraft by simple beam analysis. However, ever since the classic paper by Turner et al., due to a rapid increase in computational, more complex mathematical models are being used to simulate the physical behavior of complex structural components. To solve intractable problems unsolvable by hand calculations, numerical techniques like Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Finite Difference Method etc. are being employed. In fact, the availability of these methods has led to the development of an entirely new area of research known as Multidisciplinary Design Optimization (MDO) where various disciplines are considered in an optimization problem. The most important question while using a mathematical model to represent practical industrial problems is to what extent these models represent the real-life situation. Computational models are always built on upon assumptions. Simply at looking at the simulation outcomes i.e. the graphical and numerical results, it is often very difficult to ensure if the underlying assumption holds and that the results are reliable. This has led to the development of another field of research known as Verification and Validation (called V&V in short).
Automatic Dialect Adaptation in Finnish and its Effect on Perceived Creativity
Mika Hämäläinen, Niko Partanen, Khalid Alnajjar, Jack Rueter, Thierry Poibeau
Subject: Arts & Humanities, Linguistics Keywords: Finnish; dialect adaptation; computational creativity
Online: 8 September 2020 (05:02:35 CEST)
We present a novel approach for adapting text written in standard Finnish to different dialects. We experiment with character level NMT models both by using a multi-dialectal and transfer learning approaches. The models are tested with over 20 different dialects. The results seem to favor transfer learning, although not strongly over the multi-dialectal approach. We study the influence dialectal adaptation has on perceived creativity of computer generated poetry. Our results suggest that the more the dialect deviates from the standard Finnish, the lower scores people tend to give on an existing evaluation metric. However, on a word association test, people associate creativity and originality more with dialect and fluency more with standard Finnish.
Identifying Protein Features Responsible for Improved Drug Repurposing Accuracies Using The CANDO Platform: Implications for Drug Design
William Mangione, Ram Samudrala
Subject: Biology, Other Keywords: drug repurposing; drug repositioning; computational biology; drug discovery; computational pharmacology; malaria; multitargeting; malaria treatment
Drug repurposing is a valuable tool for combating the slowing rates of novel therapeutic discovery. The Computational Analysis of Novel Drug Opportunities (CANDO) platform performs shotgun repurposing of 2030 indications/diseases using 3733 drugs/compounds to predict interactions with 46,784 proteins and relating them via proteomic interaction signatures. An accuracy is calculated by comparing interaction similarities of drugs approved for the same indications. We performed a unique subset analysis by breaking down the full protein library into smaller subsets and then recombining the best performing subsets into larger supersets. Up to 14% improvement in accuracy is seen upon benchmarking the supersets, representing a 100–1000 fold reduction in the number of proteins considered relative to the full library. Further analysis revealed that libraries comprised of proteins with more equitably diverse ligand interactions are important for describing compound behavior. Using one of these libraries to generate putative drug candidates against malaria results in more drugs that could be validated in the biomedical literature than the list suggested by the full protein library. Our work elucidates the role of particular protein subsets and corresponding ligand interactions that play a role in drug repurposing, with implications for drug design and machine learning approaches to improve the CANDO platform.
Conventional Data Science Techniques to Bioinformatics and Utilizing a Grid Computing Approach to Computational Medicine
Andrew M. K. Nassief
Subject: Mathematics & Computer Science, Other Keywords: bioinformatics; computational genomics; computational medicine; data science; data visualization; parallel processing; grid computing; fog computing
Conventional data visualization software have greatly improved the efficiency of the mining and visualization of biomedical data. However, when one applies a grid computing approach the efficiency and complexity of such visualization allows for a hypothetical increase in research opportunities. This paper will present data visualization examples presented in conventional networks, then go into higher details about more complex techniques related to leveraging parallel processing architecture. Part of these complex techniques include the attempt to build a basic general adversarial network (GAN) in order to increase the statistical pool of biomedical data for analysis as well as an introduction to the project utilizing the decentralized-internet SDK. This paper is meant to show you said conventional examples then go into details about the deeper experimentation and self contained results.
Antifragile Control Systems: The Case of An Anti-Symmetric Network Model of the Tumor–Immune–Drug Interactions
Cristian Axenie, Daria Kurz, Matteo Saveriano
Subject: Engineering, Control & Systems Engineering Keywords: Computational Oncology; Cancer; Antifragility; Control Theory
A therapy's outcome is determined by a tumor's response to treatment which, in turn, depends on multiple factors such as the severity of the disease and the strength of patient's immune response. Gold standard cancer therapies are in most cases fragile when sought to break the ties to either tumor kill ratio or patient toxicity. Lately, research has shown that cancer therapy can be at most robust when handling adaptive drug resistance and immune escape patterns developed by evolving tumors. This is due to the stochastic and volatile nature of the interactions, at the tumor environment level, tissue vasculature, and immune landscape, induced by drugs. Herein, we explore the path towards antifragile therapy control, that generates treatment schemes that are not fragile but go beyond robustness. More precisely, we describe a first instantiation of a control-theoretic method to make therapy schemes cope with the systemic variability in the tumor–immune–drug interactions and gain more tumor kill with less patient toxicity. Considering the anti-symmetric interactions within a model of the tumor–immune–drug network, we introduce the antifragile control framework that demonstrates promising results in simulation. We evaluate our control strategy against state-of-the-art therapy schemes on various experiments and discuss the insights we gained on the potential that antifragile control could have in treatment design in clinical settings.
Are Microreactors the Future of Biodiesel Synthesis?
Rosilene Welter, João Silva Jr., Marcos de Souza, Mariana Lopes, Osvaldir Taranto, Harrson Santana
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Biodiesel; Microreactor; Transesterification; Computational Fluid Dynamics
Microfluidic devices or microdevices refer to systems with a characteristic length in the micrometer range. Systems in this size allow handling small quantities of reagents and samples, with reduced residence time, better control of chemical species concentration, high heat and mass transfers, and high surface/volume ratio. These characteristics led to the application of these microdevices in several areas, such as biological systems, energy, liquid-liquid extraction, food, agricultural sectors, pharmaceuticals, flow chemistry, microreactors, and biodiesel synthesis. Microreactors are devices that have interconnected microchannels, in which small amounts of reagents are manipulated and react for a certain period of time. The traditional characteristics of microreactors are less quantities of reagents and samples, high surface area in relation to volume (10000 m2 m-3), reduction of resistance to heat and mass transfer, reduced reaction times, and narrower residence time distributions. In recent years, several studies have been carried out on biodiesel production in microreactors that explore the influence of operating conditions, mixing and reaction yield, numbering, and especially the microdevices design. Despite all the advantages of microreactors, the literature shows that there are only a few applications on an industrial scale. Two main reasons that hinder the adoption of this technology are the scale-up to a large enough volume to deliver the necessary production capacity and the costs related to industrial manufacturing microreactors. It is often stated that large-scale production of microreactors can be easily achieved by numbering-up. However, researches show that an incredibly high number of microdevices would be needed, which results in a technical unfeasibility and a strong impact on the construction costs of the industrial system. The present review aims to show whether microreactors can replace conventional biodiesel production processes and how this replacement technology could be carried out. The current chapter was divided into the following sections: Introduction, Synthesis and Purification of Biodiesel in Microreactors, Fundamentals of CFD, and Fundamentals of Scale-up. Finally, conclusions and future perspectives are exposed.
Exploring Geometric Feature Hyper-Space in Data to Learn Representations of Abstract Concepts
Rahul Sharma, Bernardete Ribeiro, Alexandre Miguel Pinto, Amilcar F cardoso
Subject: Mathematics & Computer Science, Other Keywords: unsupervised machine learning; hierarchical learning; computational representation; computational cognitive modeling; contextual modeling; classification; IoT data modeling
The term Concept has been a prominent part of investigations in psychology and neurobiology where, mostly, it is mathematically or theoretically represented. The Concepts are also studied computationally through their symbolic, distributed and hybrid representations. The majority of these approaches focused on addressing concrete concepts notion, but the view of the abstract concept is rarely explored. Moreover, most computational approaches have a predefined structure or configurations. The proposed method, Regulated Activation Network (RAN), has an evolving topology and learns representations of Abstract Concepts by exploiting the geometrical view of Concepts, without supervision. In the article, the IRIS data was used to demonstrate: the RAN's modeling; flexibility in concept identifier choice; and deep hierarchy generation. Data from IoT's Human Activity Recognition problem is used to show automatic identification of alike classes as abstract concepts. The evaluation of RAN with 8 UCI benchmarks and the comparisons with 5 Machine Learning models establishes the RANs credibility as a classifier. The classification operation also proved the RAN's hypothesis of abstract concept representation. The experiments demonstrate the RANs ability to simulate psychological processes (like concept creation and learning) and carry out effective classification irrespective of training data size.
Domain's Sweep, a Computational Method to Optimise Chemical and Physical Processes
Alexandre César Balbino Barbosa Filho
Subject: Engineering, Biomedical & Chemical Engineering Keywords: computational methods; optimisation; plant optimisation; chemical processes; physical processes; computational intelligence; maximum value; minimum value; functions
Many of engineering's problems are about to optimise equations, functions and process models equations which appear on common or even complex science's cases. Most applied optimisation methods nowadays can only be used on particular cases, so the main objective of this article is to define a computational method that can optimise or even find target values for specified objective functions and variables, by just using computer effort. Chemical plants involve many differential equations and they can be optimised with more facility now. On this paper, there are shown as example, an optimisation for a chemical reactor, which is found the optimal temperature and volumetric flow rate of feed, for the given objective function being the composition of the desired product. Named Domain's Sweep, it is an algorithm that evaluate given(s) mathematical function(s) and/or equation(s) by varying your(s) independent variable(s) through loops with a given step size and solving it after closing the degrees of freedom, and finally, with some condition statements, store all the optimum values of given or created objective functions with its respective independent variables. In another words, the user create an objective function and this method find the function's maximum, minimum or a certain chosen target value, even if it does not have an inflection point in the given search interval of the independent variables.
Optimizing Efficiency and Motility of A Polyvalent Molecular Motor
Mark Rempel, Eldon Emberly
Subject: Physical Sciences, Acoustics Keywords: molecular motor; burnt bridge ratchet; computational model
Molecular motors play a vital role in the transport of material within the cell. A family of motors of growing interest are burnt bridge ratchets (BBRs). BBRs rectify spatial fluctuations into directed motion by creating and destroying motor-substrate bonds. It has been shown that the motility of a BBR can be optimized as a function of the system parameters. However, the amount of energy input required to generate such motion and the resulting efficiency has been less well characterized. Here, using a deterministic model, we calculate the efficiency of a particular type of BBR, namely a polyvalent hub interacting with a surface of substrate. We find that there is an optimal burn rate and substrate concentration that leads to optimal efficiency. Additionally, the substrate turnover rate has important implications on motor efficiency. We also consider the effects of force-dependent unbinding on the efficiency and find that under certain conditions the motor works more efficiently when bond breaking is included. Our results provide guidance for how to optimize the efficiency of BBRs.
Random Triangle Theory: a Computational Approach
Ivano Azzini
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Random Triangle; Quasiorthogonal Dimension; Combinatorics; Computational Problems
In this work we study the following problem, from a computational point of view: If three points are selected in the unit square at random, what is the probability that the triangle obtained is obtuse, acute or right? We provide two convergent strategies: the frst derived from the ideas introduced in [2] and the second built on the combinatorics theory. The combined use of these two methods allows us to address the random triangle theory from a new perspective and, we hope, to work out a general method of dealing with some classes of computational problems.
Computational Biology and Machine Learning Approaches to Study Mechanistic Microbiomehost Interactions
Padhmanand Sudhakar, Kathleen Machiels, Severine Vermeire
Subject: Life Sciences, Microbiology Keywords: Computational methods; machine learning; microbiome-host interactions
The microbiome, by virtue of its interactions with the host, is implicated in various host functions including its influence on inflammation, nutrition, and homeostasis. Although driven by a complex combination of intrinsic and extrinsic factors, many chronic diseases such as diabetes, cancer, Inflammatory Bowel Disease among others are characterized by a disruption of microbial communities in at least one biological niche/organ system. Various molecular mechanisms between microbial and host components such as proteins, RNAs, metabolites etc have recently been elucidated, thus filling many gaps in our understanding of how the microbiome modulates host processes. Concurrently, high throughput technologies have enabled the profiling of heterogeneous datasets capturing community level changes in the microbiome as well as the host responses. However, due to pragmatic limitations with respect to parallel sampling and analytical procedures, big gaps still exist in terms of how the microbiome mechanistically influences host functions at a systems and community level. In the past decade, various computational biology and machine learning methodologies and approaches have been developed with an aim to fill these existing gaps. Due to the agnostic nature of the tools, they have been applied in various disease contexts to analyze and infer the interactions between the microbiome and host molecular components, and in the case of a few selected tools, on downstream host processes. Generally, most of the tools are enabled by frameworks to statistically or mechanistically integrate different types of -omic and meta -omic datasets followed by functional/biological interpretation. In this review, we provide an overview of the landscape of computational approaches for investigating mechanistic microbiome-host interactions and their potential benefit for basic and clinical research. These could include but are not limited to the development of activity and mechanism based biomarkers, uncovering mechanisms for therapeutic interventions and generating integrated signatures to stratify patients.
Unbiased Approach for the Identification of Molecular Mechanisms Sensitive to Chemical Exposures
Alexander Suvorov, Victoria Salemme, Joseph McGaunn, Anthony Poluyanoff, Menna Teffera, Saira Amir
Subject: Life Sciences, Molecular Biology Keywords: adverse outcome pathway; toxicity pathway; computational toxicology
Background: Targeted methods that dominated toxicological research until recently did not allow for screening of all molecular changes involved in toxic response. Therefore, it is difficult to infer if all major mechanisms of toxicity have already been discovered, or if some of them are still overlooked. Objectives: To identify molecular mechanisms sensitive to chemical exposures in an unbiased manner. Methods: We used data on 641,516 unique chemical-gene interactions from the Comparative Toxicogenomic Database. Only data from high-throughput gene expression experiments with human, rat or mouse cells/tissues were extracted. The total number of chemical-gene interactions was calculated for every gene, and used as a measure of gene sensitivity to chemical exposures. These values were further used in enrichment analyses to identify molecular mechanisms sensitive to chemical exposures. Results: Remarkably, use of different input subsets with non-overlapping lists of chemical compounds identified largely the same genes and molecular pathways as most sensitive to chemical exposures, indicative of an unbiased nature of our analysis. One of the most important findings of this study is that almost every known molecular mechanism may be affected by chemical exposures. Predictably, xenobiotic metabolism pathways and mechanisms of cellular response to stress and damage were among the most sensitive. Additionally, our analysis identified a range of highly sensitive molecular pathways, which are not widely recognized by modern toxicology as major targets of toxicants, including lipid metabolism pathways, longevity regulation cascade and cytokine mediated signaling. Discussion: Molecular mechanisms identified as the most sensitive to chemical exposures are relevant for significant public health problems, such as aging, cancer, metabolic and autoimmune disease. Thus, public health system will likely benefit from future research focus on these sensitive molecular mechanisms. Additionally, approach used in this study may guide identification of priority adverse outcome pathways (AOP) for in-vitro and in-silico toxicity testing methods.
Preprint ESSAY | doi:10.20944/preprints202001.0189.v1
What Drives Computational Chemistry Forward: Theory or Computational Power?
Wenfa Ng
Subject: Chemistry, General & Theoretical Chemistry Keywords: theory; simulation; computational power; epochs, science history
History is often thought to be dull and boring – where large numbers of facts are memorized for passing exams. But the past informs the present and future, especially in delineating the context surrounding specific events that, in turn, help provide a deeper understanding of their causes and implications. Scientific progress (whether incremental or breakthroughs) is built upon prior work. Chronological examination of computational chemistry's evolution reveals the existence of major "epochs" (e.g., transition from semi-empirical methods to first principles calculations), and the centrality of key ideas (e.g., Schrodinger equation and Born Oppenheimer approximation) in potentiating progress in the field. The longstanding question of whether computing power (both capacity and speed) or theoretical insights play a more important role in advancing computational chemistry was examined by taking into account the field's development holistically. Specifically, availability of large amount of computing power at declining cost, and advent of graphics processing unit (GPU) powered parallel computing are enabling tools for solving hitherto intractable problems. On the other hand, this essay argues (using Born Oppenheimer approximation as an example) that theoretical insights' role in unlocking problems through simple (but insightful) assumptions is often overlooked. Collectively, the essay should be useful as a primer for appreciating major development periods in computational chemistry, from which counterfactual questions illuminate the relative importance of theoretical insights and advances in computer science in moving the field forward.
Macromolecular Modeling and Design in Rosetta: New Methods and Frameworks
Julia Koehler Leman, Brian D Weitzner, Steven M Lewis, RosettaCommons Consortium, Richard Bonneau
Subject: Chemistry, General & Theoretical Chemistry Keywords: structure prediction; Rosetta; computational modeling; protein design
The Rosetta software suite for macromolecular modeling, docking, and design is widely used in pharmaceutical, industrial, academic, non-profit, and government laboratories. Considering its broad modeling capabilities, Rosetta consistently ranks highly when compared to other leading methods created for highly specialized protein modeling and design tasks. Developed for over two decades by a global community of scientists at more than 60 institutions, Rosetta has undergone multiple refactorings, and now comprises over three million lines of code. Here we discuss the methods developed in the last five years, involving the latest protocols for structure prediction, protein–protein and protein–small molecule docking, protein structure and interface design, loop modeling, the incorporation of various types of experimental data, and modeling of peptides, antibodies and other proteins in the immune system, nucleic acids, non-standard amino acids, carbohydrates, and membrane proteins. We briefly discuss improvements to the energy function, user interfaces, and usability of the software. Rosetta is available at www.rosettacommons.org.
A Review and Introduction to New Aspects of Digital and Computational Approaches to Human and AI Ethics
Hector Zenil
Subject: Arts & Humanities, Philosophy Keywords: philosophy of information; organised complexity; Kolmogorov complexity; logical depth; ethics of information; computational ethics; infoethics; machine ethics; computational complexity
I review previous attempts, including recent ones, to introduce technical aspects of digital information and computation into the discussion of ethics. I survey some limitations and advantages of these attempts to produce guiding principles at different scales. In particular, I briefly introduce and discuss questions, approaches, challenges, and limitations based on, or related to, simulation, information theory, integrated information, computer simulation, intractability, algorithmic complexity, and measures of computational organisation and sophistication. I discuss and propose a set of features that ethical frameworks must possess in order to be considered well-grounded, both in theoretical and methodological terms. I will show that while global ethical frameworks that are uncomputable are desirable because they provide non-teleological direction and open-ended meaning, constrained versions should be able to provide guidelines at more local and immediate time scales. In connection to the ethics of artificial intelligence, one point that must be underscored about computational approaches is that (General) AI should only embrace an ethical framework that we humans are willing to adopt. I think that such a framework is possible, taking the form of a general and universal (in the sense of computation) framework built from first computational principles.
Prediction of Multi-Inputs Bubble Column Reactor Using a Novel Hybrid Model of Computational Fluid Dynamics and Machine Learning
Amir Mosavi, Shahab Shamshirband, Ely Salwana, Kwok-wing Chau, Joseph H. M. Tah
Subject: Engineering, Civil Engineering Keywords: machine learning, computational fluid dynamics (CFD), hybrid model, adaptive neuro-fuzzy inference system (ANFIS), artificial intelligence, big data, prediction, forecasting, optimization, hydrodynamics, fluid dynamics, soft computing, computational intelligence, computational fluid mechanics
The combination of artificial intelligence algorithms and numerical methods has recently become popular in the prediction of macroscopic and microscopic hydrodynamics parameters of bubble column reactors. The multi inputs and outputs machine learning can cover small phase interactions or large fluid behavior in industrial domains. This numerical combination can develop the smart multiphase bubble column reactor with the ability of low-cost computational time. It can also decrease case studies for the optimization process when big data is appropriately used during learning. There are still many model parameters that need to be optimized for a very accurate artificial algorithm, including data processing and initialization, the combination of inputs and outputs, number of inputs and model tuning parameters. For this study, we aim to train four inputs big data during learning process by an adaptive neuro-fuzzy inference system or adaptive-network-based fuzzy inference system (ANFIS) method, and we consider the superficial gas velocity as one of the input variables, while for the first time, one of the computational fluid dynamics (CFD) outputs named gas velocity is used as an output of the artificial algorithm. The results show that the increasing number of input variables improves the intelligence of the ANFIS method up to , and the number of rules during learning process has a significant effect on the accuracy of this type of modeling. The results also show that propper selection of model parameters results in more accuracy in prediction of the flow characteristics in the column structure.
Effect of Blood Transfusion on Cerebral Hemodynamics and Vascular Topology Described by Computational Fluid Dynamics in Sickle Cell Disease Patients.
Russell P. Sawyer, Sirjana Pun, Kristine A. Karkoska, Cherita A. Clendinen, Michael R. Debaun, Ephraim Gutmark, Riccardo Barrile, Hyacinth I. Hyacinth
Subject: Life Sciences, Genetics Keywords: Cell Disease; Stroke; Neuroimaging; Hematology; Computational fluid dynamics
The main objective of this study is to demonstrate proof of principle that computational fluid dynamics (CFD) modeling is a tool for studying the contribution of covert and overt vascular architecture to the risk of cerebrovascular disease in in sickle cell disease (SCD) as well as uncover one or more mechanism of response to therapy such as chronic red blood cell (cRBC) transfusion. We analyzed baseline (screening), pre-randomization and study exit magnetic resonance angiogram (MRA) images from 10 (5 each from the transfusion and observation arms) pediatric sickle SCD participants in the silent cerebral infarct transfusion (SIT) trial, using CFD modeling. We reconstructed the intracranial portion of the internal carotid artery and branches and extracted the geometry using 3D Slicer. We cut specific potions of the large intracranial artery to include segments of the internal carotid, middle, anterior, and posterior cerebral artery such that the vessel segment analyzed extended from the intracranial beginning of the internal carotid artery up to immediately after (~0.25 inches) the middle cerebral artery branching point. Cut models were imported into Ansys 2021R2/2022R1 and laminar and time-dependent flow simulation was performed. Change in time averaged mean velocity, wall shear stress, and vessel tortuosity were compared between the observation and cRBC arm. We did not observe a correlation between time averaged mean velocity (TAMV) and mean transcranial doppler (TCD) velocity at study entry. There was also no difference in change in time average mean velocity, wall shear stress (WSS), and vessel tortuosity between the observation and cRBC transfusion arms. WSS and TAMV were abnormal for 2 (developed TIA) out of the 3 participants (one participant had SCI) that developed neurovascular outcomes. CFD approaches allows for the evaluation of vascular topology and hemodynamics in SCD using MRA images. In this proof of principle study, we show that CFD could be a useful tool and we intend to carry out future studies with a larger sample to enable more robust conclusions.
Do Written Responses to Open-Ended Questions on Fourth-Grade Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests?
Felipe Urrutia, Roberto Araya
Subject: Social Sciences, Education Studies Keywords: Computational linguistics; elementary mathematics; formative assessments; student models
: Predicting long-term student learning is a critical task for teachers and for educational data mining. However, most of the models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for formative assessment. Therefore, there are a huge number of possible questions, each of which is answered by only a few students. Second, formative assessment often involved open-ended questions that students answer in writing. These types of questions in formative assessment are highly valuable. However, analyzing the responses automatically can be a complex process. In this paper, we address these two challenges. We analyzed 621,575 answers to closed-ended questions and 16,618 answers to open-ended questions by 464 fourth-graders from 24 low-SES schools. We constructed a classifier to detect incoherent responses to open-ended mathematics questions. We then used it in a model to predict scores on an end-of-year national standardized test. We found that despite answering 36.4 times fewer open-ended questions than closed questions, including features of the students' open responses in our model improved our prediction of their end-of-year test scores. To the best of our knowledge, this is the first time that a predictor of end-of-year test scores has been improved by using automatically detected features of answers to open-ended questions on formative assessments.
Atmospheric Contamination of Coastal Cities by the Exhaust Emissions of Docked Marine Vessels: The Case of Tromsø
Asier Zubiaga, Synne Madsen, Hassan Khawaja, Gernot Boiger
Subject: Earth Sciences, Atmospheric Science Keywords: computational fluid dynamics; OpenFOAM; docked vessel; gas pollutants
Docked ships are a source of contamination for the city while they keep their engine working. Plumes emissions from large boats can carry a number of pollutants to the nearby cities causing a detrimental effect on the life quality and health of local citizens and ecosystems. A computational fluid dynamics model of the harbour area of Tromsø has been built in order to model the deposition of CO2 gas emitted by docked vessels within the city. The ground level distribution of the emitted gas has been obtained and the influence of the wind speed and direction, vessel chimney height, ambient temperature and exhaust gas temperature has been studied. The deposition range is found to be the largest when the wind speed is low. At high wind speeds, the deposition of pollutants along the wind direction is enhanced and spots of high pollutant concentration can be created. The simulation model is intended for the detailed study of the contamination in cities near the coast or an industrial pollutant source of any type of gas pollutants and can easily be extended for the study of particulate matter.
On Bioquantum State Systems, The Study of Algebraic Ecology and Macrocellular Biology
Max Gotts
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: biology; mathematics; computational biology; linear algebra; abstract algebra
This dissertation is a rigorous study of ecology and macrocellular biology as a subfield of abstract algebra. We begin with the creation of an axiomatic paradigm, then move onto constructing a universal genetic code of biology. We use this to define increasingly complex algebraic structures (ecosystem, evolving populations, etc.). We prove a variety of theorems regarding to the members of the previous mathematical constructs, notably the following three: 1. There is one unique phenotypic representation of each organism. For example, if you subdivide any piece of genetic code into its phenotypic components, then two identical organisms have identical decomposed DNA 2. There are a finite number of indivisible phenotypic traits. 3. The three sophioid-definitions are equivalent: (a) dynamical evolutionary enlargement of the medial temporal lobe and frontal lobe, (b) reliance upon intelligence, (c) the existence of an intellectually- or socially-hierarchical society. Much has yet to be done on this work, but as a first draft, it stands as a jumping point for future exploits; I am working on an entirely revised second draft.
Investigating Strength that Comprehensive mRNA Expression Level of Prognostic Genes Influences on Patient Survival for Every Cancer Type
Minhyeong Lee
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: Bioinformatics, Cancer, Genomics, Computational Biology, RNA sequencing, TCGA
This study aimed to rank cancers by the strength of relationship between comprehensive mRNA expression of the most harmful or protective genes and patient survival. Using TCGA dataset including RNA-SEQ and clinical data, we investigated not only gene specific prognostic availability, but also comprehensive prognostic availability of prognostic genes filtered by cox coefficient, and ranked cancers by specially designed prognostic indicator. Through Kaplan-Meier plots, we checked that cancers vary in the strength of influence of prognostic genes, and they follow as the rank. Developing treatment with method to reduce or increase expression of biomarkers for specific cancer which ranked bottom, it would be not efficient in high probability. The results of this study can be a scientific evidence for that.
Hydrodynamic Light Flashing in Thin Layer Wavy Photobioreactors
Monica Moroni, Simona Lorino, Agnese Cicci, Marco Bravi
Subject: Engineering, Biomedical & Chemical Engineering Keywords: microalgae; photobioreactor; flashing light effect; Computational Fluid Dynamics
In a thin-volume photobioreactor where a concentrated suspension of microalgae is circulated throughout the established spatial irradiance gradient, microalgal cells experience a time-variable irradiance. Deploying this feature is the most convenient way of obtaining the so-called "flashing light" effect, improving biomass production in high irradiance. This work investigates the light flashing features of sloping wavy photobioreactors, a recently proposed type, by introducing and validating a Computational Fluid Dynamics model. Two characteristic flow zones (straight top-bottom stream and local recirculation stream), both effective toward light flashing, have been found and characterised: a recirculation-induced frequency of 3.7 Hz and straight flow-induced frequency of 5.6 Hz were estimated. If the channel slope is increased, the recirculation area becomes less stable while the recirculation frequency is nearly constant with flow rate. The validated CFD model is a mighty tool that could be reliably used to further increase the flashing frequency by optimising the design, the dimensions, the installation and the operational parameters of the sloping wavy photobioreactor.
Finding Exact Forms on a Thermodynamic Manifold
Chao Ju, Mark Stalzer
Subject: Physical Sciences, Other Keywords: thermodynamics; entropy; artificial intelligence; differential geometry; computational physics
Because only two variables are needed to characterize a simple thermodynamic system in equilibrium, any such system is constrained on a 2D manifold. Of particular interest are the exact 1-forms on the cotangent space of that manifold, since the integral of exact 1-forms is path-independent, a crucial property satisfied by state variables such as internal energy dE and entropy dS. Our prior work [1] shows that given an appropriate language of vector calculus, a machine can re-discover the Maxwell equations and the incompressible Navier-Stokes equations from data. In this paper, We enhance this language by including differential forms and show that machines can re-discover the equation for entropy dS given data. Since entropy appears in various fields of science in different guises, a potential extension of this work is to use the machinery developed in this paper to let machines discover the expressions for entropy from data in fields other than classical thermodynamics.
An Extension of the All-Mach Number Pressure-Based Solution Framework for Numerical Modelling of Two-Phase Flows with Interface
Matvey Kraposhin, Aleksandr Kukharskii, Viktoria Korchagova, Aleksandr Shevelev
Subject: Physical Sciences, Fluids & Plasmas Keywords: two-phase flow; compressible flow; interfacial flow; computational hydrodynamic; computational gas dynamic; finite volume method; OpenFOAM; All-Mach number solver
In this paper, we present the extension of the pressure-based solver designed for the simulation of compressible and/or incompressible two-phase flows of viscous fluids. The core of the numerical scheme is based on the hybrid Kurganov— Noele — Petrova/PIMPLE algorithm. The governing equations are discretized in the conservative form and solved for velocity and pressure, with the density evaluated by an equation of state. The acoustic-conservative interface discretization technique helps to prevent the unphysical instabilities on the interface. The solver was validated on various cases in wide range of Mach number, both for single-phase and two-phase flows. The numerical algorithm was implemented on the basis of the well-known open-source Computational Fluid Dynamics library OpenFOAM in the solver called interTwoPhaseCentralFoam. The source code and the pack of test cases are available on GitHub: https://github.com/unicfdlab/hybridCentralSolvers
Simulation of Traffic Born Pollutant Dispersion and Personal Exposure Using High Resolution Computational Fluid Dynamics
Sadjad Tajdaran, Fabrizio Bonatesta, Denise Morrey, Byron Mason
Subject: Earth Sciences, Environmental Sciences Keywords: air quality; nitrogen oxides; dispersion modelling; computational fluid dynamics
Road vehicles are a large contributor to Nitrogen Oxides (NOx) pollution. The routine road-side monitoring stations, however, may underrepresent the severity of personal exposure in urban areas, because long-term average readings cannot capture the effects of momentary, high peaks of air pollution. While numerical modelling tools historically have been used to propose an improved distribution of monitoring stations, ultra-high resolution Computational Fluid Dynamics models can further assist the relevant stakeholders in understanding the important details of pollutant dispersion and exposure at local level. This study deploys a 10 cm-resolution CFD model to evaluate actual high peaks of personal exposure to NOx from traffic, by tracking the gases emitted from the tailpipe of moving vehicles being dispersed towards the roadside. The investigation shows that a set of four Euro 5-rated diesel vehicles travelling at constant speed may generate momentary road-side concentrations of NOx as high as 1.25 mg/m3, with 25% expected increase for doubling the number of vehicles and approximately 50% reduction when considering Euro 6-rated vehicles. The paper demonstrates how the numerical tool can be used to identify the impact of measures to reduce personal exposure, such as protective urban furniture, as traffic patterns and environmental conditions change.
Autologous Gradient Formation Under Differential Interstitial Fluid Flow Environments
Caleb Stine, Jennifer Munson
Subject: Engineering, Biomedical & Chemical Engineering Keywords: interstitial flow; glioma; chemotaxis; autologous; computational; gradient; CXCL12; migration
Fluid flow and chemokine gradients play a large part in not only regulating homeostatic processes in the brain, but also in pathologic conditions by directing cell migration. Tumor cells in particular are superior at invading into the brain resulting in tumor recurrence. One mechanism that governs cellular invasion is autologous chemotaxis, whereby pericellular chemokine gradients form due to interstitial fluid flow (IFF) leading cells to migrate up the gradient. Glioma cells have been shown to specifically use CXCL12 to increase their invasion under heightened interstitial flow. Computational modeling of this gradient offers better insight into the extent of its development around single cells, yet very few conditions have been modelled. In this paper, a computational model is developed to investigate how a CXCL12 gradient may form around a tumor cell and what conditions are necessary to affect its formation. Through finite element analysis using COMSOL and coupled convection-diffusion/mass transport equations, we show that velocity (IFF magnitude) has the largest parametric effect on gradient formation, multidirectional fluid flow causes gradient formation in the direction of the resultant which is governed by IFF magnitude, common treatments and flow patterns have a spatiotemporal effect on pericellular gradients, exogenous background concentrations can abrogate the autologous effect depending on how close the cell is to the source, that there is a minimal distance away from the tumor border required for a single cell to establish an autologous gradient, and finally that the development of a gradient formation is highly dependent on specific cell morphology.
Working Paper DATA DESCRIPTOR
A Geo-Tagged COVID-19 Twitter Dataset for 10 North American Metropolitan Areas over a 255-Day Period
Sara Melotte, Mayank Kejriwal
Subject: Social Sciences, Other Keywords: COVID-19; Twitter; Geo-Tagged; Metropolitan; Computational Social Science
One of the unfortunate findings from the ongoing COVID-19 crisis is the disproportionate impact the crisis has had on people and communities who were already socioeconomically disadvantaged. It has, however, been difficult to study this issue at scale and in greater detail using social media platforms like Twitter. Several COVID-19 Twitter datasets have been released, but they have very broad scope, both topically and geographically. In this paper, we present a more controlled and compact dataset that can be used to answer a range of potential research questions (especially pertaining to computational social science) without requiring extensive preprocessing or tweet-hydration from the earlier datasets. The proposed dataset comprises tens of thousands of geotagged (and in many cases, reverse-geocoded) tweets originally collected over a 255-day period in 2020 over 10 metropolitan areas in North America. Since there are socioeconomic disparities within these cities (sometimes to an extreme extent, as witnessed in `inner city neighborhoods' in some of these cities), the dataset can be used to assess such socioeconomic disparities from a social media lens, in addition to comparing and contrasting behavior across cities.
New Mechanistic Insights on Carbon Nanotubes Nanotoxicity Using Isolated Submitochondrial Particles, Molecular Docking, and Nano-QSTR Approaches
Michael González-Durruthy1, Riccardo Concu, Juan Ruso, Maria Natalia Dias Soeiro Cordeiro
Subject: Materials Science, Nanotechnology Keywords: mitochondria, F0F1-ATPase, carbon nanotubes, computational nanotoxicology.; QSAR; NanoQSAR
Herein, we present a combination of experimental and computational study on the mitochondrial F0F1-ATPase nanotoxicity inhibition induced by single-walled carbon nanotubes (SWCNT-pristine, SWCNT-COOH). To this end, the in vitro inhibition responses in submitochondrial particles (SMP) as F0F1-ATPase enzyme were strongly dependent on the concentration assay (from 3 to 5 µg/ml) for both types of carbon nanotubes. Besides, both SWCNTs show an interaction inhibition pattern like the oligomycin A (the specific mitochondria F0F1-ATPase inhibitor). Furthermore, the best crystallography binding pose obtained for the docking complexes based on the free energy of binding (FEB), fit well with the previous in vitro evidences from the thermodynamics point of view. Following an affinity order as: FEB (oligomycin A/F0-ATPase complex) = -9.8 kcal/mol > FEB (SWCNT-COOH/F0-ATPase complex) = - 6.8 kcal/mol ~ FEB (SWCNT-pristine complex) = -5.9 kcal/mol. With predominance of van der Waals hydrophobic nanointeractions with key F0-ATPase binding site residues (Phe 55 and Phe 64). By the other hand, results on elastic network models, and fractal-surface analysis suggest that SWCNTs induce significant perturbations by triggering abnormal allosteric responses and signals propagation in the inter-residue network which could affect the substrate recognition ligand geometrical specificity of the F0F1-ATPase enzyme in order (SWCNT-pristine > SWCNT-COOH). Besides, the performed Nano-QSTR models for both SWCNTs show that this method may be used for the prediction of the nanotoxicity induced by SWCNT. Overall, the obtained results may open new avenues toward to the better understanding and prediction of new nanotoxicity mechanisms, rational drug-design based nanotechnology, and potential biomedical application in precision nanomedicine.
Deep Cerebellar Transcranial Direct Current Stimulation of the Dentate Nucleus to Facilitate Standing Balance in Chronic Stroke Survivors
Zeynab Rezaee, Surbhi Kaura, Dhaval Solanki, Adyasha Dash, M V Padma Srivastava, Uttama Lahiri, Anirban Dutta
Subject: Medicine & Pharmacology, Other Keywords: cerebellar transcranial direct current stimulation; dentate nucleus; computational modeling
Objective: Cerebrovascular accidents are the second leading cause of death and the third leading cause of disability worldwide. We hypothesized that cerebellar transcranial direct current stimulation (ctDCS) of the dentate nuclei and the lower-limb representations in the cerebellum can improve standing balance functional reach in chronic (> 6 months' post-stroke) stroke survivors. Materials and Methods: Magnetic resonance imaging(MRI) based subject-specific electric field was computed across 10 stroke survivors and one healthy MRI template to find an optimal bipolar bilateral ctDCS montage to target dentate nuclei and lower-limb representations (lobules VII-IX). Then, in a repeated-measure crossover study on 5 stroke survivors, we compared 15minutes of 2mA ctDCS based on the effects on successful functional reach(%) during standing balance task. Three-way ANOVA investigated the factors of interest– brain regions, montages, stroke participants, and their interactions.Results: "One-size-fits-all" ctDCS montage for the clinical study was found to be bipolar PO9h – PO10h for dentate nuclei and bipolar Exx7–Exx8 for lobules VII-IX with contalesional anode. Bipolar PO9h–PO10h ctDCS performed significantly (alpha=0.05) better in facilitating successful functional reach (%) when compared to bipolar Exx7–Exx8 ctDCS. Furthermore, a linear relationship between successful functional reach (%) and electric field strength was found where bipolar PO9h–PO10h montage resulted in a significantly (alpha=0.05) higher electric field strength when compared to bipolar Exx7–Exx8 montage for the same 2mA current. Conclusion: We presented a rational neuroimaging based approach to optimize deep ctDCS of the dentate nuclei and lower limb representations in the cerebellum for post-stroke balance rehabilitation.
Mathematical Model and Simulation for Nutrient-Plant Interaction Analysis
Byunghyun Ban
Subject: Biology, Agricultural Sciences & Agronomy Keywords: Systems Biology; Horticulture; Computational Biology; Complex System; Fertilization; System Modeling
Differential equation models to understand interaction between plant and nutrient solution are presented. The root cells selectively emit H+ ions with active transport consuming ATPs to establish electrical gradient along the cell membrane. It establishes electrical field with Nernst potential to make positively charged ions outside the cell membrane flow into the root cell. Anion influx is also modulated by H+ ion concentration because plant root cell absorbs negatively charged particles with symport. If an anion collides with H+ cell to make net charge as neutral, at symport channel, it can flow through. In this paper, mathematical models for cation and anion absorption are introduced. Cation absorption model was induced from Ohm's law combined with Goldman's equation. Anion absorption model is similar to chemical reaction rate model. Both models have physiological terms influenced by gene expression pattern, species or phenotypes. Cation model also includes terms for ion's kinetic and electrical properties, growth of plant and interaction between the root and the surroundings. Simulation for 20 different sets of coefficients showed that the physiology-related coefficient has important role on nutrition absorption tendencies of plants.
Super Field-of-View Lensless Camera by Coded Image Sensors
Tomoya Nakamura, Keiichiro Kagawa, Shiho Torashima, Masahiro Yamaguchi
Subject: Engineering, Electrical & Electronic Engineering Keywords: computational imaging; lensless camera; CMOS image sensor; compressive sensing
A lensless camera is an ultra-thin computational-imaging system. Existing lensless cameras are based on the axial arrangement of an image sensor and a coding mask, and therefore, the back side of the image sensor cannot be captured. In this paper, we propose a lensless camera with a novel design that can capture the front and back sides simultaneously. The proposed camera is composed of multiple coded image sensors, which are complementary-metal-oxide-semiconductor~(CMOS) image sensors in which air holes are randomly made at some pixels by drilling processing. When the sensors are placed facing each other, the object-side sensor works as a coding mask and the other works as a sparsified image sensor. The captured image is a sparse coded image, which can be decoded computationally by using compressive-sensing-based image reconstruction. We verified the feasibility of the proposed lensless camera by simulations and experiments. The proposed thin lensless camera realizes super field-of-view imaging without lenses or coding masks, and therefore can be used for rich information sensing in confined spaces. This work also suggests a new direction in the design of CMOS image sensors in the era of computational imaging.
Preprint CASE REPORT | doi:10.20944/preprints201712.0133.v1
Research on Natural Gas Pipeline Leakage and Ventilation Scheme in Tunnel
Hongfang Lu, Kun Huang, Lingdi Fu, Zhihao Zhang, Shijuan Wu, You Lyu
Subject: Engineering, Civil Engineering Keywords: tunnel; gas pipeline; leakage; computational fluid dynamics; ventilation scheme
Due to poor ventilation conditions in the tunnel, if gas pipeline leaks, the consequence of the accident will be more serious. Therefore, before the emergency repair, gas in the tunnel needs to be excharged so as not to explode during the repair process. Therefore, it is necessary to study the ventilation of gas in the tunnel. Based on the computational fluid dynamics (CFD) theory and taking the Yanyingshan tunnel section of China-Myanmar pipeline as an example, this paper uses Fluent software to establish the leakage model of the gas pipeline and fan model in the tunnel and analyzes the influence of different fan locations and number of fans on gas concentration. It can be concluded that: (1) the use of press-in method makes it more efficient to discharge gas out of the tunnel. (2) In order to make ventilation efficient, the fan should be arranged in a higher position and needs to be at a distance from the top of the tunnel. (3) Parallel use of two fans has better ventilation effect than single fan.
GPU Accelerated Particle-based Computational Acoustics Solving Based on SPH
Linxu Fan, Yongou Zhang, Chizhong Wang, Tao Zhang
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: SPH; particle-based computational acoustics (PCA); meshfree method; GPU
Smoothed particle hydrodynamics (SPH) is regarded as a pure Lagrangian approach, which can solve fluid dynamics problems without the creation of mesh. In this paper, a paralleled SPH solver is developed to solve particle-based computational acoustics (PCA). The aim of this paper is to study the feasibility of using SPH to solve acoustic problems and to improve the efficiency of solving processes by paralleling some procedures on GPU during calculating. A stand SPH code running serially in a CPU is proposed to solve wave equation. This is a wave propagating in a two-dimensional domain. After finishing the computation, the results are compared with the theoretical solutions and they agree well. So its feasibility is verified. There are two main methods for searching neighbor particles: all-pair search method and linked-list search method. Both methods are used in different codes to simulate an identical problem and their runtimes are compared to investigate their searching efficiencies. The runtime results show that linked-list search method has a higher efficiency, which can save a lot of searching time when simulating problems with huge amounts of particles. Furthermore, the percentages of different procedures' runtimes in a simulation are also discussed to find the most consuming one. Then, some codes are modified to run in different GPUs and their runtimes are compared with those of serial ones on a CPU. Runtime results show that the paralleled algorithm can be more than 80 times faster than the serial one. The result shows that GPU paralleled SPH computing can achieve desirable accuracy and speed in solving acoustic problems.
Quantifying Mosaic Development: Towards an Evo-Devo Postmodern Synthesis of the Evolution of Development via Differentiation Trees of Embryos
Bradly Alicea, Richard Gordon
Subject: Biology, Anatomy & Morphology Keywords: developmental biology; computational biology; lineage trees; embryogenesis; biological complexity
Embryonic development proceeds through a series of differentiation events. The mosaic version of this process (binary cell divisions) can be analyzed by comparing early development of Ciona intestinalis and Caenorhabditis elegans. To do this, we reorganize lineage trees into differentiation trees using the graph theory ordering of relative cell volume. Lineage and differentiation trees provide us with means to classify each cell using binary codes. Extracting data characterizing lineage tree position, cell volume, and nucleus position for each cell during early embryogenesis, we conduct several statistical analyses, both within and between taxa. We compare both cell volume distributions and cell volume across developmental time within and between single species and assess differences between lineage tree and differentiation tree orderings. This enhances our understanding of the differentiation events in a model of pure mosaic embryogenesis and its relationship to evolutionary conservation. We also contribute several new techniques for assessing both differences between lineage trees and differentiation trees, and differences between differentiation trees of different species. The results suggest that at the level of differentiation trees, there are broad similarities between distantly related mosaic embryos that might be essential to understanding evolutionary change and phylogeny reconstruction. Differentiation trees may therefore provide a basis for an Evo-Devo Postmodern Synthesis.
Quantum Similarity in our DNA, and in DNA Storage
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Quantum Similarity; DNA; Molecular Biology; Meticodes; response variance; object set reference; Bioinformatics; Computational Physics; Computational Models; Mathematics; Probability; Statistics; Comparative Modeling; Biostatics; Biostatistics
The usage of Quantum Similarity through the equation Z = {∀θ ∈ Z → ∃s ∈ S ∧ ∃t ∈ T : θ = (s, t)}, represents a way to analyze the way communication works in our DNA. Being able to create the object set reference for z being (s, t) in our DNA strands, we are able to set logical tags and representations of our DNA in a completely computational form. This will allow us to have a better understanding of the sequences that happen in our DNA. With this approach, we can also utilize mathematical formulas such as the Euler–Mascheroni constant, regression analysis, and computational proofs to answer important questions on Quantum biology, Quantum similarity, and Theoretical Physics.
Impact of Astrocytic Coverage of Synapses on the Short-term Memory of a Computational Neuron-Astrocyte Network
Zonglun Li, Yuliya Tsybina, Susanna Gordleeva, Alexey Zaikin
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: neuron; astrocyte; network; short-term memory; spatial frequency; computational biology
Working memory refers to the capability of the nervous system to selectively retain short-term memories in an active state. The long-standing viewpoint is that neurons play an indispensable role and working memory is encoded by synaptic plasticity. Furthermore, some recent studies have shown that calcium signaling assists the memory processes and the working memory might be affected by the astrocyte density. Over the last few decades, growing evidence has also revealed that astrocytes exhibit diverse coverage of synapses which are considered to participate in neuronal activities. However, very little effort has yet been made to attempt to shed light on the potential correlations between these observations. Hence, in this article we will leverage a computational neuron-astrocyte model to study the short-term memory performance subject to various astrocytic coverage and we will demonstrate that the short-term memory is susceptible to this factor. Our model may also provide plausible hypotheses for the various sizes of calcium events as they are reckoned to be correlated with the astrocytic coverage.
Structure (Epicardial Stenosis) and Function (Microvascular Dysfunction) that Influences Coronary Fractional Flow Reserve Estimation
Jermiah J. Joseph, Clara Sun, Ting-Yim Lee, Daniel Goldman, Sanjay R. Kharche, Chris W. McIntyre
Subject: Life Sciences, Biophysics Keywords: Coronary vasculature; lumped parameter model; fractional flow reserve; computational cardiology
Background. The treatment of coronary stenosis relies on invasive high risk surgical assessment to generate the fractional flow reserve diagnostics index, a ratio of distal to proximal pressures in respect of coronary atherosclerotic plaque causing stenosis. Non-invasive methods are therefore a need of the times. This study proposes an extensible mathematical description of the coronary vasculature that permits rapid estimation of the coronary fractional flow reserve. Methods. By adapting an existing closed loop model of human coronary blood flow, the effects of large vessel stenosis and microvascular disease on fractional flow reserve were quantified. Several simula-tions generated flow and pressure information which was used to compute fractional flow re-serve under a spectrum of conditions including focal stenosis, diffuse stenosis, and microvascular disease. Sensitivity analysis stratified the influence of model parameters on the index. The model was simulated as coupled non-linear ordinary differential equations and numerically solved us-ing an implicit higher order method. Results. Large vessel stenosis affected fractional flow re-serve. The model predicts that the presence, rather than severity, of microvascular disease affect coronary flow deleteriously. Sensitivity analysis revealed that heart rate may not affect the index. Conclusions. The model provides a computationally inexpensive instrument for future in silico coronary blood flow investigations as well as clinical-imaging decision making. A combination of focal and diffuse stenosis appears to be essential in reducing the index. In addition to pressure measurements in the large epicardial vessels, diagnosis of microvascular disease is essential. The independence of the index with respect to heart rate suggests that computationally inexpensive steady state simulations may provide sufficient information to reliably compute the index.
Review on Chemical Graph Theory and Its Application in Computer-Assisted Structure Elucidation
Mehmet Aziz Yirik, Kumsal Ecem Colpan, Saskia Schmidt, Maria Sorokina, Christoph Steinbeck
Subject: Chemistry, Analytical Chemistry Keywords: chemical graph theory; computational chemistry; CASE; computer-assisted structure elucidation
The chemical graph theory is a subfield of mathematical chemistry which applies classic graph theory to chemical entities and phenomena. Chemical graphs are main data structures to represent chemical structures in cheminformatics. Computable properties of graphs lay the foundation for (quantitative) structure activity and structure property predictions - a core discipline of cheminformatics. It has a historic relevance for natural sciences, such as chemistry, biochemistry and biology, and is in the heart of modern disciplines, such as cheminformatics and bioinformatics. This review first covers the history of chemical graph theory, then provides an overview of its various techniques and applications for CASE, and finally summarises modern tools using chemical graph theory for CASE.
Modeling Seizures: From Single Neurons to Networks
Damien Depannemaecker, Alain Destexhe, Viktor Jirsa, Christophe Bernard
Subject: Keywords: epilepsy; computational model; seizures; single neurons level; networks; whole brain
Dynamical system tools offer a complementary approach to detailed biophysical seizure modeling, with a high potential for clinical applications. This review describes the theoretical framework that provides a basis for theorizing certain properties of seizures and for their classification according to their dynamical properties at onset and offset. We describe various modeling approaches spanning different scales, from single neurons to large-scale networks. This narrative review provides an accessible overview of this field, including non-exhaustive examples of key recent works.
Aerodynamic and Aeroacoustic Performance of Tail Rotor Investigation and Modification
Ai-Peng Hao, Yu-Hong Jia
Subject: Engineering, Mechanical Engineering Keywords: helicopter, tail rotor, aeroacoustic, finite element method, computational fluid dynamic
With the increasingly stringent airworthiness standards, the noise generated during the rotorcraft flight is gradually attracting people's attention. It widely operated helicopters at low altitudes because of their maneuverability. The way to reduce the noise caused by the complex airflow of the helicopter rotor system has progressively become a hot topic for researchers. Using a hybrid acoustic analysis method, this paper investigates the improvement of the noise and thrust of the helicopter's tail rotor through the tail rotor structural parameters. For the basic model, the turbulence simulation is performed using an incompressible detached eddy simulation (DES) method, and the Lighthill acoustic analog equation is calculated using the finite element method (FEM). We verified the accuracy of the method through wind tunnel tests. We chose a series of structural parameters for sound simulation and fluid simulation calculations. The results indicate that the modified tail rotor noise reduced by 16.5 dBA and the total thrust increased by 19.9% from the prototype model. This work can enhance the duct tail rotor design to improve aerodynamic and aeroacoustic performance.
Enriching Elementary School Mathematical Learning with the Steepest Descent Algorithm
Roberto Araya
Subject: Social Sciences, Accounting Keywords: Elementary Mathematics; STEM; Mathematical Modeling; Computational Thinking; Steepest Descent Algorithm
The Steepest Descent (or Ascent) algorithm is one of the most widely used algorithms in Science, Technology, Engineering, and Mathematics (STEM). However, this powerful mathematical tool is neither taught nor even mentioned in K12 education. We study whether it is feasible for elementary school students to learn this algorithm, while also aligning with the standard school curriculum. We also look at whether it can be used to create enriching activities connected to children's real-life experiences, thus enhancing the integration of STEM and fostering Computational Thinking. To address these questions, we conducted an empirical study in two phases. In the first phase, we tested the feasibility with teachers. In a face-to-face professional development work-shop with 457 mathematics teachers actively participating using an online platform, we found that after a 10-minute introduction they could successfully apply the algorithm and use it in a couple of models. They were also able to complete two complex and novel tasks: selecting models and adjusting the parameters of a model that uses the steepest descent algorithm. In a second phase, we tested the feasibility with 90 fourth graders from 3 low Socioeconomic Status (SES) schools. Using the same introduction and posing the same questions, we found that they were able to understand the algorithm and successfully complete the tasks on the online platform. Additionally, we found that close to 75% of the students completed the two complex modeling tasks and performed similarly to the teachers.
Subject: Social Sciences, Education Studies Keywords: Elementary Mathematics; STEM; Mathematical Modeling; Computational Thinking; Steepest Descent Algorithm
Application of Quantum Computing to Biochemical Systems: A Look to the Future
Hai-Ping Cheng, Erik Deumens, James Freericks, Chenglong Li, Beverly Sanders
Subject: Life Sciences, Biophysics Keywords: computational molecular biology, biochemistry, quantum computing, hybrid quantum-classical algorithms
Chemistry has been viewed as one of the most fruitful near-term applications to science of quantum computing. Recent work in transitioning classical algorithms to a quantum computer has led to great strides in improving quantum algorithms and illustrating their quantum advantage. Much less effort has been placed on how one finishes these calculations by using the results from the quantum computer (on the active region of the molecule) and embeds them back into the remainder of the molecule in order to determine the properties of the entire molecule. Such strategies are critical if one wants to expand the focus to biochemical molecules that contain active regions that cannot be properly explained with classical algorithms on classical computers. While we do not solve this problem here, we provide an overview of where the field is going to enable such problems to be tackled in the future.
Beta-Adrenergic Receptor Stimulation Limits the Cellular Proarrhythmic Effects of Chloroquine and Azithromycin
Henry Sutanto, Jordi Heijman
Subject: Medicine & Pharmacology, Cardiology Keywords: arrhythmia; computational modeling; COVID-19; chloroquine; azithromycin; beta-adrenergic; electrophysiology
Background: The antimalarial drug chloroquine and antimicrobial drug azithromycin have received significant attention during the current COVID-19 pandemic. Both drugs can alter cardiac electrophysiology and have been associated with drug-induced arrhythmias. Meanwhile, sympathetic activation is commonly observed during systemic inflammation and oxidative stress (e.g., in SARS-CoV-2 infection), and may influence the electrophysiological effects of chloroquine and azithromycin. Here, we investigated the effect of beta-adrenergic stimulation on proarrhythmic properties of chloroquine and azithromycin using a detailed in silico model of ventricular electrophysiology. Methods: Concentration-dependent chloroquine and azithromycin-induced alterations in ion-channel function were incorporated into the Heijman canine ventricular cardiomyocyte model. Single and combined drug effects on action-potential (AP) properties were analyzed using a population of 592 models accommodating inter-individual variability. Sympathetic stimulation was simulated by an increase in pacing rate and experimentally validated isoproterenol-induced changes in ion-channel function. Results: At 1 Hz pacing, therapeutic doses of chloroquine and azithromycin (5 and 20 µM, respectively) individually prolonged AP duration (APD) by 33% and 13%. Their combination produced synergistic APD prolongation (+161%) with incidence of proarrhythmic early afterdepolarizations in 53.5% of models. Increasing the pacing frequency to 2 Hz shortened APD and together with 1 µM isoproterenol corrected the drug-induced APD prolongation. No afterdepolarizations occurred following increased rate and simulated application of 0.1-1 µM isoproterenol. Conclusion: Sympathetic stimulation limits chloroquine- and azithromycin-induced proarrhythmia by reducing their APD-prolonging effect, suggesting the importance of heart rate and autonomic status monitoring in particular conditions (e.g., COVID-19).
The Status of Causality in Biological Databases for Logical Modeling: Data Resources and Data Retrieval Possibilities
Vasundra Touré, Åsmund Flobak, Anna Niarakis, Steven Vercruysse, Martin Kuiper
Subject: Life Sciences, Other Keywords: causal interactions; databases; interoperability; biological pathway; logical modeling; computational biology
Causal molecular interactions represent key building blocks used in computational modeling, where they facilitate the assembly of regulatory networks. These regulatory networks can then be used to predict biological and cellular behavior by system perturbations and in silico simulations. Today, broad sets of these interactions are being made available in a variety of biological knowledge resources. Moreover, different visions, based on distinct biological interests, have led to the development of multiple ways to describe and annotate causal molecular interactions. Therefore, data users can find it challenging to efficiently explore resources of causal interaction and to be aware of recorded contextual information that ensures valid use of the data. This manuscript presents a review of public resources collecting causal interactions and the different views they convey, together with a thorough description of the export formats established to store and retrieve these interactions. Our goal is to raise awareness amongst the targeted audience, i.e., logical modelers, but also any scientist interested in molecular causal interactions, about existing data resources and how to get familiar with them.
Seven Challenges in the Multiscale Modelling of Multicellular Tissues
Alexander Fletcher, James Osborne
Subject: Life Sciences, Cell & Developmental Biology Keywords: Multiscale modelling; cell-based modelling; computational biology; multicellular systems biology
The growth and dynamics of multicellular tissues involve tightly regulated and coordinated morphogenetic cell behaviours, such as shape changes, movement, and division, which are governed by subcellular machinery and involve coupling through short- and long-range signals. A key challenge in the fields of developmental biology, tissue engineering and regeneration is to understand how relationships between scales produce emergent tissue-scale behaviours. Recent advances in molecular biology, live-imaging and ex vivo techniques have revolutionised our ability to study these processes experimentally. To fully leverage these techniques and obtain a more comprehensive understanding of the causal relationships underlying tissue dynamics, computational modelling approaches are increasingly spanning multiple spatial and temporal scales, and are coupling cell shape, growth, mechanics and signalling. Yet such models remain technically challenging: modelling at each scale requires different areas of technical skills, while integration across scales necessitates the solution to novel mathematical and computational problems. This review aims to summarise recent progress in multiscale modelling of multicellular tissues and to highlight ongoing challenges associated with the construction, implementation, interrogation and validation of such models.
Development and Assessment of an Integrated 1D-3D CFD Codes Coupling Methodology for Diesel Engine Combustion Simulation and Optimization
Federico Millo, Andrea Piano, Benedetta Peiretti Paradisi, Mario Rocco Marzano, Andrea Bianco, Francesco C. Pesce
Subject: Engineering, Automotive Engineering Keywords: diesel engines; numerical simulation; pollutant emissions prediction; computational fluid dynamics
In this paper an integrated methodology for the coupling between 1D- and 3D-CFD simulation codes is presented, which has been developed to support the design and calibration of new diesel engines. The aim of the proposed methodology is to couple 1D engine models, which may be available in the early-stage engine development phases, with 3D predictive combustion simulations, in order to obtain reliable estimates of engine performance and emissions for newly designed automotive diesel engines. The coupling procedure features simulations performed in 1D-CFD by means of GT-SUITE and in 3D-CFD by means of Converge, executed within a specifically designed calculation methodology. An assessment of the coupling procedure has been performed by comparing its results with experimental data acquired on an automotive Diesel engine, considering different working points including both part load and full load conditions. Different multiple injection schedules have been evaluated for part-load operation, including pre and post injections. The proposed methodology, featuring detailed 3D chemistry modeling, was proven to be capable to properly assess pollutant formation, specifically to estimate NOx concentrations. Soot formation trend was also well-matched for most of the explored working points. The proposed procedure can therefore be considered as a suitable methodology to support the design and calibration of new Diesel engines, thanks to its ability to provide reliable engine performance and emissions estimations from the early-stage of a new engine development.
Evaluating Approximations and Heuristic Measures of Integrated Information
André Sevenius Nilsen, Bjørn Erik Juel, William Marshall, Johan Frederik Storm
Subject: Mathematics & Computer Science, Other Keywords: integrated information theory; differentiation; integration; complexity; consciousness; computational; IIT; Phi
Integrated information theory (IIT) proposes a measure of integrated information (Φ) to capture the level of consciousness for a physical system in a given state. Unfortunately, calculating Φ itself is currently only possible for very small model systems, and far from computable for the kinds of systems typically associated with consciousness (brains). Here, we consider several proposed measures and computational approximations, some of which can be applied to larger systems, and test if they correlate well with Φ. While these measures and approximations capture intuitions underlying IIT and some have had success in practical applications, it has not been shown that they actually quantify the type of integrated information specified by the latest version of IIT. In this study, we evaluated these approximations and heuristic measures, based not on practical or clinical considerations, but rather based on how well they estimate the Φ values of model systems. To do this, we simulated networks consisting of 3–6 binary linear threshold nodes randomly connected with excitatory and inhibitory connections. For each system, we then constructed the system's state transition probability matrix (TPM), as well as its state transition matrix (STM) over time for all possible initial states. From these matrices, we calculated, approximations to Φ, and measures based on state differentiation, state entropy, state uniqueness, and integrated information. All measures were correlated with Φ in a state dependent and state independent manner. Our findings suggest that Φ can be approximated closely in small binary systems by using one or more of the readily available approximations (r > 0.95), but without major reductions in computational demands. Furthermore, Φ correlated strongly with measures of signal complexity (LZ, rs = 0.722), decoder based integrated information (Φ*, rs = 0.816), and state differentiation (D1, rs = 0.827), on the system level (state independent). These measures could allow for efficient estimation of Φ on a group level, or as accurate predictors of low, but not high, Φ systems. While it's uncertain whether the results extend to larger systems or systems with other dynamics, we stress the importance that measures aimed at being practical alternatives to Φ are at a minimum rigorously tested in an environment where the ground truth can be established.
Topological Signature of 19th Century Novelists: Persistence Homology in Context-Free Text Mining
Shafie Gholizadeh, Armin Seyeditabari, Wlodek Zadrozny
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: topological data analysis; text mining; computational topology; style; persistent homology
Topological Data Analysis (TDA) refers to a collection of methods that find the structure of shapes in data. Although recently, TDA methods have been used in many areas of data mining, it has not been widely applied to text mining tasks. In most text processing algorithms, the order in which different entities appear or co-appear is being lost. Assuming these lost orders are informative features of the data, TDA may play a significant role in the resulted gap on text processing state of the art. Once provided, the topology of different entities through a textural document may reveal some additive information regarding the document that is not reflected in any other features from traditional text processing methods. In this paper, we introduce a novel approach that hires TDA in text processing in order to capture and use the topology of different same-type entities in textural documents. First, we will show how to extract some topological signatures in the text using persistent homology-i.e., a TDA tool that captures topological signature of data cloud. Then we will show how to utilize these signatures for text classification.
3D Imaging based on Depth Measurement Technologies
Ni Chen, Chao Zuo, Edmund Y. Lam, Byoungho Lee
Subject: Engineering, Electrical & Electronic Engineering Keywords: Three-dimensional imaging; computational imaging; light field; holography; phase imaging
Three-dimensional (3D) imaging has attracted more and more interests because of its widespread applications, especially in information and life science. These techniques can be broadly divided into two types: ray-based and wavefront-based 3D imaging. Issues such as imaging quality and system complexity of these techniques limit the applications significantly, and therefore many investigations have focused on 3D imaging from depth measurements. This paper presents an overview of 3D imaging from depth measurements, and provides a summary of the connection between these the ray-based and wavefront-based 3D imaging techniques.
Analysis of the Aerodynamic and Structural Performance of a Cooling Fan with Morphing Blade
Alessio Suman, Annalisa Fortini, Nicola Aldi, Michele Pinelli, Mattia Merlin
Subject: Engineering, Mechanical Engineering Keywords: morphing blade; adaptive geometry; computational fluid dynamics; fluid-structure coupling
The concept of smart morphing blades, which can control themselves to reduce or eliminate the need for active control systems, is a highly attractive solution in blade technology. In this paper an innovative passive control system based on Shape Memory Alloys (SMAs) is proposed. On the basis of previous thermal and shape characterization of a single morphing blade for a heavy-duty automotive cooling axial fan, this study deals with the numerical analysis of the aerodynamic loads acting on the fan. By coupling CFD and FEM approaches it is possible to analyze the actual blade shape resulting from both the aerodynamic and centrifugal loads. The numerical results indicate that the polymeric blade structure ensures proper resistance and enables shape variation due to the action of the SMA strips.
Review of Computational Methods on Brain Symmetric and Asymmetric Analysis from Neuroimaging Techniques
P. Kalavathi, K. Senthamilselvi, V. B. Surya Prasath
Subject: Medicine & Pharmacology, Clinical Neurology Keywords: computational imaging; midsagittal plane; inter-hemispheric fissure; symmetry analysis; neuroimaging
Brain is the most complex organ in the human body and it is divided into two hemispheres - left and right hemispheres. Left hemisphere is responsible for control of right side of our body whereas right hemisphere is responsible for control of left side of our body. Brain image segmentation from different neuroimaging modalities is one of the important parts in clinical diagnostic tools. Neuroimaging based digital imagery generally contain noise, inhomogeneity, aliasing artifacts, and orientational deviations. Therefore, accurate segmentation of brain images is a very difficult task. However, the development of accurate segmentation of brain images is very important and crucial for a correct diagnosis of any brain related diseases. One of the fundamental segmentation tasks is to identify and segment inter-hemispheric fissure/mid-sagittal plane, which separate the two hemispheres of the brain. Moreover, the symmetric/asymmetric analyses of left and right hemispheres of brain structures are important for radiologists to analyze diseases such as Alzheimer's, Autism, Schizophrenia, Lesions and Epilepsy. Therefore, in this paper we have analyzed the existing computational techniques used to find brain symmetric/asymmetric analysis in various neuroimaging techniques (MRI/CT/PET/SPECT), which are utilized for detecting various brain related disorders.
Do Written Responses to Open-Ended Questions on Fourthgrade Online Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests?
Subject: Social Sciences, Education Studies Keywords: Computational Linguistics; Online Learning; Student Model; Online Formative Assessments; Student Achievement
Predicting long-term student achievement is a critical task for teachers and for educational data mining. However, most of the models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for online formative assessment. Therefore, there are a huge number of possible questions, each of which is answered by only a few students. Second, online formative assessment often involves open-ended questions that students answer in writing. These types of questions in online formative assessment are highly valuable. However, analyzing the responses automatically can be a complex process. In this paper, we address these two challenges. We analyzed 621,575 answers to closed-ended questions and 16,618 answers to open-ended questions by 464 fourth-graders from 24 low-SES schools. Using linguistic features of the answers and an automatic incoherent response classifier, we built a linear model that predicts the score on and end-of-year national standardized test. We found that despite answering 36.4 times fewer open-ended questions than closed questions, including features of the students' open responses in our model improved our prediction of their end-of-year test scores. To the best of our knowledge, this is the first time that a predictor of end-of-year test scores has been improved by using automatically detected features of answers to open-ended questions on online formative assessments.
Thermochemical analysis of a packed-bed reactor using finite elements with FlexPDE and COMSOL Multiphysics
Sebastian Taco-Vasquez, César A. Ron, Herman A. Murillo, Andrés Chico, Paul G. Arauz
Subject: Engineering, Biomedical & Chemical Engineering Keywords: packed-bed reactor; computational fluid dynamics; FlexPDE; COMSOL Multiphysics; Fischer-Tropsch
The present study shows a methodology for analyzing and designing a cylindrical packed-bed reactor considering stationary and dynamic models. The design comprises the reactor's stationary and dynamic governing differential equations for mass and heat transfer under multi-dimensional approaches. The results included simulation of concentration, temperature, and reaction rate profiles via the 1-D and 2-D differential equations solution with FlexPDE software. The analysis was complemented with a scaled 3-D dynamic model implemented in COMSOL Multiphysics. Both FlexPDE and COMSOL Multiphysics relied on the finite element technique to solve the governing differential equations. The simulated concentration and temperature profiles from both FlexPDE and COMSOL models were compared to experimental data gathered from literature (specifically from a Fischer-Tropsch process to produce low-molecular-weight hydrocarbons in a configuration of cylindrical packed-bed reactors). Simulated concentration and temperature profiles from the 2-dimensional dynamic model and the COMSOL model were in good agreement with the trend observed in experimental data. Finally, the predicted reaction rate profiles from the COMSOL model and the 2-dimensional dynamic model followed the temperature trend, thus reflecting the temperature dependence of the reaction.
A Semi-Synthetic Study of Multidimensional Imaging using a Scattering Lens
Shivasubramanian Gopinath
Subject: Physical Sciences, Optics Keywords: Computational Imaging; Non-linear reconstruction (NLR); Holography; scattering lens; 3D imaging
Scattering has been always considered a problem in most of the imaging and holography systems. In this project, a 3D imaging system has been developed based on scattering against common beliefs. The 3D imaging system consists of only two components namely a scattering lens, fabricated by grinding the surface of a convex lens using sandpaper and a web camera. The point spread function (PSF) in the form of speckle distribution was recorded using a laser source in the first step. A synthetic object was selected which was convolved with the PSF in a computer to generate the object intensity distribution. The image of the object was reconstructed by processing the PSF and object intensity distribution using a computational reconstruction method called non-linear reconstruction. The recorded PSF was scaled and the process was repeated for a different synthetic object. The concept was extended to 3D by summing the object intensity distributions generated using PSFs with different scaling factors. The image at different planes can be reconstructed using the PSFs corresponding to that plane.
Optimal COVID-19 Therapeutic Candidate Discovery Using the CANDO Platform
William Mangione, Zackary Falls, Ram Samudrala
Subject: Life Sciences, Biochemistry Keywords: COVID-19; SARS-CoV-2; drug discovery; multitargeting; computational drug repurposing
The worldwide outbreak of SARS-CoV-2 in early 2020 caused numer- ous deaths and unprecedented measures to control its spread. We employed our Computational Analysis of Novel Drug Opportunities (CANDO) multiscale therapeutic discovery, repurposing, and design platform to identify small molecule inhibitors of the virus to treat its resulting indication, COVID-19. Initially, few experimental studies existed on SARS-CoV-2, so we optimized our drug candidate prediction pipelines using results from two independent high-throughput screens against prevalent human coronaviruses. Ranked lists of candidate drugs were generated using our open source cando.py software based on viral protein inhibition and proteomic interaction similarity. For the former viral protein inhibition pipeline, we computed interaction scores between all compounds in the corresponding candidate library and eighteen SARS-CoV proteins using an interaction scoring protocol with extensive parameter optimization which was then applied to the SARS-CoV-2 proteome for prediction. For the latter similarity based pipeline, we computed interaction scores between all compounds and human protein structures in our libraries then used a consensus scoring approach to identify candidates with highly similar proteomic interaction signatures to multiple known anti-coronavirus actives. We published our ranked candidate lists at the very beginning of the COVID-19 pandemic. Since then, 51 of our 276 predictions have demonstrated anti-SARS-CoV-2 activity in published clinical and experimental studies. These results illustrate the ability our platform to rapidly respond to emergent pathogens and provide greater evidence that treating compounds in a multitarget context more accurately describes their behavior in biological systems.
Prevention of hazards induced by a Radiation Fireball through Computational Geometry and Parametric Design
Francisco Salguero-Andújar, Joseph M. Cabeza-Lainez, Federico Blasco-Macias
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: prevention of explosion risks; fireball; thermal radiation; computational geometry; geometric algorithms.
Radiation Fireballs are singular phenomena which involve severe thermal radiation and consequently, they need to be duly assessed and prevented. Although the radiative heat transfer produced by a sphere is relatively well known, the shadowing measures implemented to control the fireball's devastating effects have frequently posed a difficult analytical instance, mainly due to its specific configuration. In this article, since the usual solving equations for the said cases are impractical, the authors propose a novel graphic-algorithm method that sorts the problem efficiently for different kinds of obstructions and relative positions of the fireball and the defenses. Adequate application of this method may improve the safety of a significant number of facilities exposed to such risks.
Employing Statistical Machine Reading for Inferring Key Concepts of a Research Field From a Body of Abstracts and Blog Posts
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: statistical machine reading; metabolic engineering; concepts; terms and phrases; computational complexity
The world of science is drowned in a wealth of information. How to make sense of this wealth of published articles, blog posts and abstracts has become an important challenge given the importance of science to different aspects of societal function. At the crux of the issue lies the increasing trend where scientific discovery informs decision making at the societal level. One example, is the elucidation of the ozone hole to the promulgation of the Montreal Protocol in 1987, and documenting increasing atmospheric carbon dioxide concentration led to climate action and signing of the Paris Agreement in 2015. Hence, understanding a research field becomes an important need for many decision makers across different sectors of society. But, the scientific literature is cryptic and esoteric, and presents a significant barrier to comprehension. One approach to ameliorate the problem is statistical machine reading, which provides the critical capability of identifying key concepts that underpins a research field. Such important concepts help provide an incision point to gain further understanding of the field and initiating further conversation about the field. This work sought to validate the concept of whether applying statistical machine reading to a body of literature comprising short blog posts and abstracts of published articles help in understanding the field of metabolic engineering. One important angle pursued in this research is whether the tabulated list of terms and phrases identified by statistical machine reading could be creatively analyzed to gain a deeper understanding of the research field. For example, the most frequently occurring terms and phrases could describe key concepts of the research field. Moving down in frequency occurrence would be terms and phrases that describe methodologies and approaches of the field. Finally, less frequently occurring terms and phrases may be tools and resources used in the research field. Results validated the utility of statistical machine reading in identifying important terms and phrases associated with the research field. But the small dataset of blog posts and abstracts used in this study severely hampered the identification of most of the key concepts of metabolic engineering, which is a fairly broad field of research. Overall, statistical machine reading shows utility in identifying terms and phrases that could describe a field. However, the level of understanding is closely tied in to the breadth and depth of reading material available, which meant that the methodology is data intensive in nature. Future use of supercomputing or quantum computing could help alleviate constraints of computational capacity, and help tackle the exponential rise in computational complexity as the size of the reading material for machine reading expands.
Computational Fluid Dynamic (CFD) Simulation of Bifurcate Artery
Yas Barzegar, Atrin Barzegar
Subject: Keywords: flow pattern; Artery wall; heart muscle; flow rate; computational fluid dynamic
Heart attacks and strokes are one of the leading causes of death in the world today, and heart attacks caused by clogged arteries that carry blood to the heart muscle are a significant part of these strokes. These are caused by the accumulation of fat particles in the walls of the arteries and the reduction of blood flow through it over a long process. The process of fat penetration in the underlying layers of the Artery wall has been the focus of many researchers, and various researches and Simulations have been done on it, in each of them, the effect of specific parameters has been considered. In the present study, the effect of blood flow rate on the flow pattern in a bifurcate artery with two ducts has been investigated using FLUENT software with Computation fluid dynamic Method. The effect of the angle between the two ducts of the Artery on the flow pattern has been investigated.
Improved Wells Turbine Using a Concave Sectional Profile
Reza Valizadeh, Madjid Abbaspour, Mohammad Taeibi Rahni, Mohsen Saffari Pour, Christopher Hulme-Smith
Subject: Engineering, Automotive Engineering Keywords: wells turbine; oscillating water column; wave energy converter; computational fluid dynamics
The current need to develop sustainable power sources has led to the development of ocean-based conversion systems. Wells turbine is a widely used converter in such systems which suffers from a lack of operational range and power production capacity under operational conditions. The profile named IFS which is concave in the post-mid-chord region, can produce significantly larger lift forces and show better separation behavior than the NACA profiles. In the present study, we tested this profile for the first time in a Wells turbine. The performance of six different blade designs with IFS and NACA profiles were evaluated and compared using a validated computational fluid dynamic model. Although the substitution of the NACA profile with the IFS profile in all cases increased the torque generated, the most efficient power generation and the largest efficient range were achieved in the design with varying thickness from the hub with a 0.15 thickness ratio reaching to the ratio of 0.2 at the tip. The operational span of this design with the IFS profile was 24.1% greater and the maximum torque generation was 71% higher than the case with the NACA profile. Therefore, the use of the IFS profile is suggested for further study and practical trials.
Computational Micro-Macro Analysis of Impact on Strain-Hardening Cementitious Composites (SHCC) Including Microscopic Inertia
Erik Tamsen, Iurie Curosu, Viktor Mechtcherine, Daniel Balzani
Subject: Engineering, Civil Engineering Keywords: Computational Homogenization; Impact; Microscopic Inertia; SHCC; ECC; Fiber Pullout; Rate Effect
This paper presents a numerical two-scale framework for the simulation of fiber reinforced concrete under impact loading. The numerical homogenization framework considers the full balance of linear momentum at the microscale. This allows for the study of microscopic inertia effects affecting the macroscale. After describing the ideas of the dynamic framework and the material models applied at the microscale, the experimental behavior of the fiber and the fiber-matrix bond under varying loading rates are discussed. To capture the most important features, a simplified matrix cracking and a strain rate sensitive fiber pullout model are utilized at the microscale. A split Hopkinson bar tension test is used as an example to present the capabilities of the framework to analyze different sources of dynamic behavior measured at the macroscale. The induced loading wave is studied and the influence of structural inertia on the measured signals within the simulation are verified. Further parameter studies allow the analysis of the macroscopic response resulting from the rate dependent fiber pullout as well as the direct study of the microscale inertia. Even though the material models and the microscale discretization used within this study are still simplified, the value of the numerical two-scale framework to study material behavior under impact loading is shown.
Antibodies Engineering by Computational Approach
Mujahed I. Mustafa
Subject: Life Sciences, Biotechnology Keywords: Antibodies engineering; Computational approach; Novel drugs; Synthetic immunology; Next generation antibodies
In the pre era of synthetic antibodies, pharmaceutical companies depend on finding novel drugs from medicinal plants and other traditional resources; while in present, technological advances in biology, computer and robotics give the researchers the ability to rewrite and edit DNA in order to synthesize very large sets of drug candidates; these novel and improved candidates serves the basis for creating another library of drug candidates and so on until we find the right biomolecule for the disease of interest. all these technologies combined together to synthesize therapeutic antibodies for many types of cancer, autoimmune diseases, and infectious diseases, that can address diseases much more readily to very rapidly get therapeutics into patients so that we can potentially have an impact on disease. The antibodies mechanism is recognize and bind to disease cells and pinpoint the immune system to attack those cells effectively. Now a days, they dependent on computational approach to guide and accelerate the process of antibodies engineering by combination of selection system and use of high-throughput data acquisition and analysis to build and construct populations of next generation antibodies that are thermo-stable, non-immunogenic as possible, and to be administered to many humans as possible. In this review, I will discuss the latest in silico methods for antibodies engineering.
Identification of Hand Movements from Electromyographic Signals Using Machine Learning
Alejandro Mora Rubio, Jesus Alejandro Alzate Grisales, Reinel Tabares-Soto, Simón Orozco-Arias, Cristian Felipe Jiménez Varón, Jorge Iván Padilla Buriticá
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: EMG; Machine Learning; Deep Learning; Computational models; Hand and wrist gestures
Electromyographic (EMG) signals provide information about a person's muscle activity. For hand movements, in particular, the execution of each gesture involves the activation of different combinations of the forearm muscles, which generate distinct electrical patterns. Conversely, the analysis of these muscle activation patterns, represented by EMG signals, allows recognizing which gesture is being performed. In this study, we aimed to implement an automatic identification system of hand or wrist gestures based on supervised Machine Learning (ML) techniques. We trained different computational models and determined which of these showed the best capacity to identify six hand or wrist gestures and generalize between different subjects. We used an open access database containing recordings of EMG signals from 36 subjects. Among the results obtained, we highlight the performance of the Random Forest model, with an accuracy of 95.39%, and the performance of a convolutional neural network with an accuracy of 94.77%.
Virtual Asphalt to Predict Roads' Air Voids and Hydraulic Conductivity
Mustafa Aboufoul, Andrea Chiarelli, Isaac Triguero, Alvaro Garcia
Subject: Engineering, Civil Engineering Keywords: computational design; optimisation; porosity; pore networks; X-ray CT; 3D printing
This paper investigates the effects of air void topology on hydraulic conductivity in asphalt mixtures with porosity in the range 14%-31%. Virtual asphalt pore networks were generated using the Intersected Stacked Air voids (ISA) method, with its parameters being automatically adjusted by the means of a differential evolution optimisation algorithm, and then 3D printed using transparent resin. Permeability tests were conducted on the resin samples to understand the effects of pore topology on hydraulic conductivity. Moreover, the pore networks generated virtually were compared to real asphalt pore networks captured via X-ray Computed Tomography (CT) scans. The optimised ISA method was able to generate realistic 3D pore networks corresponding to those seen in asphalt mixtures in term of visual, topological, statistical and air void shape properties. It was found that, in the range of porous asphalt materials investigated in this research, the high dispersion in hydraulic conductivity at constant air void content is a function of the average air void diameter. Finally, the relationship between average void diameter and the maximum aggregate size and gradation in porous asphalt materials was investigated.
Daytime Lighting Assessment in Textile Factories Using Connected Windows in Slovakia: A Case Study
Dušan Katunský, Erika Dolníková, Bystrík Dolník
Subject: Engineering, Civil Engineering Keywords: sustainable architecture; industrial building; indoor environment; lighting conditions; computational simulation; luminance
This paper highlights the problems associated with daylight use in industrial facilities. In a case study of a multi-story textile factory, we report how to evaluate daylight (as part of integral light) in the production halls marked F and G. This study follows the article in the Buildings journal, where Hall E was evaluated (unilateral daylight). These two additional halls have large areas that are 54 × 54 meters and are more than 5 meters high. The daylight is only on the side through the attached windows in envelope structures in the vertical position. In this paper, we want to present two case studies of these two production halls in a textile factory in the eastern part of Slovakia. These are halls that are illuminated by daylight from two sides through exterior peripheral walls that are against or next to each other. The results of the case studies can be applied in similar production halls illuminated by a 'double-sided' (bilateral) daylight system. This means that they are illuminated by natural illumination through windows on two sides in a vertical position. Such a situation is typical for multi-storied industrial buildings. The proposed approximate calculation method for the daylight factor can be used to predict the daylight in similar spaces in other similar buildings.
Computational Fluid Dynamics (CFD) Mesh Independency Study of A Straight Blade Horizontal Axis Tidal Turbine
Siddharth Suhas Kulkarni, Craig Chapman, Hanifa Shah
Subject: Engineering, Energy & Fuel Technology Keywords: horizontal axis tidal turbine; Computational Fluid Dynamics; mesh independency; NACA 0018
This paper numerically investigates a 3D mesh independency study of a straight blade horizontal axis tidal turbine modelled using Computational Fluid Dynamics (CFD). The solution was produced by employing two turbulence models, the standard k-ε model and Shear Stress Transport (SST) in ANSYS CFX. Three parameters were investigated: mesh resolution, turbulence model, and power coefficient in the initial CFD, analysis. It was found that the mesh resolution and the turbulence model affect the power coefficient results. The power coefficients obtained from the standard k-ε model are 15% to 20% lower than the accuracy of the SST model. It can also be demonstrated that the torque coefficient increases with the increasing Tip Speed Ratio (TSR), but drops drastically after TSR = 5 and k-ε model failing to capture the non-linearity in the torque coefficient with the increasing TSR.
Automatic Generation of Literary Sentences in French
Luis-Gil Moreno-Jiménez, Juan-Manuel Torres-Moreno, Roseli Wedemann
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: computational creativity; literary sentences; automatic text generation; shallow parsing and deep learning.
In this paper, we introduce a model for the automatic generation of literary sentences in French. It is based on algorithms that we have previously used to generate sentences in Spanish and Portuguese, and on a new corpus consisting of literary texts in French that we have constructed, called [FR]. Our automatic text generation algorithm combines language models, shallow parsing and deep learning, artificial neural networks. We have also proposed and implemented a manual evaluation protocol to assess the quality of the artificial sentences generated by our algorithm, by testing if they fulfill four simple criteria. We have obtained encouraging results from the evaluators for most of the desired features of our artificially generated sentences.
Savonius Wind Turbine Performance Comparison with One and Two Porous Deflectors: A CFD Study
Md Mahmud Hasan Saikot, Mahfuzur Rahman, Md Anwar Hosen, Wasif Ajwad, Md Faiyaz Jamil, Md. Quamrul Islam
Subject: Engineering, Energy & Fuel Technology Keywords: Savonius wind turbine; Porous deflector; Porosity; Computational Fluid Dynamics (CFD); Self-starting
The present study explores the effect of using two porous deflectors on the performance of the Savonius wind turbine compared to only one porous deflector. The numerical simulation is performed to solve the unsteady Navier-Stokes equations using the SST k-
Functional Data Analysis for Imaging Mean Function Estimation: Computing Times and Parameter Selection
Juan Arias López, Carmen Cadarso Suárez, Pablo Aguiar Fernández
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Functional Data Analysis; Image Processing; Brain Imaging; Neuroimaging; Computational Neuroscience; Data Science
Functional Data Analysis (FDA) is a relatively new field of statistics dealing with data expressed in the form of functions. FDA methodologies can be easily extended to the study of imaging data, an application proposed in Wang et al. (2020), where the authors settle the mathematical groundwork and properties of the proposed estimators. This methodology allows for the estimation of mean functions and simultaneous confidence corridors (SCC), also known as simultaneous confidence bands, for imaging data and for the difference between two groups of images. This is especially relevant for the field of medical imaging, as one of the most extended research setups consists on the comparison between two groups of images, a pathological set against a control set. FDA applied to medical imaging presents at least two advantages compared to previous methodologies: it avoids loss of information in complex data structures and avoids the multiple comparison problem arising from traditional pixel-to-pixel comparisons. Nonetheless, computing times for this technique have only been explored in reduced and simulated setups (Arias-López et al., 2021). In the present article, we apply this procedure to a practical case with data extracted from open neuroimaging databases and then measure computing times for the construction of Delaunay triangulations, and for the computation of mean function and SCC for one-group and two-group approaches. The results suggest that previous researcher has been too conservative in its parameter selection and that computing times for this methodology are reasonable, confirming that this method should be further studied and applied to the field of medical imaging.
Preprint BRIEF REPORT | doi:10.20944/preprints202112.0327.v1
Predicting and Visualizing STK11 Mutation in Lung Adenocarcinoma Histopathology Slides Using Deep Learning
Runyu Hong, Wenke Liu, David Fenyö
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: keyword; histopathology; deep learning; machine learning; cancer; lung adenocarcinoma; immune; computational pathology
Studies have shown that STK11 mutation plays a critical role in affecting the lung adenocarcinoma (LUAD) tumor immune environment. By training an Inception-Resnet-v2 deep convolutional neural network model, we were able to classify STK11-mutated and wild type LUAD tumor histopathology images with a promising accuracy (per slide AUROC=0.795). Dimensional reduction of the activation maps before the output layer of the test set images revealed that fewer immune cells were accumulated around cancer cells in STK11-mutation cases. Our study demonstrated that deep convolutional network model can automatically identify STK11 mutations based on histopathology slides and confirmed that the immune cell density was the main feature used by the model to distinguish STK11-mutated cases.
A New Application in Biology Education: Development and Implementation of Arduino-Supported STEM Activities
Aslı Görgülü Arı, Gülsüm Meço
Subject: Biology, Anatomy & Morphology Keywords: 21st Century Skills/Thinking Skills; Computational Thinking; Critical Thinking; Robotics for Education
A new teaching method under the name of STEM, integrating the disciplines of Science, Technology, Engineering, and Mathematics, is now taught by teachers in their classes. Considering that the generation that grows up in the 21st-century has grown up with technology, it is thought that integrating technology into lessons helps students learn the subject. The study aims to develop five STEM activities for the human body systems lesson by integrating the coding-based Arduino into STEM education. The activities were implemented to 6th-grade students for seven weeks and the effects on students' skills of establishing a cause-effect relationship. The study method was pre-test-post-test quasi-experimental design, and the cause-effect relationship scale was used as a data collection tool. As a result of the study, a significant difference was found between the Arduino-supported STEM activities developed and the students' skills of establishing a cause-effect relationship.
Guaranted Diversity and Optimality in Cost Function Network Based Computational Protein Design Methods
Manon Ruffini, Jelena Vucinic, Simon de Givry, George Katsirelos, Sophie Barbe, Thomas Schiex
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Computational Protein Design; Graphical Models; Automata; Cost Function Networks; Structural Biology; Diversity.
Proteins are the main active molecules of Life. While natural proteins play many roles, as enzymes or antibodies for example, there is a need to go beyond the repertoire of natural proteins to produce engineered proteins that precisely meet application requirements, in terms of function, stability, activity or other protein capacities. Computational Protein Design aims at designing new proteins from first principles, using full-atom molecular models. However, the size and complexity of proteins require approximations to make them amenable to energetic optimization queries. These approximations make the design process less reliable and a provable optimal solution may fail. In practice, expensive libraries of solutions are therefore generated and tested. In this paper, we explore the idea of generating libraries of provably diverse low energy solutions by extending Cost Function Network algorithms with dedicated automaton-based diversity constraints on a large set of realistic full protein redesign problems. We observe that it is possible to generate provably diverse libraries in reasonable time and that the produced libraries do enhance the Native Sequence Recovery, a traditional measure of design methods reliability.
Effects of Mesh Generation on Modeling Aluminum Anode Baking Furnaces
Jose Libreros, Domenico Lahaye, Maria Trujillo
Subject: Engineering, Automotive Engineering Keywords: Anode Baking Furnaces; κ −turbulence flow model; mesh generation; Computational Fluid Dynamics
Turbulent flow is the first and fundamental physical phenomena to evaluate when optimising cost and reducing emissions from an Anode Baking Furnace (ABF). Gas flow patterns, velocity field, pressure drop, shear stress, and turbulent dissipation rate variables are the main operational parameters to be optimised, considering a specific geometry. Computational Fluid Dynamics (CFD) allows simulating physical phenomena using numerical methods with computer resources. In particular, the finite element method is one of the most used methods to solve the flow equations. This method requires a discretisation of the geometry of the ABF, called mesh. Hence, mesh is the main input to the finite element method. A suitable mesh for applying a discretisation method determines whether the problem can be simulated or not. Generating an appropriate mesh remains a challenge to perform accurate simulations. In this work, a comparison between meshes generated using two mesh generation tools is presented. Results of different study cases are included.
Chassis Influence on the Exposure Assessment of a Compact EV during WPT Recharging Operations
Valerio De Santis, Luca Giaccone, Fabio Freschi
Subject: Engineering, Electrical & Electronic Engineering Keywords: Computational electromagnetics; Electric Vehicle; EMF safety; low frequency dosimetry; Wireless Power Transfer
In this study, the external magnetic field emitted by a wireless power transfer (WPT) system and the internal electric field induced into human body models during recharging operations of a compact electric vehicle (EV) are evaluated. To this aim an ad-hoc formulation for the source modeling is coupled with a commercial software that performs numerical dosimetry. Specifically, two realistic anatomical models both in a driving position and in a standing posture are considered, and the chassis of the EV is modeled either as a currently employed aluminum alloys and as a futuristic carbon fiber composite panel. Aligned and misaligned coil configurations of the WPT system are considered as well. The analysis of the obtained results shows that the ICNIRP reference levels are exceeded in the driving position, especially for the carbon fiber chassis, whereas no exceedance is observed in terms of basic restrictions, at least for the considered scenarios.
Feasibility Evaluation of CFD Approach for Inhalation Exposure Assessment: Case Study for Biocide Spray
Donggeun Park, Jong-Hyeon Lee
Subject: Earth Sciences, Environmental Sciences Keywords: inhalation exposure assessment; computational fluid dynamics (CFD); biocides; spray model; unsteady RANS
Consumer products contain the chemical substances that threaten human health. The modeling methods and experimental methods have been used to estimate the inhalation exposure concentration by the consumer products. The model and measurement methods have the spatial property problem and time/cost consuming problem, respectively. For solving the problems due to the conventional methodology, this study performed the feasibility of applying CFD for evaluation of inhalation exposure by comparing the experiment results and the zero-dimensional results with CFD results. To calculate the aerosol concentration, the CFD was performed by combined the 3D Reynolds averaged Navier Stoke's equation and discrete phased model using ANSYS FLUENT. As a result of comparing the three methodologies performed under the same simulation/experimental conditions, we found the zero-dimensional spray model shows approximately 5 times underestimated inhalation exposure concentration when compared with the CFD results and measurement results in near field. Also, the results of the measured concentration of aerosols at five locations and the CFD results at the same location were compared to show the possibility of evaluating inhalation exposure at various locations using CFD instead of experimental method. The CFD results according to measurement positions can predict rationally the measurement results with low error. In conclusion, in the field of exposure science, a guideline for exposure evaluation using CFD was found that complements the shortcomings of the conventional methodology, the zero-dimensional spray model and measurement method.
Design of a Cyclone Separator Critical Diameter Model based on a Machine Learning and CFD
Donggeun Park, Jeung Sang Go
Subject: Engineering, Mechanical Engineering Keywords: Cyclone separator; Computational fluid dynamics (CFD); Machine learning; Unsteady RANS; Critical Diameter
This paper deals with the characteristics of the cyclone separator from the Lagrangian perspective to design important dependent variables, develops a neural network model for predicting the separation performance parameter, and compares the predictive performance between the traditional surrogate model and the neural network model. In order to design the important parameters of the cyclone separator based on the particle separation theory, the force acting until the particles are separated was calculated using the Lagrangian-based CFD methodology. As a result, it was proved that the centrifugal force and drag acting on the critical diameter having a separation efficiency of 50% were similar, and the particle separation phenomenon in the cyclone occurred from the critical diameter, and it was set as an important dependent variable. For developing a critical diameter prediction model based on machine learning and multiple regression methods, Unsteady-RANS analyzes according to shape dimensions were performed. The input design variables for predicting the critical diameter were selected as four geometry parameters that affect the turbulent flow inside the cyclone. As a result of comparing the model prediction performances, the ML model showed the 32.5 % of improvement rate of R2 compared to the traditional MLR considering the nonlinear relationship between the cyclone design variable and the critical diameter. The proposed techniques have proven to be fast and practical tools for cyclone design.
Metabolic Reprogramming of Fibroblasts as Therapeutic Target in Rheumatoid Arthritis and Cancer: Deciphering Key Mechanisms using Computational Systems Biology Approaches
Sahar Aghakhani, Naouel Zerrouk, Anna Niarakis
Subject: Biology, Other Keywords: Fibroblasts; Rheumatoid Arthritis; Cancer; Metabolic Reprogramming; Glycolytic Switch; Systems Biology; Computational Modelling
Fibroblasts, the most abundant cells in the connective tissue, are key modulators of the extracellular matrix (ECM) composition. These spindle-shaped cells are capable of synthesizing various extracellular matrix proteins and collagen. They also provide the structural framework (stroma) for tissues and play a pivotal role in the wound healing process. While they are maintainers of the ECM turnover and regulate several physiological processes, they can also undergo transformations responding to certain stimuli and display aggressive phenotypes that contribute to disease pathophysiology. In this review, we focus on the metabolic pathways of glucose and highlight metabolic reprogramming as a critical event that contributes to the transition of fibroblasts from quiescent to activated and aggressive cells. We also cover the emerging evidence that allows us to draw parallels between fibroblasts in autoimmune disorders and more specifically in rheumatoid arthritis and cancer. We link the metabolic changes of fibroblasts to the toxic environment created by the disease condition and discuss how targeting of metabolic reprogramming could be employed in the treatment of such diseases. Lastly, we discuss Systems Biology approaches, and more specifically, computational modelling, as a means to elucidate pathogenetic mechanisms and accelerate the identification of novel therapeutic targets.
Multi-Winner Election Control via Social Influence: Hardness and Algorithms for Restricted Cases
Mohammad Abouei Mehrizi, Gianlorenzo D'Angelo
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Computational Social Choice; Election Control; Multi-winner Election; Social Influence; Influence Maximization
Nowadays, many political campaigns are using social influence (SI) in order to convince voters to support/oppose a specific candidate/party. In election control via SI problem, an attacker tries to find a set of limited influencers to start disseminating a political message in a social network of voters. A voter will change his opinion when he receives and accepts the message. In constructive case, the goal is to maximize the number of votes/winners of a target candidate/party, while in destructive case, the attacker tries to minimize them. Recent works considered the problem in different models and presented some hardness and approximation results. In this work, we consider multi-winner election control through SI on different graph structures and diffusion models, and our goal is to maximize/minimize the number of winners in our target party. We show that the problem is hard to approximate when voters' connections form a graph, and the diffusion model is the linear threshold model. We also prove the same result considering an arborescence under independent cascade model. Moreover, we present a dynamic programming algorithm for the cases that the voting system is a variation of straight-party voting, and voters form a tree.
The Reuse of Public Datasets in the Life Sciences: Potential Risks and Rewards
Katharina Sielemann, Alenka Hafner, Boas Pucker
Subject: Life Sciences, Other Keywords: data science; reuse; sequencing data; genomics; bioinformatics; databases; computational biology; open science
The 'big data revolution' has enabled novel types of analyses in the life sciences, facilitated by public sharing and reuse of datasets. Here, we review the prodigious potential of reusing publicly available datasets and the challenges, limitations and risks associated with it. Possible solutions to issues and research integrity considerations are also discussed. Due to the prominence, abundance and wide distribution of sequencing data, we focus on the reuse of publicly available sequence datasets. We define 'successful reuse' as the use of previously published data to enable novel scientific findings and use selected examples of such reuse from different disciplines to illustrate the enormous potential of the practice, while acknowledging their respective limitations and risks. A checklist to determine the reuse value and potential of a particular dataset is also provided. The open discussion of data reuse and the establishment of the practice as a norm has the potential to benefit all stakeholders in the life sciences.
Optimization of Extended Surfaces on Tubes of the Radiant Section of Fired Heaters
Ivan Silva, Marcelo Colaco
Subject: Engineering, Mechanical Engineering Keywords: optimization; particle swarm; response surface; extended surface; fired heaters; computational fluid dynamics
This paper proposes the use of non-uniform extended surfaces installed externally to the tubes of the radiation section of fired heaters, in order to obtain a better heat flux distribution to the coils. To this end, the heat transfer mechanisms present in such equipment were studied through computational fluid dynamics (CFD), using simplified geometries that represent typical sizes of fired heaters. Also, a simplified model for the combustion was considered. Although this model oversimplifies the physics of the problem, it was able to give satisfactory results for the parameters being optimized, considering the main objective of this paper, that is to minimize the non-uniformity of heat flux in the tubes of the radiant section of fired heaters. It was possible to obtain optimized geometric parameters for different types of extended surfaces evaluated, coupling the results of these models with the Particle Swarm optimization method through the use of a response surface technique,. The results indicate a significant improvement in the uniformity of the heat flux distribution to the tubes through the use of the proposed extended surfaces. Thus, this solution reveals to be an interesting alternative to reduce the risks of fluid degradation and coking formation. Future studies must investigate the non-uniformity of the heat flux due to the presence of the flame and consider the interaction between the reactive flow and the participating medium. Nevertheless, this paper presents some results that justify the optimization of such extended surfaces taking into consideration thermal radiation.
The Potential of Computational Modeling to Predict Disease Course and Treatment Response in Patients with Relapsing Multiple Sclerosis
Francesco Papparlardo, Giulia Russo, Marzio Pennisi, Giuseppe Alessandro Parasiliti Palumbo, Giuseppe Sgroi, Santo Motta, Davide Maimone
Subject: Life Sciences, Immunology Keywords: computational modeling; agent based modeling; systems biology; multiple sclerosis; immunity; degenerative disease.
As of today, 20 disease modifying drugs (DMD) have been approved for the treatment of relapsing multiple sclerosis (MS) and, based on their efficacy, they can be grouped into moderate-efficacy DMDs and high-efficacy DMDs. The choice of the drug mostly relies on the judgement and experience of neurologists and the evaluation of therapeutic response can only be obtained by monitoring clinical and magnetic resonance imaging (MRI) status during follow up. In an era where therapies are focused on personalization, the aim of this study is to develop a modeling infrastructure to predict the evolution of relapsing MS and the response to treatments. We built a computational modeling infrastructure named UISS (Universal Immune System Simulator) able to simulate the main features and dynamics of the immune system activities. We extended UISS to simulate all the underlying MS pathogenesis and its interaction with the host immune system. This simulator is a multi-scale, multi-organ, agent based simulator with an attached module capable of simulating the dynamics of specific biological pathways at the molecular level. We simulated six MS patients with different relapsing-remitting courses. These patients were characterized on the basis of their age, sex, presence of oligoclonal bands, therapy and MRI lesion load at onset. The simulator framework is made freely available and can be used following the links provided in the availability section. Even though the model can be further personalized employing immunological parameters and genetic information, based on the available data we generated a few simulation scenarios for each patient, including those who matched the real clinical and MRI history. Moreover, for two patients, the simulator anticipated the timing of subsequent relapses, which really occurred, suggesting that UISS may have the potential to assist MS specialists in predicting the course of the disease and the response to treatment.
Prediction of Flow Characteristics in the Bubble Column Reactor by the Artificial Pheromone-Based Communication of Biological Ants
Shahab Shamshirband, Meisam Babanezhad, Amir Mosavi, Narjes Nabipour, Eva Hajnal, Laszlo Nadai, Kwok-wing Chau
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: bubble column reactor; ant colony optimization algorithm (ACO); flow pattern; machine learning; computational fluid dynamics (CFD); big data
In order to perceive the behavior presented by the multiphase chemical reactors, the ant colony optimization algorithm was combined with computational fluid dynamics (CFD) data. This intelligent algorithm creates a probabilistic technique for computing flow and it can predict various levels of three-dimensional bubble column reactor (BCR). This artificial ant algorithm is mimicking real ant behavior. This method can anticipate the flow characteristics in the reactor using almost 30 % of the whole data in the domain. Following discovering the suitable parameters, the method is used for predicting the points not being simulated with CFD, which represent mesh refinement of Ant colony method. In addition, it is possible to anticipate the bubble-column reactors in the absence of numerical results or training of exact values of evaluated data. The major benefits include reduced computational costs and time savings. The results show a great agreement between ant colony prediction and CFD outputs in different sections of the BCR. The combination of ant colony system and neural network framework can provide the smart structure to estimate biological and nature physics base phenomena. The ant colony optimization algorithm (ACO) framework based on ant behavior can solve all local mathematical answers throughout 3D bubble column reactor. The integration of all local answers can provide the overall solution in the reactor for different characteristics. This new overview of modelling can illustrate new sight into biological behavior in nature.
Reproduction of Local Strong Wind Area Induced in the Downstream of Small‐scale Terrain by Computational Fluid Dynamic (CFD) Approach
Takanori Uchida, Keiji Araki
Subject: Engineering, Civil Engineering Keywords: Terrain‐induced severe wind event; Stratified flows; Computational Fluid Dynamics (CFD); LES
In this research, the computational fluid dynamic (CFD) approach that has been used in wind power generation field was applied for the solution of the problems of local strong wind areas in railway fields, and the mechanism of wind generation was discussed. At the same time, the affectivity of the application of computational fluid dynamic approach to railway field was discussed. The problem of local wind that occurs on the railway line in winter was taken up in this research. A computational simulation for the prediction of wind conditions by LES was implemented and it was clarified that the local strong wind area is mainly caused by separated flows originating from the small‐scale terrain positioned at its upstream (at approximately 180.0 m above sea level). Meanwhile, the effects of the size of calculation area and spatial grid resolution on the result of calculation and the effect of atmospheric stability were also discussed. It was clarified that when the air flow characteristic of the separated flow originating from the small‐scale terrain (at altitude of approximately 180.0 m) targeted in this research is reproduced at high accuracy by computational simulation of wind conditions, approximately 10.0 m of spatial resolution of computational grid in horizontal direction is required. As a result of the computational simulation of wind conditions of stably stratified flow (Fr = 1.0), lee waves were excited at the downstream of the terrain over time. As a result, the reverse‐flow region lying behind the terrain that had been observed at a neutral time was inhibited. Consequently, local strong wind area was generated at the downstream of the terrain and the strong wind area passing through the observation mast was observed. By investigating the speed increasing rate of local strong wind area induced at the time of stable stratification, it was found that the wind was approximately 1.2 times stronger than what was generated at a neutral time.
NLP Formulation for Polygon Optimization Problems
Saeed Asaeedi, farzad didehvar, Ali Mohades
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: α-MAP; α-MPP; α-MNP; polygon optimization; nonlinear programming; computational geometry
In this paper, we generalize the problems of finding simple polygons with the minimum area, maximum perimeter and maximum number of vertices so that they contain a given set of points and their angles are bounded by $\alpha+\pi$ where $\alpha$ ($0\leq\alpha\leq \pi$) is a parameter. We also consider the maximum angle of each possible simple polygon crossing a given set of points, and derive an upper bound for the minimum of these angles. The correspondence between the problems of finding simple polygons with the minimum area and maximum number of vertices is investigated from a theoretical perspective. We formulate the three generalized problems as nonlinear programming models, and then present a Genetic Algorithm to solve them. Finally, the computed solutions are evaluated on several datasets and the results are compared with those from the optimal approach.
Computational Fluid Dynamic Modelling and Optimisation of Wastewater Treatment Plant Bioreactor Mixer
Andrew Elshaw, N. M. S. Hassan, M. M. K. Khan
Subject: Engineering, Mechanical Engineering Keywords: wastewater treatment; computational fluid dynamics; hydrodynamic performance; specific power dissipation; anoxic zone
This study aims to determine the optimal configuration (position and operation duration) for wall mounted mechanical mixers based on the comparison of three-dimensional computational fluid dynamics (CFD) modelling results and physical data collected from the treatment plant. A three-dimensional model of anoxic zone 1, 2 and 3 of Northern Wastewater Treatment Plant (WWTP) located at Cairns Regional Council, Cairns, Queensland, Australia was developed and validated. The model was used to simulate the flow pattern of the WWTP and the simulation results are in good agreement with the physical data varying between 0% to 15% in key locations. The anoxic zones were subject to velocities less than the desired 0.3 metres per second however results for suspended solids concentration indicate that good mixing is being achieved. Results for suspended solids concentrations suggest that the anoxic zones are towards the upper limits recommended by literature for specific power dissipation. The duration for operation of mechanical mixers was investigated and identified that the duration could be reduced from 900 seconds down to 150 seconds. Alternative mixer positioning was also investigated and identified positioning which would increase the average flow velocity with decreased duration (150 seconds). The study identified that Council may achieve savings of $24,000 per year through optimisation of the mechanical mixers.
Fluid Flow and Static Structural Analysis of E-Glass Fiber Reinforced Pipe Joints versus S-Glass Fiber Reinforced Pipe Joints
Sujith Bobba, Z. Leman, E.S. Zainudin, S.M. Sapuan
Subject: Materials Science, Other Keywords: computational fluid dynamics; glass fiber reinforced composites; heavy crude oil; pressure waves
Filament wound composite pipes are frequently used in the field were transmission of high pressured chemical fluids, disposal of industrial wastes, oil and natural gas transmission takes place. In oil and gas industry, the pipelines transporting heavy crude oil are subjected to variable pressure waves causing fluctuating stress levels in the pipes. Computational Fluid Dynamics Analysis was performed using Ansys 15.0 Fluent software to study the effects of these pressure waves on some specified joints in the pipes. Depending on the type of heavy crude oil being used, the flow behavior indicated a considerable degree of stress levels in certain connecting joints, causing the joints to become weak over a prolonged period of use. In this research comparison of various pipe joints was done by using different material and the output result of the stress levels of the pipe joints were checked so that the life of the pipe joints can be optimized by the change of material.
A Two-Phase Model of Air Shock Wave Induced by Rock-Fall in Closed Goaf
Fengyu Ren, Yang Liu, Jianli Cao, Rongxing He, Yuan Xu, Xi You, Yan-jun Zhou
Subject: Engineering, Civil Engineering Keywords: air shock wave; rock-fall; two-phase model; computational fluid dynamics (CFD)
In this paper, a two-phase model of air shock wave induced by rock-fall was described. The model was made up of the uniform motion phase (velocity was close to 0 m·s-1) and the acceleration movement phase. The uniform motion phase was determined by experience, meanwhile the acceleration movement phase was derived by the theoretical analysis. A series of experiments were performed to verify the two-phase model and obtained the law of the uniform motion phase. The acceleration movement phase was taking a larger portion when height of rock-fall was higher with the observations. Experimental results of different falling heights showed good agreements with theoretical analysis values. Computational fluid dynamics (CFD) numerical simulation had been carried out to study the variation velocity with different falling height. As a result of this, the two-phase model could accurately and convenient estimating the velocity of air shock wave induced by rock-fall. The two-phase model could provide a reference and basis for estimating the air shock waves' velocity and designing the protective measures.
New Insights into the State Trapping of UV-Excited Thymine
Ljiljana Stojanovic, Shuming Bai, Jayashree Nagesh, Artur F. Izmaylov, Rachel Crespo-Otero, Hans Lischka, Mario Barbatti
Subject: Chemistry, Physical Chemistry Keywords: computational theoretical chemistry; photochemistry; nonadiabatic dynamics; ultrafast processes; surface hopping; nucleobases; thymine
After UV excitation, gas phase thymine returns to ground state in 5 to 7 ps, showing multiple time constants. There is no consensus on the assignment of these processes, with a dispute between models claiming that thymine is trapped either in the first (S1) or in the second (S2) excited states. In the present study, nonadiabatic dynamics simulation of thymine is performed on the basis of ADC(2) surfaces, to understand the role of dynamic electron correlation on the deactivation pathways. The results show that trapping in S2 is strongly reduced in comparison to previous simulations considering only non-dynamic electron correlation on CASSCF surfaces. The reason for the difference is traced back to the energetic cost for formation of a CO p bond in S2.
When to Use Large Language Model: Upper Bound Analysis of BM25 Algorithms in Reading Comprehension Task
Tingzhen Liu, Qianqian Xiong, Shengxi Zhang
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Large Language Model; Natural Language Processing; Reading Comprehension; Computational linguistics; Information Retrieval; BM25
Large language model (LLM) is a representation of a major advancement in AI, and has been used in multiple natural language processing tasks. Nevertheless, in different business scenarios, LLM requires fine-tuning by engineers to achieve satisfactory performance, and the cost of achieving target performance and fine-tuning may not match. Based on the Baidu STI dataset, we study the upper bound of the performance that classical information retrieval methods can achieve under a specific business, and compare it with the cost and performance of the participating team based on LLM. This paper gives an insight into the potential of classical computational linguistics algorithms, and which can help decision-makers make reasonable choices for LLM and low-cost methods in business R&D.
Research in Computational Expressive Music Performance and Popular Music Production: A Potential Field of Application?
Pierluigi Bontempi, Filippo Carnovalini, Antonio Rodà, Sergio Canazza
Subject: Engineering, General Engineering Keywords: computational music expressive performance; popular music; music production; Digital Audio Workstation; virtual instruments
In music, the role of the interpreter is to play her/his part manipulating the performance parameters in order to offer a sonic rendition of the piece capable of conveying specific expressive intentions. Since the 1980s there has been a growing interest in computational expressive music performance (EMP). This research field has two fundamental objectives: the understanding of the phenomenon of human musical interpretation and the automatic generation of expressive performances. Rule based, statistical, machine and deep learning approaches have been proposed, most of them devoted to the classical repertoire, in particular to piano pieces. On the contrary, we present an introduction to the role of expressive performance within popular music and to the contemporary ecology of pop music production, based on the use of Digital Audio Workstations (DAWs) and virtual instruments. After an analysis of the tools related to expressiveness commonly available to modern producers we propose a detailed survey of research into the computational EMP field, highlighting the potential and limits of what is present in literature with respect to the context of popular music, which by its nature cannot be completely superimposed on the classical one. In the concluding discussion we suggest possible lines of future research in the field of computational expressiveness applied to pop music. | CommonCrawl |
An empirical inquiry into the determinants of public education spending in Europe
Catalin Dragomirescu-Gaina1
IZA Journal of European Labor Studies volume 4, Article number: 25 (2015) Cite this article
This paper makes two important contributions. Firstly, it uncovers some of the main economic determinants driving the dynamics of public education spending in Europe. Drawing mainly on the insights provided by Baumol's cost theory, the baseline specification uses unit labour costs and real GDP per capita as its main determinants. Some important institutional rigidities are also highlighted. The results confirm the fast relative increase in education costs, exposing the long-term affordability challenge of public education investment. Secondly, by including a policy objective and translating the empirical specification into a decision rule, the paper touches on some less mentioned determinants, such as policy commitment. Unfortunately, there is only weak overall evidence that public education spending would increase in response to a lack of progress in the policy objective. Finally, policy implications are discussed.
JEL: D78, H41, H52, I22
The sustained rise in the relative cost of certain public services continues to be a major source of concern for societies and governments alike. The problem has been first observed and discussed in Baumol and Bowen (1966) and Baumol (1967), with sectors such as education and health-care being given as prominent examples. Baumol (2012) further explains that the pain experienced by a society seems to be caused mainly by the relative dynamics of these costs rather than their levels (see also Wolff et al., 2014). In fact, this problem has been so pervasive across both the developed and the developing world, and over the last couple of decades, that it has been commonly labeled as the (Baumol's) 'cost disease'. Obviously, such a persistent and widespread phenomenon must have roots that go deeper than country-specific characteristics and institutional arrangements.
Sensitivity check of model coefficients when one country is removed from the sample
I start from these considerations when empirically analysing a panel dataset that spans over the 2000–2012 time period and refers to current European Union (EU) member states. In this data environment, I intend to highlight some common determinants driving the dynamics of public education spending and to draw insights with respect to their policy importance. I also focus on spending dynamics rather than levels since this allows me to address some large heterogeneity concerns in my sample. I construct an empirical specification building on theoretical insights borrowed mainly from Baumol (1967) 'cost model', but also from Bowen (1980) 'revenue theory of costs'. Using just aggregate measures of unit labour costs and income per capita, I am able to capture the sustained rise in education costs (and therefore spending), despite the inertia observed in classroom 'production technology' (measured by student-teacher ratios). Relevant studies employing similar approaches to model education spending can be found in Fernandez and Rogerson (2001), Gundlach et al. (2001), Archibald and Feldman (2008), Wolff et al. (2014), Chen and Moul (2014). In general, their findings largely confirm the theoretical underpinnings discussed above. In fact, the Baumol's cost theory in particular has been an excellent workhorse for empirical investigations in many closely-related areas, such as health-care, general services, and other labour-intensive activities.
The paper also touches on some less mentioned determinants of education spending that bear more with political economy considerations, such as policy commitment. Here, I discuss commitment with a focus on public education spending (at primary, secondary and tertiary levels), although extensions to other policy areas (e.g., employment and social security) remain possible within the same methodological approach. The understanding is that, beyond theoretical determinants and inherent institutional rigidities, there will always be some degree of policy discretion (as opposed to time-consistent policy rules that define commitment) that would amend funding allocations due to some specific considerations. As a prerequisite to evaluating policy commitment, I expand my empirical specification to include a policy-relevant education objective and attempt to establish a link between the progress registered in this objective and the dynamics of spending (which I take to be the policy instrument). Lisbon strategy (or Europe 2010) together with its current, more detailed, Europe 2020 version were designed to promote education attainment and social cohesion, raise employment and foster innovation along with other long-term policy objectives.Footnote 1 Hence, I draw on these two policy agendas in search of some well-defined, consistent indicators to serve as policy objectives for my analysis. With respect to education, there is only one clear and measureable policy objective mentioned in both strategies: reducing the share of early school leavers (henceforth ESL), i.e., 18–24 year-olds with less than upper secondary educational attainment who are no longer in education or training —according to the definition of the European Commission.Footnote 2 In this context, I investigate whether EU governments have shown determination in their pursuit for better education goals, i.e., lower ESL shares; for a positive characterization of policy commitment, I expect public education spending to increase whenever there is a lack of or insufficient progress with respect to the ESL policy objective.Footnote 3
The paper makes at least two important contributions. Firstly, it empirically identifies some of the main common determinants driving the dynamics of public education spending in Europe, building on some well-established economic theories. The paper provides clear evidence that the annual growth rate in public education spending (especially at primary and secondary levels) has considerably exceeded the annual growth rate in unit labour costs—a preferred measure for portraying the Baumol 'cost disease' and an indicator of general price trends. Unfortunately, this finding also exposes the long-term affordability challenge of public education investment. A system of seemingly unrelated regression equations is proposed to account for any possible substitutions or complementarities between different education levels. Moreover, by broadening the perspective, I show that the dynamics of total government spending per capita can also be easily described by relying on the same major determinants as in the case of education spending per student (a recent empirical review on public spending determinants can be found in Shelton, 2007). In addition, the paper highlights some dimensions where institutional rigidities are higher and can, therefore, generate highly persistent dynamics in public education spending (e.g., student-teacher ratio, spending share of teachers' wages).
Secondly, the paper attempts to empirically evaluate the link between education spending and education attainment, where the latter is defined in terms of early school leavers—a highly relevant policy objective according to the strategies promoted by the EU. In establishing such a link between the two elements, the analysis touches on the issue of policy commitment to education. Commitment here implies a very clear sequencing of the decision-making process: spending is adjusted or decided with respect to previous/past realized performances in the policy objective. Unfortunately, I am not able to find sufficiently strong evidence that policy commitment has been a major determinant of education spending across EU states, although this finding is clearly dependent on the specific approach adopted here.
This analysis also has implications for other closely-related strands of research. On the one hand, education is known to foster technical progress and productivity growth that, in turn, provide not only resources to fund more education but also the right incentives to inspire better education choices for the young generations.Footnote 4 The possibility of such a virtuous circle highlights the importance of education investment due to its favourable consequences on several socio-economic dimensions, including better career prospects, faster transition from school to work, and higher social mobility. On the other hand, education usually represents a small allocation in total government spending, much smaller than other, more pressing, policy objectives such as employment and social security. In fact, short-term political considerations might dominate the public budgeting process today, especially given the high and persistent unemployment levels in Europe. However, if education costs were to continue rising faster than general prices, the governments would be forced into a difficult trade-off between the long-term affordability and provision of public education and the short-term political considerations arising from more pressing objectives.Footnote 5 In the end, although the cost disease 'turns out to affect only the way we divide the money we spend' (see Wolff et al., 2014, p. 19), this might not be too comforting for governments that need to deliver on seemingly conflicting policy objectives.
The paper is organized as following. Section 2 discusses the theoretical foundations of the paper. The data and the main empirical specifications of the model are presented in section 3. Section 4 discusses the empirical results and their policy implications. Finally, section 5 concludes.
As a theoretical basis for this modelling exercise, I draw mainly on two well-established economic theories. The first theoretical strand rests on the seminal work of Baumol and Bowen (1966) and Baumol (1967), who propose an 'unbalanced growth' model to explain the dynamics of an economy consisting of two sectors: a 'progressive' one, and an 'un-progressive' or 'stagnant' one. The second theoretical strand relies on Bowen (1980) and his 'revenue theory of costs' formulated in relation to higher education.
The main assumption behind the first theoretical approach is that labour productivity grows in the long-term only in the 'progressive' sector, which is usually identified with the manufacturing or, more generally, with the good-providing sector. Beyond the undisputed statistical evidence, several arguments can be put forward to support the claim that manufacturing is more likely to enjoy higher levels of productivity growth over the long-term. Labour productivity, defined as output per worker, generally grows as a result of technological progress and innovation, increased capital per worker, economies of scale, etc. Being highly exposed to international competition, the manufacturing sector is forced to innovate to retain competitiveness. A second argument is that new technologies can more easily be incorporated into physical capital and equipment,Footnote 6 and the manufacturing sector is known to be more capital-intensive than other sectors. Lastly, economies of scale are more easily observed in the case of good-producing industries that can benefit from the automatization of routine tasks. When identifying 'non-progressive' sectors, Baumol cites education and health-care as two highly labour-intensive sectors, where labour productivity does not generally grow in the long-run. The basic idea is that labour-intensive industries cannot use technology as a leverage to increase productivity as much as capital-intensive industries do. However, nominal wages in both sectors grow at the same rate over the long-term, mainly because the two sectors are competing for the same pool of workers.Footnote 7 Due to long-term nominal wage convergence between the two sectors, lagging productivity growth in the non-progressive sector will put upward pressure on the relative costs in that sector.Footnote 8 Baumol assumes that both sectors produce final goods and services (i.e., no intermediate products) according to the following production functions:
$$ \begin{array}{l}{Y}_t^{man}=a*{L}_t^{man}* \exp (rt)\\ {}{Y}_t^{edu}=b*{L}_t^{edu},\end{array} $$
where man is an index for the 'progressive' sector and edu is the index for the 'stagnant' sector, Y denotes the real output, L is the labour input and a,b are constants. Notice that the progressive sector grows at the exogenously given rate r, which is the growth rate of the technological progress.
The nominal wages in the two sectors grow at the same rate as the technological progress, i.e., W t = W * exp(rt). Consequently, a prediction of this theory is that the relative cost per unit of output goes to infinity, i.e., \( \left(\frac{W_t*{L}_t^{edu}}{Y_t^{edu}}\right)/\left(\frac{W_t*{L}_t^{man}}{Y_t^{man}}\right)=a/b* \exp (rt)\underset{t\to \infty }{\to}\infty, \) implying that the 'stagnant' sector might vanish in the long-run. In fact, depending on demand elasticity, customers might not tolerate an 'infinite' price increase, and therefore some products might even disappear from the market or retreat to luxury niches (Baumol cites expensive restaurants or famous theaters as relevant examples). However, in the case of necessities such as education or health-care, price elasticity is very low, and therefore higher costs will be passed-through into higher prices, causing an 'unbalanced-growth' dynamics between the two sectors in nominal terms.
Baumol imagines two extreme scenarios. The first scenario assumes that the relative output (or consumption) of the two sectors, i.e., \( {Y}_t^{man}/{Y}_t^{edu}, \) is to remain constant in real terms. Under the constraint that labour supply is fixed and given by \( L={L}_t={L}_t^{man}+{L}_t^{edu}, \) this additional assumption would imply that the labour share of the 'non-progressive' sector will constantly grow over time, such that \( {L}_{t=\infty}^{edu}={L}_{t=\infty }. \) Since wages tend to converge in the two sectors, this also implies that the relative expenditure share of the stagnant sector will keep rising indefinitely (in nominal terms). The second scenario is just the reverse of the previous one: assuming that relative expenditures in the two sectors stay constant in the long-term, the consequences are that the relative output will grow in the 'progressive' sector, but its employment share will remain constant. In reality, because of the uneven dynamics of productivity and wages in the two sectors, the share of expenditures allocated to the non-progressive sector will raise over time—something that has been coined in the literature as the (Baumol's) 'cost disease'.
Despite some inherent critiques, this modelling setting has been a fruitful avenue of research in the literature analysing the dynamics of the education sector (see Gundlach et al. 2001; Archibald and Feldman, 2008; Wolff et al., 2014; Chen and Moul, 2014), the health-care sector (see Hartwig, 2008; 2011), and the more general service-providing sector (see Sasaki, 2007; Nordhaus, 2008). Recently, Baumol (2012) appears to favour an alternative explanation of his theory where the 'cost disease' might be more of a 'cost utopia' if higher costs in the non-progressive sector are simply driven by higher demand (in contrast to the more supply-side determinants outlined in the original theory), which in turn is supported by the higher income generated in the progressive sector (Chen and Moul, 2014).
Probably not far from this alternative interpretation provided by Baumol (2012), the second relevant literature strand rests on Bowen (1980) and his insights with respect to drivers of education costs, in particular, higher education costs. Bowen argues that available revenues are the only constraint on how much to spend on education. In his view, schools maximize 'education excellence, prestige and influence', only facing a revenue constraint. To provide and empirical support for his theory, Bowel points at both the increase in disposable income and the increase in education costs experienced by the developed countries over the last decades (see also Kane, 1999; Fernandez and Rogerson, 2001; Archibald and Feldman, 2008). It should be noted however that this second explanation of raising education cost looks less appealing from a theoretical perspective, resting mainly on available empirical evidence at that time.
In addition, one may consider the political economy models that regard the preferences of the median voter (see Fernandez and Rogerson, 1995, 1996, 1998; Gradstein and Justman, 1997; Gradstein, 2000; Easterly, 2001; Benabou 1996; 2002). According to these models, households' income distribution play a key role in the public support for education, mainly because inequality with respect to education opportunities could be mitigated through fiscal policy means (e.g., taxes, transfers). The median voter would be willing to support the public budget through paid taxes, and as a consequence, the dynamics of public education costs over time will be a direct function of voters' disposable income. While I do not specifically attempt to estimate such models here, their insights might be useful to frame the policy discussion later, especially with respect to affordability of public education and policy trade-offs.
Empirical strategy
The main data source on education spending is Eurostat, i.e., the government finance statistics section (COFOG, based on the ESA95 standard), and covers the time period 2000–2012. This source contains only general government spending for all major public domains, including education; due to its limited coverage, it excludes other important sources of funding for education, especially private sources (e.g., funding provided by households), but also foreign sources (e.g., funding by international organizations). I also use data on education drawn from the OECD/UNESCO/Eurostat joint data collection and covering the time period 2000–2012. The data is organized according to the International Standard Classification System of Education or ISCED (on a scale from 0 to 6) developed by UNESCO in 1997. In addition, all other relevant macroeconomic indicators used in the empirical section are taken from the European Commission AMECO database (downloaded in January-February 2015) maintained by DG ECFIN. As a general remark, however, the resulting panel is slightly unbalanced due to data availability issues, with longer time-series available especially for older EU members.
In empirical studies comparing variables and indicators (in levels) across countries, there seems to be a widespread agreement to use purchasing power parities (the PPS standard). However, as discussed in the introduction, in order to better address the policy implications and concerns with respect to the long-term affordability of rising education costs, I focus on the dynamics of spending (see Baumol, 2012; Wolff et al., 2014). Therefore, all the variables employed in this paper are in (log) first-differences.Footnote 9 I provide three more (technical) reasons for such a choice: (i) avoid potential non-stationarity issues in the empirical estimation, (ii) reduce the influence of some methodological breaks in the available time-series, and (iii) mitigate the impact of time-invariant country-specific factors that dominate the education spending data in levels, reflecting cultural, historical and political differences.
Using data in first-differences also allows me to avoid any potential pitfalls with respect to a lack of empirical evidence on the causality link between education outcomes and financial resources/spending, as extensively discussed in Hanushek (2003). In his influential study, Hanushek advocates for the need to improve school and teacher's characteristics rather than to increase spending.Footnote 10 Specifically, he highlights factors that would mostly reflect country/school-specific characteristics, including autonomy over curriculum, accountability, teacher's knowledge of the subject, etc. These indicators do not have adequate time-variation to be considered relevant for my modelling purposes (e.g., PISA surveys conducted by the OECD every 3 years are not easily comparable over time). Public education spending, however, is part of a policy decision-making process and subject to public scrutiny on a much more frequent basis, in contrast to other structural policies that might address the qualitative aspects discussed above. Accordingly, the approach presented in this paper can be considered to account for these qualitative factors but in a rather uninformative way, i.e., by using data in first-differences.
OECD recommends using data expressed in national prices as there is no need to convert variables in PPS and, thus, introduce additional variability caused by relative price movements (see Ahmad et al., 2003). I will follow this advice in my empirical analysis, which is presented in the next sections.
Model specification
The specification of the model describing the dynamics of public education spending is built in three steps.
In a first step, I draw on the theoretical insights provided in the previous section and set up a 'baseline' model specification using only the main economic indicators highlighted in section 2 and denoted here by economicI. I follow a rich empirical literature taking an empirical perspective on the Baumol's theoretical model (see Gundlach et al., 2001; Hartwig, 2008, 2011). Accordingly, the annual change in the 'non-progressive' sector's expenditures is modelled as a function of changes in unit labour costs (ULC)—simply computed as nominal wages over labour productivity. While some authors have used sector-specific variables,Footnote 11 the lack of available time-series does not allow me to follow this route. Instead, I use the aggregate indicators, corresponding to the whole economy, much as in Hartwig (2008, 2011), where a similar approach is used to investigate health-care spending.
Besides ULC, I include real GDP per capita, which I take as the main economic determinant according the second theoretical strand that relies on the Bowen (1980) and his 'revenue theory of cost'. As noted by Archibald and Feldman (2008), the analytical framework formulated by Bowen is less appealing because it lacks clear theoretical foundations. However, GDP per capita is one of the most used indicators in empirical studies on education and is an excellent candidate to control for any remaining heterogeneity.Footnote 12 Moreover, the GDP per capita would also adequately control for demand-side effects, where higher income generates higher demand for education services. A stylized version of the 'baseline' model specification is given below:
$$ \Delta spendin{g}_{i,t}=f\left(\Delta economic{I}_{i,t}\right)=f\left(\Delta UL{C}_{i,t},\ \Delta GDPperca{p}_{i,t}\right) $$
where spending denotes the amount of public funding devoted to education, ∆ denotes the log-first difference of the indicator and f is a general functional form specification. The subscript t is a time index, while i is a country index, both common notations in a panel setting.
In a second step, I enrich the 'baseline' model specification to include additional controls. These controls are supposed to reflect the large heterogeneity existing in education systems across the EU in terms of how they are organized, what are the most important institutional rigidities, and other relevant dimensions. Beyond organizational aspects, education spending can be ultimately related to the number of teachers and students, classes and schools; in the short-run these figures cannot be easily changed, thus, worsening institutional rigidities. Teachers are the most important resource in the production of education and their wage bill represents the biggest single contributor to total public education spending. The relative inertia in classroom 'production technology' implies that student-teacher ratios are highly persistent too. In addition, any foreseen changes in the quantity/quality of existing infrastructure (schools and/or classes) are part of a multi-annual planning process and will be reflected in persistent capital expenditures, which represent the third most important contributor to total public education spending.Footnote 13
Moreover, my list of potential controls is severely limited by data availability on a yearly basis. In line with the discussion above, I use the following indicators: the share of teachers' wages in current education spending, the share of capital expenditure in total education spending, class size and student-teacher ratios.Footnote 14 The additional controls introduced at this step are generally denoted as structureI. I formulate my model as in (2) and label this the 'extended' model specification:
$$ \Delta\;spendin{g}_{i,t}=f\left(\Delta\;economic{I}_{i,t},\kern0.37em \Delta\;structure{I}_{i,t-1}\right) $$
The one year lag assumed for the indicators reflecting the structure of education spending captures the inherent persistency in its dynamics. It can be easily argued that the number of available teachers and number of classes and schools do not significantly change over the timeframe of 1 year (at least not in a regular way that can be captured in a typical time-series regression analysis). However, over a longer timeframe these indicators will eventually change for different reasoning: demographics dynamics, policy considerations, etc., and therefore the need to control for their influence.
As a third and final step, I include the registered progress in the education policy objective as an additional regressor. This would establish a direct link between the dynamics of education spending (which can be seen as the policy instrument) and the movements in the policy objective, allowing thus an empirical characterization of the policy commitment. As discussed in the introduction of the paper, I consider that the relevant policy objective consists in reducing the share of early school leavers. The most general formulation of the empirical model in its 'full' specification is given below:
$$ \Delta \kern0.28em spendin{g}_{i,t}=f\left(\Delta \kern0.28em economic{I}_{i,t},\kern0.28em \Delta \kern0.28em structure{I}_{i,t-1},\kern0.28em \Delta \kern0.28em objective{I}_{i,t-1}\right), $$
where ∆ objectiveI denotes the progress registered in the policy objective and its 1 year lag is required to expose the policy decision rule consistent with commitment to education. In my empirical setting, commitment implies a very particular sequencing of the decision-making process: the instrument is adjusted only after observing the progress registered in the policy objective.Footnote 15 The 1 year lag might be simply justified by data availability: spending decisions are taken with a forward-looking perspective, but are based on readily available statistical data, which usually refer to some previous period (higher lag orders might be needed if the time gap in data availability is assumed to be longer).
I adopt a system-estimation approach meaning that, for each main education level, I specify an equation describing the dynamics of spending at that particular level, i.e., primary (including pre-primary), secondary and tertiary; accordingly, the corresponding groupings based on the ISCED classification will be: ISCED 0–1, or ISCED 2–4, and ISCED 5–6, respectively. Latter, I will include a fourth equation describing the dynamics of total public spending as a way to acknowledge the existing budget constraints. I estimate the system using seemingly-unrelated regression methods (SUR) to get un-biased and efficient estimates of the coefficients. There are two main arguments that support the system-estimation approach. The first argument rests on the characteristics of the policy decision-making process in public finance, i.e., all decisions are interrelated and simultaneous. A second argument is that all the estimated equations will include at least one common regressor, i.e., ULCs and, eventually, GDP per capita. The inherent assumption behind a SUR estimation method is that the residuals of the individual regressions included in the system are all correlated (i.e., not independent). Accordingly, I always report the Breusch-Pagan test of independence in my results: a rejection of the null (i.e., independence) means that the SUR method is appropriate for estimating the model.
All the data used in the estimation is in nominal terms, local currency, per pupil/student, and expressed in full time equivalents (FTE).Footnote 16 The use of spending per pupil/student adequately captures all relevant demographic dynamics; accordingly, the model does not need to include demographics as a separate regressor, thus, saving valuable degrees of freedom in the estimation. A similar argument can be made with respect to the fourth equation, which specifies the dynamics of total public spending in per capita terms.
Empirical results
A three-equation system
The first thing that needs to be checked is whether the theoretical framework discussed in section 2 is validated by the data. Indeed, for all 'baseline' and 'extended' specifications presented in Table 1 below, the coefficient of the unit labour cost, i.e., the wage-productivity differential or simply the 'Baumol' variable, is always significant in all three equations of the system. Moreover, the constant term is also highly significant, especially at primary and secondary levels (for tertiary education the results are more mixed since the constant term is not always statistically significant). This finding confirms the faster rise in education costs relative to the rise in ULCs—a common proxy for capturing broad inflationary pressures in (most new-Keynesian) economic models. It also reveals the most often advocated solution to the 'cost disease' problem: since only wage growth in excess of productivity gains would be fed into higher education costs. This suggests productivity growth as a possible long-term remedy (see Wolff et al., 2014).
Table 1 Estimates for public education spending per (FTE) student
When comparing the 'baseline' with the 'extended' specification, however, there appears to be a drop in the estimated 'Baumol' coefficient. The drop seems substantial in the case of pre- and primary (ISCED 0–1) and tertiary education levels (ISCED 5–6), but it must be associated with the introduction of the (lagged) share of teachers' wages as an additional control in the regression.Footnote 17 This suggests that the share of teachers' wages is an important determinant—an additional nominal driver of public education costs, though certainly carrying a different content than the 'Baumol cost' variable.Footnote 18 The 'extended' specification includes the additional controls discussed in section 3.2. For the specifications reported here, I use student/teacher ratios instead of class size due to better data availability. The pupil/teacher ratio (only averages over aggregated ISCED 1–3 levels are available from Eurostat) is highly significant and the estimated coefficients are all negative, pointing at the relevance of this indicator for cost analysis.
Adding real GDP per capita as an additional regressor to either 'baseline' or 'extended' specifications does not significantly change the estimated coefficients for ULCs. However, it does improve the overall fit of the model (i.e., the R2 increases when adding GDP per capita). As evidenced in Table 1, adding GDP per capita seems no more than a better way to control for some remaining country-specific factors (probably not entirely removed after first-differentiating the data). Indeed, the specification illustrated in column (5) replaces the GDP per capita with country-specific dummies, which deliver an expected improvement in the overall fit of the model. However, the specification with GDP per capita is preferred by the Akaike (AIC) and Bayesian Information Criterion (BIC), with both being larger in the 'extended & country dummies' compared to 'extended & GDP per capita'.
Section 2 has provided an extensive discussion on the relevant mechanisms behind the dynamics of education spending; yet, these theoretical mechanisms are assumed to work mainly over the long-term. When exploring data with an annual frequency, richer dynamics might be revealed. I rely on the empirical literature to provide some needed guidelines in my investigation. Humphreys (2000) and Delaney and Doyle (2011) among many others find strong evidence that business cycles affect government financial support for higher education. Although primary and secondary education are compulsory and, therefore, should not be too much subject to short-term policy considerations, I will include 'output gap' as an additional regressor (i.e., proxy for business cycle) for all levels of education considered. However, I had to remove GDP per capita from this specification (depicted in column (6) in Table 1) to avoid colinearity due to the high correlation between the two indicators; it is interesting to observe that the Baumol coefficient drops significantly or even losses statistical significance in some cases. This finding points to the fact that the output gap and the ULC might both reflect similar information, such as the gap between demand (proxied by the wage component of the ULC) and supply (proxied by the productivity component of the ULC).
To select the most parsimonious specification, I rely again on AIC and BIC information criteria. In this case, both parsimony measures (AIC and BIC) prefer the specification that includes GDP per capita (column 4) to the one that includes output gap (column 6).
A four-equation system
Within the broader context of public budgeting, education spending represents only a minor allocation of public financial resources. According to Eurostat, at the aggregate level of the European Union (EU), education makes up for a little bit more than 10% of total government spending. Over the 2000–2012 period this percentage has varied significantly across EU member states, from a minimum of 6% in Greece to a maximum of 19% in Estonia. Given this allocation problem, there is a need to account for the existing general public budget constraints when modelling the dynamics of education spending. The Appendix discusses some additional reasons to append a fourth equation to the initial three-equation system.
To address this aspect, specification (7) in Table 2 below presents a four-equation system, where the forth equation describes the dynamics of the general public spending per capita. The initial three-equation system has already addressed the possibility of reallocations (through correlations) between the three relevant education levels: primary, secondary and tertiary. Now, within a four-equation system, the possibility of reallocation between education and other areas of the public budget is also addressed (e.g., spending on economic affairs, military and defense are all substitutes to public education spending, etc.).
The empirical specification of this fourth equation is intentionally kept simple. Baumol's theory can be easily extended to include all publicly-provided services (same assumptions are needed), while income per capita is one of the most relevant determinants for general government spending (see Shelton, 2007). Accordingly, the same two theoretical determinants outlined in the previous sections are included. The coefficients of the first three equations of the system remain basically unchanged between specifications (4) and (7). Interestingly, the fourth equation has a higher explanatory power than the other three, despite relying on just two determinants and a constant term; the constant is statistically significant, though smaller in absolute value than in the case of primary and secondary levels. Other advantages pertaining to specification (7) over specification (4) are highlighted in the Appendix.
Despite all the existing budgetary constraints, some of them arising from the strict EU institutional governance framework, there has always been some degree of domestic policy discretion that could change a given spending allocation due to specific considerations. I intend to reflect the commitment idea using the 'full' model specification, which is portrayed by equation (3) from section 3.2. I am making the implicit assumption that policy commitment implies a very clear sequencing of the decision-making process: current changes in spending allocation to education are decided based on past performances. Compared to the 'extended' specification, I now include an additional regressor in each of the three initial equations of the system. In principle, lagged changes in the policy objective (or similar transformations thereof – see below) should represent relevant measures of past performances. Given the definition of the policy objective,Footnote 19 a rising share of early school leavers would reflect a worsening of the situation, while declining ESL shares reflect an improvement (i.e., the lower the ESL shares, the better). Accordingly, a truly committed policy-maker would increase spending today (and possibly over the next periods) to counterbalance any unfavourable past developments in the ESL objective. Similarly, a policy-maker could be allowed to reduce spending today if past ESL progress is considered as satisfactory.Footnote 20
In reality, different EU countries could make different spending allocations despite registering a similar ESL progress; likewise, countries could behave similarly (i.e., increase/decrease education spending by the same percent) despite witnessing very different ESL developments. Given the large diversity in ESL performancesFootnote 21 across the EU members, there is a need to introduce some relevant benchmarks or reference values in order to adequately evaluate/measure the progress registered in the policy objective for a given country. Despite some inherent simplifications, I present three ways to achieve this goal and illustrate the results in Table 3 below, namely in specifications (8), (9) and (10).
Table 3 Estimates for public education spending per (FTE) student with policy commitment
Firstly, I use the lagged change in ESL as the easiest way to include a feedback mechanism into my specification, which states the policy decision rule dictating how current spending is adjusted. This specification corresponds to column (8) in Table 3 below and is labelled as 'own lag' because progress in the ESL objective is measured against its own lag. The intuition is that a policy-maker would compare the available (t-1) ESL value with its previous, (t-2), ESL value to quantify progress before deciding on education spending for the current time period, t. Consequently, the additional regressor included in specification (8) is given by: ∆log(ESL)t-1 or, equivalently, by the difference defined in ESL levels: log(ESL)t-1 – log(ESL)t-2.
Secondly, for specification (9) in Table 3, I consider that the policy-maker evaluates the available (t-1) ESL value against a fixed (time-invariant) unobserved country-specific reference value. Such a reference value could be either a fixed target or an optimal determined ESL value, though here I include dummy variables to balance the lack of such country-specific measures over the whole estimation period.Footnote 22 This specification, which I label as 'FE', also controls for any remaining cross-sectional heterogeneity not accounted for so far by the 'full' model specification. Consequently, I include log(ESL)t-1 as an additional regressor along with a country-specific dummy.
Thirdly, I consider that, for a given country, the relevant reference value to measure the progress registered by its ESL objective is the EU-aggregate ESL value. This specification, labelled as 'EU27', corresponds to column (10) in Table 3. Consequently, the regressor included in the model is given by the difference between the country-specific ESL and the EU27Footnote 23 average ESL, i.e., log(ESL)t-1 – log(ESLEU27)t-1.
In practice, it is more likely that policy-makers rely on a series of benchmarks and reference values (possibly including the ones employed above, i.e., own lags, specific targets, EU averages etc.) to evaluate their progress and decide spending allocations in education. Based on the results displayed in Table 3, there is some weak statistical evidence of policy commitment in specification (9), where dummies were introduced as a proxy for fixed ESL targets. This evidence should be understood within the limits of the approach considered here. Interestingly, in model (9) the coefficient associated with the commitment proxy is positive for primary education but negative for tertiary education, meaning that, for example, if previous ESL values were above a fixed (country-specific) reference value, spending would increase at the primary (including pre-primary) but decrease at the tertiary education level – a result that makes sense given that the policy objective is expressed in terms of school dropouts with less than (upper) secondary education. Unfortunately, such weak statistical evidence does not allow me to draw strong conclusions with respect to policy commitment in this empirical setting. The explanation could be related to large data heterogeneity issues, most likely due to the presence of some gross outliers in the sample (see Appendix) or simply because the model tries to pool together countries with very different policy attitudes. Empirical results were sensitive to sample length changes, i.e., dropping more than one observation either from the beginning or from the end of the sample would alter the coefficients and their statistical significance; a future investigation that could leverage on better data availability and span over longer time periods should perhaps come with more insights and more robust findings. Probably more interesting, the results were not that sensitive when dropping some of the selected controls that define the 'extended' specification.
Policy discussion
With public education costs growing faster than aggregate prices, affordability concerns inevitably arise in any discussion on public education investment. The empirical results above have exposed the 'cost disease' as a common phenomenon across the EU, especially at primary and secondary education levels. In the case of higher education instead, the results were more mixed (see Table 1), probably due to the nature of the data employed in the empirical analysis, which covers only public funding sources but disregards spending by households, philanthropists, international institutions, etc. In contrast to primary and secondary education, public financial support for tertiary education (which is generally not mandatory) covers only a part of total education costs (with the remaining being filled from private sources). In this context, such mixed results for tertiary education are not quite unexpected. In fact, some recent policy initiatives envisaging a shift in the cost burden from governments to parents and/or students have been taken mainly with respect to tertiary education funding support (e.g., via higher tuition fees, fiscal incentives for student loans, etc.). If this trend in policy changes will continue in the future, it would definitely help to reduce pressure on public budgets. Some forms of cost-sharing might also arise at lower education levels like, for example, private tutoring to compensate for poor/insufficient teaching that usually arises as a consequence of lower public financial support. However, there are good reasons to expect harder public scrutiny in case these practices become the norm rather than the exception (the last paragraph in section 2 lists some reference studies discussing the interaction between education and redistribution policies, and the preferences of the median-voter).
As already suggested in section 4.1 above, growing labour productivity (at least at a faster pace than wages so that unit labour costs are kept in check) might be a way to tackle the 'cost disease' problem over the long-term. Indeed, the 'progressive' sector can more efficiently leverage physical capital and technology to deliver overall productivity gains (despite stagnation in other sectors). However, this productivity-enhancing mechanism can be strengthened when human capital (instead of simply labour) enters the production function of the 'progressive' sector. In fact, some recent extensions of the Baumol model include more positive interactions between the 'progressive' and the 'stagnant' sectors as a way to achieve better economic outcomes (see Pugno, 2006). The dual causality link between education and economic development exposes, nevertheless, a range of effective policy options in this context. More importantly, it highlights a mechanism along which better designed policies can incentivize more (or more efficient) human capital accumulation, not only through the consumption of education services, but also through other forms, such as learning-by-doing, up-skilling, lifelong learning, etc.
It is important to note that some forms of human capital investment do not necessarily require public financial support (e.g., learning-by-doing), but others do so, and sometimes to a large extent (besides education, other examples include re-training and some active labour market policies). From a political economy perspective, and apart from business cycle influences, it is clear that policy-makers can shift their preferences between (short-term) redistributive policies (e.g., transfers or subsidies) and policies incentivizing (long-term) human capital accumulation. There are probably many difficult trade-offs here that arise due to seemingly conflicting objectives. Yet, it is not necessarily the size of public spending allocated to a given policy area that this paper is concerned with, but rather the policy-decision process of adjusting/amending spending in an effort to deliver results on a clear policy mandate. In this context, the weak overall empirical evidence found for policy commitment on education could highlight risks associated with EU economic development prospects. Even if the lack of clear evidence for commitment is mostly a reflection of large heterogeneity issues in the data, it could nevertheless raise concerns with respect to real (economic and political) convergence prospects in Europe.
This paper makes two important contributions. Firstly, it uncovers some of the main economic determinants driving the dynamics of public education spending in a panel dataset spanning across EU members over the 2000–2012 period. The empirical model builds mainly on theoretical insights borrowed from Baumol (1967), Baumol and Bowen (1966), and Bowen (1980). A four-equation system is proposed to resolve the spending allocation problem inherent in a public budgeting process, with three equations describing the dynamics of education spending per student (at primary, secondary and tertiary levels) and a fourth one describing the dynamics of total public spending in per capita terms. The empirical results provide strong evidence that the costs of public education have been rising faster than general prices (proxied here by the aggregate unit labour costs), despite accounting for growing real income/demand effects (proxied by GDP per capita). Besides confirming the theories on which the specification of the model was built, such a finding also exposes, unfortunately, the long-term affordability challenge of public education investment.
Secondly, the empirical setting above is used to investigate whether policy commitment has been a major determinant driving the dynamics of public education spending in Europe. I draw on two main policy agendas and select the share of early school leavers as a relevant policy objective for the EU member states over the whole 2000–2012 period. Then, expanding on the previous model specification, I formulate an empirical equivalent of a policy rule that would determine how education spending decisions are related to past progresses in the policy objective. Nonetheless, I find only weak overall statistical evidence of policy commitment to education across Europe, most likely due to large heterogeneity issues in the dataset. Moreover, the empirical results seem to be sensitive to some basic robustness checks.
Finally, it might be worth pointing out some further research directions. A possible follow-up could attempt to include determinants that draw more heavily on political economy considerations to reflect changes in political and/or institutional aspects, different financing mechanisms, or analyse longer time periods (see Busemeyer, 2007). Moreover, direct extensions of the present analysis of commitment to other policy domains remain possible, especially in the area of employment and social security, although a different theoretical background would be required in this case.
See the official communication of the European Council from March 2000 at http://www.consilium.europa.eu/en/uedocs/cms_data/docs/pressdata/en/ec/00100-r1.en0.htm and the conclusions of the European Council from June 2010 at http://www.consilium.europa.eu/ueDocs/cms_Data/docs/pressData/en/ec/115346.pdf.
As an additional proof of its policy relevance, an ambitious EU-wide target of no more than 10 % was initially specified, and later reiterated within the Europe 2020 version together with country-specific targets, in an attempt to enforce more efficient implementation and accountability.
How much a government spends on education can be seen as a measure of its commitment to education according to Gylfason (2001). Some existing theoretical studies address the issue of policy commitment and time-consistent fiscal policy in general, but with careful considerations with respect to education investment in particular, e.g., Boadway et al. (1996), Gradstein (2000), Andersson and Konrad (2003).
An interesting analysis in the EU context is provided in Dragomirescu-Gaina et al. (2015), who discuss the positive interactions between labour productivity and education choices with a long-term perspective.
Same arguments can be made if one groups public spending on education and health-care together (as long-term investments in human capital) and contrasts them with spending on other public policy areas like employment and social security (which can also be seen as investments in human capital, but from a rather short-term perspective).
This can be seen mostly in the significant drop in the relative prices of information-communication & technology (ICT) equipment (compared to general prices) over the last decades.
Besides labour mobility, several other factors might also contribute: nominal rigidities in setting wages, high unionization levels etc.
At this point, an interested reader might notice the striking similarities between the work of Baumol and the parallel research in international economics done by Balassa (1964) and Samuelson (1964). In a similar modelling setting, two sectors (i.e., the tradable and the non-tradable sector) are characterized by different productivity dynamics, but the convergence of nominal wages later drives domestic inflation higher and generates real appreciation of the domestic currency.
This paper uses first-differences of the data, but a very similar approach is employed in Busemeyer (2007), who uses the residuals from an estimated autoregressive AR(1) model in levels.
See Jefferson (2005) for a survey on empirical evidence related to the causality link between education financing and students' performances. Two more recent but extensive reviews of the previous literature that are on school and teacher's characteristics are Hanushek and Woessmann (2010) and Glewwe et al. (2011).
For example Gundlach et al. (2001) compute sector-specific productivity in the education sector based on the results of standardized achievement tests for U.S.
Heterogeneity might still remain an unsolved issue, despite estimating a model specification in first-differences that control for time invariant characteristics.
The inclusion in the regression of the capital expenditures' share instead of non-personnel costs' share (which is the second biggest contributor to total public education spending) is just a matter of choice. Obviously, one cannot include all three components of public education spending due to collinearity concerns. In this case, capital spending might even provide a clearer interpretation of the results because of its explicit content.
One can argue (and this would be in line with some existing empirical studies) that teachers' age and gender composition are also important determinants of overall education spending; yet, data availability was the main drawback for not using more detailed indicators as controls. Also, one should be careful not to overstate the importance of some qualitative indicators in a time-series analysis; most of these indicators, e.g., describing schools' organizational methods or other institutional dimensions, might have only a one-time effect (see discussion in Wolff et al., 2014) and would therefore not appear as significant in a time-series regression analysis.
Please note that this would not represent an empirical formulation of a transmission mechanism—from spending public resources to reaching specific outcomes—mainly because the causality in the text above runs backwards.
The number of FTE pupil/students used to compute the COFOG-based spending per student at the ISCED 0–1 level correspond only to ISCED 1 pupils, thus excluding ISCED 0 pupils.
This is a common finding in empirical studies when the two regressors involved are correlated. However, in my case, the correlation is not high enough to render any of the coefficients statistically insignificant. Therefore, I take this as evidence of their different content and significance in explaining the dynamics of education spending.
In principle, the share of teachers' wages in education spending would reflect the sensitive balance that governments need to strike between human (e.g., teachers) and physical capital (e.g., schools) when it comes to allocating public funds.
Although not explicitly treated here, education quality might be an important element in the policy discussion. Still, it remains an open question whether consistent (as opposed to one-time) improvements in education quality are possible at all (see Wolff et al., 2014). Meanwhile, there is plenty of space to improve the existing quantitative measures of education attainment, such as ESL.
A never-ending quest for lower ESL values would be inefficient and waste important financial resources when policy-makers face an allocation problem and different policy objectives. Moreover, given the increased efforts to foster mobility across EU countries (e.g., ERASMUS+), most differences between countries should be mitigated through migration or internationalization of education over the long-run. In light of these arguments, 'policy commitment' appears also in the reverse situation when spending is reduced if the progress is satisfactory, thus facilitating a linear interpretation of the model.
Historically, some EU countries have consistently registered very low ESL values over the 2000–2012 period, among them: AT, CZ, DK, FI, HR, LT, PL, SE, SI and SK. Other members such as CY, DE, FR, IE, LU and NL have managed to reach single-digit ESL levels only over the most recent years (i.e., 2010–2012). The rest of the EU members (representing about half of the total EU28) were still above the 10 % EU-wide target as of 2012.
Europe 2020 agenda has introduced country-specific targets for ESL. However, these targets were adopted after the year 2010 (though at different moments) and were therefore not available for most of the period under consideration here. Only as an exercise, I have used the deviations from these politically 'announced' country-specific targets, but the results were not significant (result are available from the author). If anything, this highlights the lack of relevance (both political and empirical) of the announced targets, thus supporting the use of a series of reference values in the present evaluation of policy commitment.
I use EU27 averages since data availability for EU28 is severely limited.
Ahmad N, Lequiller F, Marianna P, Pilat D, Schreyer P, Wolfl A (2003) Comparing labour productivity growth in the OECD area: the role of measurement, OECD-STI Working Paper No. 14/03
Andersson F, Konrad KA (2003) Human capital investment and globalization in extortionary states. J Public Econ 87(7):1539–55
Archibald RB, Feldman DH (2008) Explaining increases in higher education costs. J Higher Educ 79(3):268–95
Balassa B (1964) The purchasing-power parity doctrine: A reappraisal. J Polit Econ 72:584–96
Baumol WJ (1967) Macroeconomics of unbalanced growth: the anatomy of urban crisis. Am Econ Rev 57(3):415–26
Baumol, WJ (2012). The cost disease: Why computers get cheaper and health care doesn't. Yale university press. http://qrixqln.yalebooks.com/yupbooks/excerpts/Baumol_excerpt.pdf.
Baumol WJ, Bowen WG (1966) Performing arts: The economic dilemma. Twentieth Century Fund, New York
Benabou R (1996) Inequality and growth. NBER Macroecon Annu 11:11–92
Benabou R (2002) Tax and Education Policy in a Heterogeneous‐Agent Economy: What Levels of Redistribution Maximize Growth and Efficiency? Econometrica 70(2):481–517
Boadway R, Marceau N, Marchand M (1996) Investment in education and the time inconsistency of redistributive tax policy. Economica 63:171–89
Bowen HR (1980) The costs of higher education: How much do colleges and universities spend per student and how much should they spend? Jossey-Bass, San Francisco
Busemeyer MR (2007) Determinants of public education spending in 21 OECD democracies, 1980–2001. J Eur Public Policy 14(4):582–610
Chen X, Moul CC (2014) Disease or utopia? Testing Baumol in education. Econ Lett 122(2):220–3
Delaney JA, Doyle WR (2011) State spending on higher education: Testing the balance wheel over time. J Educ Finance 36(4):343–68
Dragomirescu-Gaina C, Elia L, Weber A (2015) A fast-forward look at tertiary education attainment in Europe 2020. J Policy Model 37(5):804–19
Easterly W (2001) The middle class consensus and economic development. J Econ Growth 6(4):317–35
Fernandez R, Rogerson R (1995) On the political economy of education subsidies. Rev Econ Stud 62(2):249–62
Fernandez R, Rogerson R (1996). Income distribution, communities, and the quality of public education. Quarterly J Econ. 111(1):135–64.
Fernandez R, Rogerson R (1998). Public education and income distribution: A dynamic quantitative evaluation of education-finance reform. Am Econ Review. 88(4):813–33.
Fernandez R, Rogerson R (2001). The determinants of public education expenditures: Longer-run evidence from the states. J Educ Finance. 27(1):567–83.
Glewwe PW, Hanushek EA, Humpage SD, Ravina R (2011). School resources and educational outcomes in developing countries: A review of the literature from 1990 to 2010. National Bureau of Economic Research, No. 17554. http://www.nber.org/papers/w17554.
Gradstein M (2000) An economic rationale for public education: the value of commitment. J Monet Econ 45(2):463–74
Gradstein M, Justman M (1997) Democratic choice of an education system: implications for growth and income distribution. J Econ Growth 2(2):169–83
Gundlach E, Wossmann L, Gmelin J (2001) The decline of schooling productivity in OECD countries. Econ J 111(471):135–47
Gylfason T (2001) Natural resources, education, and economic development. Eur Econ Rev 45(4):847–59
Hanushek EA (2003) The Failure of Input‐based Schooling Policies. Econ J 113(485):F64–98
Hanushek EA, Woessmann L (2010). The economics of international differences in educational achievement. National Bureau of Economic Research, No. 15949. http://www.nber.org/papers/w15949.
Hartwig J (2008) What drives health care expenditure? Baumol's model of'unbalanced growth' revisited. J Health Econ 27(3):603–23
Hartwig J (2011) Can Baumol's model of unbalanced growth contribute to explaining the secular rise in health care expenditure? An alternative test. Appl Econ 43(2):173–84
Humphreys BR (2000). Do business cycles affect state appropriations to higher education? Southern Econ. J. 67(2):398–413.
Jefferson AL (2005) Student Performance: Is More Money the Answer? J Educ Finance 31(2):111–24
Kane TJ (1999) The price of admission: Rethinking how Americans pay for college. The Brookings Institution Press, Washington
Nordhaus WD (2008) Baumol's diseases: a macroeconomic perspective. BE J Macroecon 8:1
Pugno M (2006) The service paradox and endogenous economic growth. Struct Chang Econ Dyn 17(1):99–115
Samuelson P (1964) Theoretical notes on trade problems. Rev Econ Stat 23:145–54
Sasaki H (2007) The rise of service employment and its impact on aggregate productivity growth. Struct Chang Econ Dyn 18(4):438–59
Shelton CA (2007) The size and composition of government expenditure. J Public Econ 91:2230–60
Wolff EN, Baumol WJ, Saini AN (2014) A comparative analysis of education costs and outcomes: The United States vs. other OECD countries. Econ Educ Rev 39:1–21
I would like to thank Leandro Elia, the editor (Sara de la Rica) and an anonymous reviewer of this journal for comments and suggestions that helped me streamline the analysis presented here. Comments from Stan van Alphen and Lene Mejer on a preliminary version of this paper are also acknowledged. Obviously, the usual disclaimer applies and all the remaining errors are mine. The paper builds and expands on a work project that started during my stay at the Econometrics and Applied Statistics Unit, DG Joint Research Centre, European Commission.
Responsible editor: Sara de la Rica
Independent researcher, ᅟ, Romania
Catalin Dragomirescu-Gaina
Search for Catalin Dragomirescu-Gaina in:
Correspondence to Catalin Dragomirescu-Gaina.
The IZA Journal of European Labor Studies is committed to the IZA Guiding Principles of Research Integrity. The author declares that he/she has observed these principles.
This appendix presents a sensitivity check of the estimated model coefficients displayed for both specifications (4) and (7) in Table 2. This sensitivity check addresses heterogeneity concerns with respect to countries included in the sample and consists in estimating the same model specification, but excluding one country at a time. The resulting 'shadow coefficients' are plotted in the figure below, where the x-axis lists the excluded country. The advantages of a four-equation system can be most clearly exposed using a simple standard deviation measure to reflect the departure of the 'shadow coefficients' from the ones reported in specifications (4) or (7). What is probably less clear in the figure, but becomes evident when displaying the raw standard deviations (henceforth sdev.) of the 'shadow coefficients', is that the outliers now play a less important role (though the bias in the estimated coefficients cannot be entirely eliminated). The three-equation system exhibits a sdev. of 5.94, 4.29 and 3.44% for the 'shadow coefficients associated with ULCs in equations 1, 2, and 3, respectively, and 4.02, 5.82 and 3.75% for GDP per capita. Similarly, the four-equation system exhibits a sdev. of 5.46, 4.19 and 3.12% for ULCs in equations 1, 2, and 3, respectively, and 3.73, 5.77 and 3.59% for GDP per capita.
Moreover, the figure above suggests that the overall sample might contain some gross outliers (which seem to introduce a bias in the estimated coefficients) such as: BG, CY, CZ, GR, HU, LT, LV, MT, RO, and SI. These findings are not surprising since some of these countries have suffered harsh adjustments in their fiscal policies, including their public expenditures: RO, LT and LV implemented broad austerity programs negotiated with the international financial institutions, mostly in 2010 (the 25% cut in public wages in RO and LV, and the 15% cut in LT have mostly affected education, health-care and other related public sectors with large employment shares). Others such as GR, and to a smaller extent HU, have been among the countries that required international financial assistance in the aftermath of the recent global economic and European sovereign crisis. Countries such as ES, IE, IT, and PT have also benefited from different forms of EU financial support, but these countries do not seem to bias the estimated 'shadow coefficients' according to the figure above. However, most (if not all) EU members have undertaken some adjustments in their public spending policies over the recent periods, especially as a consequence of the sovereign crisis that has increased the pressure on public finances across the board.
Just as an illustration of the possible effects introduced in the model by the outliers identified above and/or the recent European sovereign crisis, I depart from specifications (4) and (7) from Table 2 and illustrate four additional specifications that include: (i) country-specific dummies, but only for the list of nominated outliers above, i.e., BG, CY, CZ, GR, HU, LT, LV, MT, RO, SI, and (ii) time dummies that cover the whole estimation period. The alternative specifications are labeled (4.i), (4.ii), (7.i) and (7.ii) and are displayed in Table 4 below. Two main findings are worth mentioning here. Firstly, as expected, correcting for outliers has generated an overall increase in the explanatory power of the empirical model (as captured by the R2), especially at the ISCED 0–1 and ISCED 2–4 levels. Secondly, the coefficients associated with ULCs and GDP per capita have remained statistically significant across all the alternative specifications, and this is a strong confirmation for the importance of the economic determinants identified in section 2.
Table 4 Alternative estimates for public education spending per (FTE) student – tackling outliers
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Dragomirescu-Gaina, C. An empirical inquiry into the determinants of public education spending in Europe. IZA J Labor Stud 4, 25 (2015) doi:10.1186/s40174-015-0049-7
Baumol cost disease
Policy commitment | CommonCrawl |
Laser and Particle Beams (8)
Experimental platform for the investigation of magnetized-reverse-shock dynamics in the context of POLAR
On the Cover of HPL
HPL Laboratory Astrophysics
B. Albertazzi, E. Falize, A. Pelka, F. Brack, F. Kroll, R. Yurchak, E. Brambrink, P. Mabey, N. Ozaki, S. Pikuz, L. Van Box Som, J. M. Bonnet-Bidaud, J. E. Cross, E. Filippov, G. Gregori, R. Kodama, M. Mouchet, T. Morita, Y. Sakawa, R. P. Drake, C. C. Kuranz, M. J.-E. Manuel, C. Li, P. Tzeferacos, D. Lamb, U. Schramm, M. Koenig
Published online by Cambridge University Press: 16 July 2018, e43
The influence of a strong external magnetic field on the collimation of a high Mach number plasma flow and its collision with a solid obstacle is investigated experimentally and numerically. The laser irradiation ( $I\sim 2\times 10^{14}~\text{W}\cdot \text{cm}^{-2}$ ) of a multilayer target generates a shock wave that produces a rear side plasma expanding flow. Immersed in a homogeneous 10 T external magnetic field, this plasma flow propagates in vacuum and impacts an obstacle located a few mm from the main target. A reverse shock is then formed with typical velocities of the order of 15–20 $\pm$ 5 km/s. The experimental results are compared with 2D radiative magnetohydrodynamic simulations using the FLASH code. This platform allows investigating the dynamics of reverse shock, mimicking the processes occurring in a cataclysmic variable of polar type.
Short-pulse laser-driven x-ray radiography
HEDP and HPL 2016
E. Brambrink, S. Baton, M. Koenig, R. Yurchak, N. Bidaut, B. Albertazzi, J. E. Cross, G. Gregori, A. Rigby, E. Falize, A. Pelka, F. Kroll, S. Pikuz, Y. Sakawa, N. Ozaki, C. Kuranz, M. Manuel, C. Li, P. Tzeferacos, D. Lamb
Published online by Cambridge University Press: 21 September 2016, e30
We have developed a new radiography setup with a short-pulse laser-driven x-ray source. Using a radiography axis perpendicular to both long- and short-pulse lasers allowed optimizing the incident angle of the short-pulse laser on the x-ray source target. The setup has been tested with various x-ray source target materials and different laser wavelengths. Signal to noise ratios are presented as well as achieved spatial resolutions. The high quality of our technique is illustrated on a plasma flow radiograph obtained during a laboratory astrophysics experiment on POLARs.
Impulsive electric fields driven by high-intensity laser matter interactions
M. BORGHESI, S. KAR, L. ROMAGNANI, T. TONCIAN, P. ANTICI, P. AUDEBERT, E. BRAMBRINK, F. CECCHERINI, C.A. CECCHETTI, J. FUCHS, M. GALIMBERTI, L.A. GIZZI, T. GRISMAYER, T. LYSEIKINA, R. JUNG, A. MACCHI, P. MORA, J. OSTERHOLTZ, A. SCHIAVI, O. WILLI
Journal: Laser and Particle Beams / Volume 25 / Issue 1 / March 2007
The interaction of high-intensity laser pulses with matter releases instantaneously ultra-large currents of highly energetic electrons, leading to the generation of highly-transient, large-amplitude electric and magnetic fields. We report results of recent experiments in which such charge dynamics have been studied by using proton probing techniques able to provide maps of the electrostatic fields with high spatial and temporal resolution. The dynamics of ponderomotive channeling in underdense plasmas have been studied in this way, as also the processes of Debye sheath formation and MeV ion front expansion at the rear of laser-irradiated thin metallic foils. Laser-driven impulsive fields at the surface of solid targets can be applied for energy-selective ion beam focusing.
Laser triggered micro-lens for focusing and energy selection of MeV protons
O. WILLI, T. TONCIAN, M. BORGHESI, J. FUCHS, E. D'HUMIÈRES, P. ANTICI, P. AUDEBERT, E. BRAMBRINK, C. CECCHETTI, A. PIPAHL, L. ROMAGNANI
Published online by Cambridge University Press: 28 February 2007, pp. 71-77
We present a novel technique for focusing and energy selection of high-current, MeV proton/ion beams. This method employs a hollow micro-cylinder that is irradiated at the outer wall by a high intensity, ultra-short laser pulse. The relativistic electrons produced are injected through the cylinder's wall, spread evenly on the inner wall surface of the cylinder, and initiate a hot plasma expansion. A transient radial electric field (107–1010 V/m) is associated with the expansion. The transient electrostatic field induces the focusing and the selection of a narrow band component out of the broadband poly-energetic energy spectrum of the protons generated from a separate laser irradiated thin foil target that are directed axially through the cylinder. The energy selection is tunable by changing the timing of the two laser pulses. Computer simulations carried out for similar parameters as used in the experiments explain the working of the micro-lens.
High energy heavy ion jets emerging from laser plasma generated by long pulse laser beams from the NHELIX laser system at GSI
G. SCHAUMANN, M.S. SCHOLLMEIER, G. RODRIGUEZ-PRIETO, A. BLAZEVIC, E. BRAMBRINK, M. GEISSEL, S. KOROSTIY, P. PIRZADEH, M. ROTH, F.B. ROSMEJ, A.YA. FAENOV, T.A. PIKUZ, K. TSIGUTKIN, Y. MARON, N.A. TAHIR, D.H.H. HOFFMANN
Journal: Laser and Particle Beams / Volume 23 / Issue 4 / October 2005
High energy heavy ions were generated in laser produced plasma at moderate laser energy, with a large focal spot size of 0.5 mm diameter. The laser beam was provided by the 10 GW GSI-NHELIX laser systems, and the ions were observed spectroscopically in status nascendi with high spatial and spectral resolution. Due to the focal geometry, plasma jet was formed, containing high energy heavy ions. The velocity distribution was measured via an observation of Doppler shifted characteristic transition lines. The observed energy of up to 3 MeV of F-ions deviates by an order of magnitude from the well-known Gitomer (Gitomer et al., 1986) scaling, and agrees with the higher energies of relativistic self focusing.
Radiation dynamics of fast heavy ions interacting with matter
O.N. ROSMEJ, S.A. PIKUZ, S. KOROSTIY, A. BLAZEVIC, E. BRAMBRINK, A. FERTMAN, T. MUTIN, V.P. SHEVELKO, V.P. EFREMOV, T.A. PIKUZ, A.Ya. FAENOV, P. LOBODA, A.A. GOLUBEV, D.H.H. HOFFMANN
Journal: Laser and Particle Beams / Volume 23 / Issue 3 / September 2005
Published online by Cambridge University Press: 30 August 2005, p. 396
Below is the complete Reference citation for Hoffmann et al. (2005).
Hoffmann, D.H.H., Blazevic, A., Ni, P., Rosmej, O., Roth, M., Tahir, N., Tauschwitz, A., Udrea, S., Varentsov, D., Weyrich, K. & Maron, Y. (2005). Present and future perspectives for high energy density physics with intense heavy ion and laser beams. Laser Part. Beams 23, 47–53.
Status of PHELIX laser and first experiments
P. NEUMAYER, R. BOCK, S. BORNEIS, E. BRAMBRINK, H. BRAND, J. CAIRD, E.M. CAMPBELL, E. GAUL, S. GOETTE, C. HAEFNER, T. HAHN, H.M. HEUCK, D.H.H. HOFFMANN, D. JAVORKOVA, H.-J. KLUGE, T. KUEHL, S. KUNZER, T. MERZ, E. ONKELS, M.D. PERRY, D. REEMTS, M. ROTH, S. SAMEK, G. SCHAUMANN, F. SCHRADER, W. SEELIG, A. TAUSCHWITZ, R. THIEL, D. URSESCU, P. WIEWIOR, U. WITTROCK, B. ZIELBAUER
This paper reports on the status of the PHELIX petawatt laser which is built at the Gesellschaft fuer Schwerionenforschung (GSI) in close collaboration with the Lawrence Livermore National Laboratory (LLNL), and the Commissariat à l'Energie Atomique (CEA) in France. First experiments carried out with the chirped pulse amplification (CPA) front-end will also be briefly reviewed.
Laser accelerated ions and electron transport in ultra-intense laser matter interaction
M. ROTH, E. BRAMBRINK, P. AUDEBERT, A. BLAZEVIC, R. CLARKE, J. COBBLE, T.E. COWAN, J. FERNANDEZ, J. FUCHS, M. GEISSEL, D. HABS, M. HEGELICH, S. KARSCH, K. LEDINGHAM, D. NEELY, H. RUHL, T. SCHLEGEL, J. SCHREIBER
Published online by Cambridge University Press: 02 June 2005, pp. 95-100
Since their discovery, laser accelerated ion beams have been the subject of great interest. The ion beam peak power and beam emittance is unmatched by any conventionally accelerated ion beam. Due to the unique quality, a wealth of applications has been proposed, and the first experiments confirmed their prospects. Laser ion acceleration is strongly linked to the generation and transport of hot electrons by the interaction of ultra-intense laser light with matter. Comparing ion acceleration experiments at laser systems with different beam parameters and using targets of varying thickness, material and temperature, some insight on the underlying physics can be obtained. The paper will present experimental results obtained at different laser systems, first beam quality measurement on laser accelerated heavy ions, and ion beam source size measurements at different laser parameters. Using structured targets, we compare information obtained from micro patterned ion beams about the accelerating electron sheath, and the influence of magnetic fields on the electron transport inside conducting targets.
O.N. ROSMEJ, S.A. PIKUZ, S. KOROSTIY, A. BLAZEVIC, E. BRAMBRINK, A. FERTMAN, T. MUTIN, V.P. EFREMOV, T.A. PIKUZ, A.YA. FAENOV, P. LOBODA, A.A. GOLUBEV, D.H.H. HOFFMANN
Published online by Cambridge University Press: 02 June 2005, pp. 79-85
The study of heavy ion stopping dynamics using associated K-shell projectile and target radiation was the focus of the reported experiments. Ar, Ca, Ti, and Ni projectile ions with the initial energies of 5.9 and 11.4 MeV/u were slowed down in quartz and arogels. Characteristic radiation of projectiles and target atoms induced in close collisions was registered. The variation of the projectile ion line Doppler shift due to the ion deceleration measured along the ion beam trajectory was used to determine the ion velocity dynamics. The dependence of the ion velocity on the trajectory coordinate was measured over 70–90% of the ion beam path with a spatial resolution of 50–70 μm. The choice of SiO2 aerogel with low mean densities of 0.04–0.15 g/cm3 as a target material, made it possible to stretch the ion stopping range by more than 20–50 times in comparison with solid quartz. It allowed for resolving the dynamics of the ion stopping process. Experimentally, it has been proven that the fine porous nano-structure of aerogels does not affect the ion energy loss and charge state distribution. The strong increase of the ion stopping range in aerogels made it possible to resolve fast ion radiation dynamics. The analysis of the projectile Kα-satellites structure allows supposing that ions propagate in solid in highly exicted states. This can provide an experimental explanation for so called gas-solid effect.
Methods of charge-state analysis of fast ions inside matter based on their X-ray spectral distribution
F.B. ROSMEJ, R. MORE, O.N. ROSMEJ, J. WIESER, N. BORISENKO, V.P. SHEVELKO, M. GEIßEL, A. BLAZEVIC, J. JACOBY, E. DEWALD, M. ROTH, E. BRAMBRINK, K. WEYRICH, D.H.H. HOFFMANN, A.A. GOLUBEV, V. TURTIKOV, A. FERTMAN, B.YU. SHARKOV, A.YA. FAENOV, T.A. PIKUZ, A.I. MAGUNOV, I.YU. SKOBELEV
Journal: Laser and Particle Beams / Volume 20 / Issue 3 / July 2002
The X-ray spectral distribution of swift heavy Ti and Ni ions (11 MeV/u) observed inside aerogels (ρ = 0.1 g/cm3) and dense solids (quartz, ρ = 2.23 g/cm3) indicates a strong presence of simultaneous 3–5 charge states with one K-hole. We show that the theoretical analysis can be split into two tasks: first, the treatment of complex autoionizing states together with the originating spectral distribution, and, second, a charge-state distribution model. Involving the generalized line profile function theory, we discuss attempts to couple charge-state distributions. | CommonCrawl |
\begin{document}
\title{Persistent Shadowing For Actions Of Some Finitely Generated Groups And Related Measures} \author { Ali Barzanouni} \address{Department of Mathematics, School of Mathematical Sciences, Hakim Sabzevari University, Sabzevar, Iran} \email{[email protected], [email protected]} \subjclass[2010]{Primary: 37C85; Secondary: 37B25, 37B05}
\keywords{ Shadowing, Persistent, Borel Measure} \date{} \maketitle
\begin{abstract} In this paper, $\varphi:G\times X\to X$ is a continuous action of finitely generated group $G$ on compact metric space $(X, d)$ without isolated point. We introduce the notion of persistent shadowing property for $\varphi:G\times X\to X$ and study it via measure theory. Indeed, we introduce the notion of compatibility the Borel probability measure $\mu$ with respect persistent shadowing property of $\varphi:G\times X\to X$ and denote it by $\mu\in\mathcal{M}_{PSh}(X, \varphi)$. We show $\mu\in\mathcal{M}_{PSh}(X, \varphi)$ if and only if $supp(\mu)\subseteq PSh(\varphi)$, where $PSh(\varphi)$ is the set of all persistent shadowable points of $\varphi$. This implies that if every non-atomic Borel probability measure $\mu$ is compatible with persistent shadowing property for $\varphi:G\times X\to X$, then $\varphi$ does have persistent shadowing property. We prove that $\overline{PSh(\varphi)}=PSh(\varphi)$ if and only if $\overline{\mathcal{M}_{PSh}(X, \varphi)}= \mathcal{M}_{PSh}(X, \varphi)$. Also, $\mu(\overline{PSh(\varphi)})=1$ if and only if $\mu\in\overline{\mathcal{M}_{PSh}(X, \varphi)}$. Finally, we show that $\overline{\mathcal{M}_{PSh}(X, \varphi)}=\mathcal{M}(X)$ if and only if $\overline{PSh(\varphi)}=X$.
For study of persistent shadowing property, we introduce the notions of uniformly $\alpha$-persistent point, uniformly $\beta$-persistent point and recall notions of shadowing property, $\alpha$-persistent, $\beta$-persistent and we give some further results about them.
\end{abstract} \section{Introduction} A continuous action is a triple $(X, G, \varphi)$ where $X$ is a compact metric space with a metric $d$ and $G$ is a finitely generated group with the discrete topology which acts on $X$, such that the action $\varphi$ is continuous.
We denote a continuous action $(X, G, \varphi)$ by $\varphi:G\times X\to X$, also we denote by $Act(G;X)$ the set of all continuous actions $\varphi$ of $G$ on $X$. Let $S$ be a finite generating set of $G$. We consider a metric $d_S$ on $Act(G;X)$ by
\begin{equation*} d_S(\varphi, \psi)= \sup \{d(\varphi(s, x), \psi(s, x)): x\in X, s\in S\} \end{equation*} for $\varphi, \psi\in Act(G; X)$.\\ A map $f:G\to X$ is called a $\delta$-pseudo orbit for a continuous action $\varphi:G\times X\to X$ ( with respect to $S$), if $d(f(sg), \varphi(s, f(g)))<\delta$ for all $s\in S$ and all $g\in G$. A $\delta$-pseudo orbit $f:G\to X$ (with respect to $S$ ) is $\epsilon$-shadowed by $\varphi$-orbit $x\in X$, if $d(f(g), \varphi(g, x))<\epsilon$ for all $g\in G$. A continuous action $\varphi:G\times X\to X$ has shadowing property (with respect to $S$) if for every $\epsilon>0$ there is a $\delta>0$ such that every $\delta$-pseudo orbit $f:G\to X$ for $\varphi$ can be $ \epsilon$-shadowed by the $\varphi$-orbit of a point $p\in X$, this means that $d(f(g), \varphi(g, p))<\epsilon$ for all $g\in G$. The notion of shadowing property for actions of finitely generated groups introduced by Osipov and Tikhomirov in \cite{osipov}. They showed that the shadowing property for actions of finitely generated groups depends on both of the hyperbolic properties of actions of its elements and the group structure. For example, if $G$ is a finitely generated nilpotent group and action of one element in $G$ is hyperbolic, then the group action has the shadowing property while it cannot be directly generalized to the case of solvable groups.
The notion of topological stability for an action of a finitely generated group on a compact metric space was introduced by
Chung and Lee in \cite{chung} and they gave a group action version of the Walter's stability theorem. Indeed, a continuous action $\varphi:G\times X\to X$ is topologically stable (with respect to $S$) , if for every $\epsilon>0$ there is $\delta>0$ such that for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$, there is a continuous map $f:X\to X$ such that
$d_{C^0}(f, id)<\epsilon$ and $\varphi_gf=f\psi_g$ for all $g\in G$. Moreover, $\varphi$ is called $s$-topologically stable when there exists a surjective continuous map $f:X\to X$ that satisfies the mentioned properties.
If $\varphi:G\times X\to X$ is topologically stable, then for every $\epsilon>0$ there is $\delta>0$ such that for every $x\in X$ and every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$, we have $d(\varphi(g, f(x)), \psi(g, x))<\epsilon$ for all $g\in G$. Having this property is well known to say that the continuous action $\varphi$ is $\alpha$-persistent. When $\varphi$ is $s$-topologically stable, for every $\epsilon>0$ there is $\delta>0$ such that for every $x\in X$ and every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$, we can say that if $f(y)=x$, then $d(\varphi(g, x ), \psi(g, y))<\epsilon$ for all $g\in G$.
In this case, $\varphi$ is called $\beta$-persistent. In other words, a dynamical system is $\beta$- persistent if its trajectories can be seen on every small perturbation of it. Although $s$-topologically stable implies $\beta$-persistent but topological stability does not imply $\beta$-persistent. For example, Sakai and Kobayashi \cite{sakai1} observed that the full shift on two symbols is not $\beta$-persistent while it is topologically stable. Recently, the authors in \cite{jung}, introduced a new tracing property for a homeomorphism $f:X\to X$ referred to as persistent shadowing property and proved that a homeomorphism has persistent shadowing property
if and only if it has shadowing property and it is $\beta$-persistent. This implies that a homeomorphism has persistent shadowing property if and only if it
is pointwise persistent shadowable.\\
\noindent
In this paper, we extend the notion of persistent shadowing property for a continuous action $\varphi:G\times X\to X$ of some finitely generated group $G$ on metric space $(X, d)$. Persistent shadowing property is stronger than of shadowing property and $\beta$-persistent, but in equicontinuous actions, shadowing and persistent shadowing properties are equivalent. This implies that every equicontinuous action on the Cantor space $X$ does have persistent shadowing property. The notion of persistent shadowing property does not depend on the choice of a symmetric finitely generating set and it does not depend on choice of metric $X$ if $X$ is compact metric space. But Example \ref{example2} shows that compactness is essential. Assume that $H$ be a subgroup of $G$. It may be happen that $\varphi:H\times X\to X$ does have the persistent shadowing property
while $\varphi:G\times X\to X$ does not have it. But in Proposition \ref{syndetic}, we show that if $H$ is a syndetic subgroup of $G$, then the situation
is different. Also we study relation between persistent shadowing property of $\varphi:G\times X\to X$ and $\varphi_g:X\to X$.
There is system $\varphi:G\times X\to X$ with persistent shadowing property while $\varphi_g:X\to X$ does not have persistent shadowing property.
If $G$ is free group, then the situation is different. Indeed in Proposition \ref{op2},
we show that if $F_2=\langle a, b\rangle$ is a free group and $\varphi:F_2\times X\to X$ has shadowing property, then $\varphi_{a^{-1}b}:X\to X$ has
persistent shadowing property. Also, one can check that these results do hold for notions of shadowing property, $\alpha$-persistent and $\beta$-persistent, see Remark\ref{224} and Remark \ref{225}\\
\noindent
Recently, in \cite{ali2}, we introduced the notion of compatibility of a measure with respect to $\alpha$-persistent. We extend this notion with respect persistent shadowing property and the set of compatibility measures with persistent shadowing property for $\varphi$ is denoted by $\mathcal{M}_{PSh}(X, \varphi)$, see Subsection \ref{s22}. We show that $\mathcal{M}_{PSh}(X, \varphi)$ is an $F_{\sigma\delta}$ subset of $\mathcal{M}(X)$ and for $\mu\in\mathcal{M}_{PSh}(X, \varphi)$ and homeomorphism $f:X\to Y$, we have $f_*(\mu)\in \mathcal{M}_{PSh}(Y, f\circ \varphi\circ f^{-1})$ where $f\circ \varphi\circ f^{-1}:G\times Y\to Y$ is defined by
$f\circ \varphi\circ f^{-1}(g, x)=f\circ \varphi_g\circ f^{-1}(x)$, see Proposition \ref{kj}. In Proposition \ref{pki}, we show that
if measure $\mu$ is compatible with persistent shadowing property for continuous action $\varphi$, then $\varphi$ does have persistent shadowing property on
$supp(\mu)$. This implies that if every non-atomic probability measure is compatible with persistent shadowing property for continuous action $\varphi$,
then $\varphi$ has persistent shadowing property. Also, we introduce compatibility a measure with respect shadowing property, $\alpha$-persistent and
$\beta$-persistent for continuous action $\varphi:G\times X\to X$ and denote them by
$\mathcal{M}_{Sh}(X, \varphi)$, $\mathcal{M}_\alpha(X, \varphi)$ and $\mathcal{M}_{\beta}(X, \varphi)$, respectively. Results of Proposition \ref{pki}
can be obtain for compatibility a measure in the case of shadowing property and $\beta$-persistent, see Remark \ref{pkii}.
In Section 3, we introduce the notions of persistent shadowable points, uniformly $\alpha$-persistent point, uniformly $\beta$-persistent point
and denote them by $PSh(\varphi)$, $UPersis_\alpha(\varphi)$ and $UPersis_\beta(\varphi)$, respectively. Also, we recall the notions of shadowable points,
$\alpha$-persistent points, $\beta$-persistent points for continuous action $\varphi:G\times X\to X$ and denote them by $Sh(\varphi)$, $Persis_\alpha(\varphi)$ and $Persis_\beta(\varphi)$, respectively. Although $Sh(\varphi)\subseteq UPersis_\alpha(\varphi)\subseteq Persis_\alpha(\varphi)$ but Example \ref{non-shadowable} shows that the $Sh(\varphi)\neq UPersis_\alpha(\varphi)$ and $Upersis_\alpha(\varphi)\neq Persis_\alpha(\varphi)$. For equicontinuous action $\varphi:G\times X\to X$, we have $UPersis_\beta(\varphi)=Persis_\beta(\varphi)$ and $Persis_\alpha(\varphi)\subseteq Persis_\beta(\varphi)$. Moreover, if $X$ is generalized homogeneous compact metric space, then $Sh(\varphi)=UPersis_\alpha(\varphi)=Persis_\alpha(\varphi)$, see Proposition \ref{u}.\\ In Subsection \ref{400}, we study persistent shadowable point for a group action and in item 3 of Proposition \ref{wok}, we show that
Continuous action $\varphi:G\times X\to X$ has the persistent shadowing property if and only if it is pointwise persistent shadowable. Also in item 4 of
Proposition \ref{wok}, we prove that $PSh(\varphi)= UPersis_\beta(\varphi)\cap Sh(\varphi)$. This implies that continuous action $\varphi:G\times X\to X$ has
persistent shadowing property if and only if it is $\beta$-persistent and it has shadowing
property.\\
In Subsection \ref{401}, we study various shadowable points via
measure theory. Indeed, for continuous action $\varphi:G\times X\to X$ on compact metric space $(X,
d)$ and Borel probability measure
$\mu$, we show that
$\mu\in M_{PSh}(X, \varphi)\Leftrightarrow
supp(\mu)\subseteq PSh(\varphi)$, see Proposition \ref{Lb}. In Proposition \ref{Lbb} we show that $\mu(\overline{PSh(\varphi))}=1\Leftrightarrow \mu\in\overline{\mathcal{M}_{PSh}(X,
\varphi)}$ and in Proposition \ref{12354}, we show that $\overline{PSh(\varphi)}=PSh(\varphi)$ if and only if $\overline{\mathcal{M}_{PSh}(X, \varphi)}=\mathcal{M}_{PSh}(X, \varphi)$. Note that, result of this paragraph, obtain for other types of shadowing property, see Proposition \ref{12355}. In equicontinuous action $\varphi:G\times X\to X$, $Persis_\beta(\varphi)=UPersis_\beta(\varphi)$ is closed set in $X$, hence we can say that $\overline{\mathcal{M}_{\beta}(X, \varphi)}=\mathcal{M}_\beta(X, \varphi)$ if $\varphi:G\times X\to X$ is equicontinuous action. This implies that $\mu(Persis_\beta(\varphi))=1$ if and only if $\mu\in\mathcal{M}_\beta(X, \varphi)$, whenever $\varphi:G\times X\to X$ is equicontinuous action.
Finally, In Proposition \ref{3214}, we show that $\overline{\mathcal{M}_{PSh}(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{PSh(\varphi)}=X$, $\overline{\mathcal{M}_{Sh}(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{Sh(\varphi)}=X$, $\overline{\mathcal{M}_\beta(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{Persis_\beta(\varphi)}=X$ and $\overline{\mathcal{M}_{\alpha}(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{Persis_\alpha(\varphi)}=X$.
\section{Persistent shadowing property } In this section, firstly, we extend the notion of persistent shadowing property from \cite{jung} to group actions and study it.
\begin{definition}\label{def21}
A continuous action $\varphi:G\times X\to X$ has persistent shadowing property ( with respect $S$) if for every $\epsilon>0$ there is $\delta>0$ such that
every $\delta$-pseudo orbit $f:G\to X$ for $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ can be $(\psi, \epsilon)$-shadowed by a point $p\in X$.
\end{definition} It is not hard to see that notion of persistent shadowing property does not depend on the choice of a symmetric finitely generating set. Also one can check that the this notion does not depend on choice of metric $X$ if $X$ is compact metric space. The following example shows that compactness is essential.
\begin{example}\label{example2} Let $T:\mathbb{R}\to S^1\setminus\{(0, 1)\}$ be a map given by \begin{equation*} T(t)=(\frac{2t}{1+t^2}, \frac{t^2-1}{t^2+1}), \quad \text{for all }t\in\mathbb{R}, \end{equation*} and let $X=T(\mathbb{Z})$. Let $d'$ be the metric on $X$ induced by the Riemannian metric on $S^1$, and let $d$ be a discrete metric on $X$. It is clear that $d$ and $d'$ induce the same topology on $X$. Let $g_1 :X\to X$ be a homeomorphism defined by $g(a_i)=a_{i+1}$ and $g_2:X\to X$ be defined by $g_2(a_i)= a_{i+2}$. Consider action $\varphi:G\times X\to X$ generated $g_1, g_2:X\to X$. Since the metric $d$ is discrete, one can see that $\varphi$ does have persistent shadowing property. By contradiction, let $\varphi$ has persistent shadowing property with respect to $ d' $. Hence it does have shadowing property with respect to $d'$.
For $\epsilon=\frac{1}{2}$ and $z\in X$, let $\delta>0$ be an $ \epsilon $-modulus of shadowing property of continuous action $\varphi$. Choose $k\in\mathbb{N}$ satisfying $d'(a_k, a_{-k})<\frac{\delta}{2}$, and consider a homeomorphisms $f_i:X\to X$, $i=1, 2$, given by
\[ f_1(a_i)= \left\lbrace
\begin{array}{c l}
a_{i+1}, & \text{\rm{ $ i\in\{-k, \ldots, k-1\}$}},\\ a_{-k}, & \text{\rm{$i=k$}},\\ a_i, & \text{\rm{otherwise}}.
\end{array} \right. \] and
\[ f_2(a_i)= \left\lbrace
\begin{array}{c l}
a_{i+2}, & \text{\rm{ $ i\in\{-k, \ldots, k-2\}$}},\\
a_{-k+1}, &\text{\rm{$i=k-1$}}\\ a_{-k}, & \text{\rm{$i=k$}},\\ a_i, & \text{\rm{otherwise}}.
\end{array} \right. \]
Since $d'(f_i(x), g_i (x))<\delta$ for all $x\in X$, hence if $\psi:G\times X\to X$ generated by $f_1, f_2$, then $d(\varphi, \psi)<\delta$ and for $x\in X$ there is $z\in X$ such that $d'(\varphi(g, z), \psi(g, x))<\epsilon$, this implies that $d'(g_{i}^n(z), f^n_i(y))<\epsilon$. Since $\{g_1^n(z), n\in \mathbb{Z}\}=X$, so we can find an integer $ n\in \mathbb{Z}$ such that $d'(g^n_1(z),f^n_1(y))\geq\epsilon$, which is a contradiction. Therefore $\varphi$ does have persistent shadowing property with respect to $ d $ but it does not have persistent shadowing property with respect to $ d' $. \end{example}
\subsection{The Action of Syndetic Subgroups}\label{s20}
Let $BS(1, n)=\langle a, b: ba = a^nb \rangle$ and $\varphi:G\times \mathbb{R}^2\to \mathbb{R}^2$ be generated by $f_a(x)=Ax$ and $f_b(x)=Bx$ where
\begin{equation} A=\left( \begin{array}{cc}
1 & 0 \\
1 & 1 \\ \end{array} \right) \;\& \;\; B=\left( \begin{array}{cc}
\lambda & 0 \\
0 & n\lambda \\ \end{array} \right) \end{equation} Then for $1<\lambda\leq n$ and $n>1$, $f_b$ has persistent shadowing property. In \cite{jung}, it is shown that the homeomorphism $f:X\to X$ does have persistent shadowing property if and only if it does have shadowing property and it is $\beta$-persistent. Hence if $H=\langle b\rangle$ is a subgroup of $B(1, n)$, then
$\varphi|H:H\times \mathbb{R}^2\to \mathbb{R}^2$ has persistent shadowing property while by \cite[Theorem 4.4(1)]{osipov}, $\varphi:G\times \mathbb{R}^2\to \mathbb{R}^2$ does not have shadowing property. In this subsection, we show that if $H$ is a syndetic subgroup of $G$, then situation is different.\\
Let $||g||_{S}$ denote the length of the shortest representation of the element $g$ in term of element from $S$. For continuous action $\varphi:G\times X\to X$ on compact metric space $(X,d)$, $\epsilon>0$ and $k\in\mathbb{N}$, there is $\delta>0$ such that for all $g\in G$ with $||g||_S<k$ \begin{align}\label{u} d(x, y)<\delta\Rightarrow d(\varphi(g, x), \varphi(g, y))<\frac{\epsilon}{k} \end{align} By triangle inequality, it can to see that the following lemma is true. \begin{lemma}\label{pseudo} Let $S$ be a finitely generating set of $G$ and $\varphi:G\times X\to X$ be a continuous action on compact metric space $(X,d)$. For $\epsilon>0$ and $N\in\mathbb{N}$, there is $\delta>0$ such that if $f:G\to X$ is a $\delta$-pseudo orbit, then for every $h\in G$ with
$||h||_S<N$ and every $g\in G$ we have $d(f(hg), \varphi(h, f(g)))<\epsilon$ \end{lemma} Subset $H\subseteq G$ is syndetic if there is finite set $F\subseteq G$ such that $G=FH$. Hence a subgroup $H$ is syndetic in $G$ if and only if it is finite index subgroup of $G$ i.e. there is finite set $\{g_i\}_{i=1}^n$ such that $G= \bigcup_{i=1}^n g_iH$. \begin{proposition}\label{syndetic} Let $H$ be a finite index subgroup of $G$. Then continuous action $\varphi:G\times X\to X$ has persistent shadowing property if $\varphi:H\times X\to X$ has persistent shadowing property. \end{proposition} \begin{proof} Let $H$ be finite index subgroup of $G$. Let $A$ be a symmetric finitely generating set of $H$. We can add more elements to $A$ to get a symmetric finitely generating set $S$ of $G$. Also let $G=
\bigcup_{i=1}^n g_iH$ and $N=\max \{ ||g_i||_S: 1\leq i \leq n\}$.
Let $\epsilon>0$ be given. Choose $\delta>0$ such that for all $g\in G$ with $||g||_S<N$ \begin{align}\label{k} d(x, y)<\delta\Rightarrow d(\varphi(g, x), \varphi(g, y))<\frac{\epsilon}{N} \end{align} Choose $\eta>0$ corresponding to $\delta>0$ and $N\in\mathbb{N}$ that satisfies Lemma\ref{pseudo}.\\
Let $\epsilon>0$ be given. By triangle inequality, it is easy to see that for $\epsilon>0$ and $N$ above, there is $\eta>0$
such that if $d_S(\varphi, \psi)<\eta$, and $d(a, b)<\eta$, then
\begin{equation}\label{b}
d(\varphi(g, a), \psi(g, b))<\frac{\epsilon}{N}, \forall g\in G \text{ with } ||g||_{S} < N.
\end{equation}
and
\begin{equation}\label{pl} d(\psi(g, a), \psi(g, b))<\frac{\epsilon}{N}, \text{ for all }
||g||_S<N.
\end{equation} Let $\varphi:H\times X\to X$ has persistent shadowing property, we show that $\varphi:G\times X\to X$ has persistent shadowing property. Let $\epsilon>0$ be given. Choose $\delta>0$ that satisfies Lemma \ref{pseudo} and Relation \ref{b}, Relation
\ref{pl}. Choose $\eta>0$ corresponding $\frac{\delta}{2}>0$ by definition of persistent shadowing property of $\varphi|H$. We show that every $\eta$-pseudo orbit $F:G\to X$ for continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\eta$ can $\psi$-shadowed by a point of $X$. The map $F:G\to X$ is $2\eta$-pseudo orbit for $\varphi:G\times X\to X$, hence
\begin{equation}\label{kl}
d(F(gh), \varphi(g, F(h)))<\frac{\epsilon}{N}, \text{ for all } g \in G \text{ with } ||g||_S<N.
\end{equation}
Since $F:H\to X$ is $\eta$-pseudo orbit for $\psi:H\times X\to X$ with $d_A(\varphi, \psi)<\eta$,
hence by persistent shadowing property of $\varphi|H$ there is $p\in X$ such that $d(F(h), \psi(h, p))<\frac{\delta}{2}$.
Also by Relation \ref{pl} we have $d(\varphi(g_i, F(h)), \psi(g_i, \psi(h, p)))<\frac{\epsilon}{N}$. Hence by Relation \ref{kl}, we have
$d(F(g_ih), \psi(g_ih, p))<\epsilon$ i.e. $d(F(g), \psi(g, p))<\epsilon$ for all $g\in G$.
\end{proof} \begin{remark}\label{224} Let $P$ be one of the following property: $(a)$ shadowing property, $(b)$ $\alpha$-persistent, $(c)$ $\beta$-persistent. Similar to proof of Proposition \ref{syndetic}, if $H\leq G$ is syndetic subset of $G$ and continuous action $\varphi:H\times X\to X$ does have $P$- property, then $\varphi:G\times X\to X$ has $P$-property. \end{remark} \subsection{ The Action Of Free Groups}\label{s21} Group $BS(1, n)$ is solvable group, hence it is not free group. In the case of finitely free group actions $G$ , by \cite[Theorem 4.9.]{osipov}, if $\varphi:G\times X\to X$ has shadowing property, then $\varphi_g:X\to X$ has shadowing property, for all $g\in G$. In the following, we extend it in the case of persistent shadowing property. \begin{proposition}\label{op2} Let $\varphi:G\times X\to X$ be a continuous action of free group $F_2=\langle a, b\rangle$ on compact metric space $(X, d)$.
If $\varphi:F_2\times X\to X$ has persistent shadowing property, then $\varphi_{a^{-1}b}:X\to X$ has persistent shadowing property.
\end{proposition} \begin{proof}
Let $\epsilon>0$ be given. Choose $\epsilon_0>0$ corresponding to $\epsilon>0$ by persistent shadowing property of $\varphi:G\times X\to X$. For $\epsilon_0>0$ there is $\delta>0$ such that for every continuous action $\psi:F_2\times X\to X$ that is $\delta$-close to $\varphi:G\times X\to X$, we have
\begin{equation*}
d(x, y)<\delta\Rightarrow d(\psi(g, x), \psi(g, y))<\epsilon, |g|_S\leq 2,
\end{equation*}
Let $\{x_n\}_{n\in\mathbb{Z}}$ be $\delta$ pseudo orbit of homeomorphism $f:X\to X$ wit $d(f, \varphi_{a^{-1}}\varphi_b)<\delta$.
It is easy to see that if $\psi:F_2\times X\to X$ is generated by $\psi_a=\varphi_a$ and $\psi_b= \varphi_a\circ f$, then $\psi:F_2\times X\to X$
is $\epsilon_0$- close to $\varphi:F_2\times X\to X$. Define $K: F_2\to X$ define by $K(t)= \psi(v, x_k)$ where $v\in G$ is an element with minimal
length such that $t=v(a^{-1}b)^k$ for some $k\in \mathbb{Z}$. It is not hard to see that $K:G\to X$ is $\epsilon_0$- pseudo orbit of continuous
action $\psi:F_2\times X\to X$ with $d(\varphi, \psi)<\epsilon_0$. By persistent shadowing property, there is $y\in X$ such that
$d(K(g), \psi(g, y))<\epsilon$ for all $g\in G$. Since $K((a^{-1}b)^k)= x_k$ and $\psi((a^{-1}b)^k, y)= f^k(y)$, we have $d(f^k(y), x_k)<\epsilon$ for
all $k\in \mathbb{Z}$.
\end{proof}
\begin{remark}\label{225} Let $P$ be one of the following property: $(a)$ shadowing property, $(b)$ $\alpha$-persistent, $(c)$ $\beta$-persistent. Similar to proof of Proposition \ref{op2}, we can show that if continuous action $\varphi:F_2\times X\to X$ of free group $F_2=\langle a, b\rangle$ on compact metric space $(X, d)$ has $P$- property, then $\varphi_{a^{-1}b}:X\to X$ has $P$- property. \end{remark} \subsection{Persistent shadowing property and related measure}\label{s22}
For continuous action $\varphi:G\times X\to X$, $\epsilon>0$, $\delta>0$ and generating set $S$, we denote $PSh_{\varphi, S}(\delta, \epsilon)$ the set of all $x\in X$ such that every $\delta$-pseudo orbit $f:G\to X$ of continuous action $\psi$ with $d_S(\varphi, \psi)<\delta$ and $f(e)=x$, respect to generating set $S$, can be $(\epsilon, \psi)$-shadowed by a point in $X$.
It is clear that \begin{enumerate} \item Let $S, S'$ be generating sets for $G$ and $\epsilon>0$ be given. Then for every $\delta>0$ there is $\eta>0$ such that $PSh_{\varphi, S}(\eta, \epsilon)\subseteq PSh_{\varphi, S'}(\delta, \epsilon)$
\item Continuous action $\varphi:G\times X\to X$ does have persistent shadowing property with respect to generating set $S$ if and only if for every $\epsilon>0$ there is $\delta>0$ such that $PSh_{\varphi, S}(\delta, \epsilon)=X$.
\item If continuous action $\varphi:G\times X\to X$ does have persistent shadowing property on compact set $K\subseteq X$, then for every $\epsilon>0$ there exist a neighborhood $U$ of $K$ and $\delta>0$ such that $U\subseteq PSh_{\varphi, S}(\delta, \epsilon)$.
\item $PSh_{\varphi, S}(\delta, \epsilon)$ is a closed subset of $X$. \end{enumerate} The Borel $\sigma$-algebra of $X$ is the $\sigma$-algebra $\mathcal{B}(X)$ generated by the open subsets of $X$.
A Borel probability measure is a $\sigma$-additive measure $\mu$ defined in $\mathcal{B}(X)$ such that $\mu(X)=1$.
We denote by $\mathcal{M}(X)$ the set of all Borel probability measures of $X$. This set is convex and compact metrizable if it endowed with weak$^*$ topology:
the one ruled by the convergence $\mu_n\to \mu$ if and only if $\int f d\mu_n\to \int f d\mu$ for every continuous map $f:X\to \mathbb{R}$.\\ \begin{definition}\label{plr}
A measure $\mu\in\mathcal{M}(X)$ is compatible with the persistent shadowing property for the continuous action $\varphi:G\times X\to X$, $\mu\in \mathcal{M}_{PSh}(X, \varphi)$ if for every $\epsilon>0$ there is $\delta>0$ such that if $\mu(A)>0$, then \begin{equation*} A\cap Sh_{\psi}(\delta, \epsilon)\neq \emptyset. \end{equation*} for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi,\psi)<\delta$.
\end{definition}
\begin{example}\label{exam1} Let continuous action $\varphi:G\times X\to X$ admits an $\varphi$-invariant measure and let $\varphi$ does have persistent shadowing property on the non-wandering set $\Omega(\varphi)$. Then every $\varphi$-invariant Borel probability measure $\mu$ on $X$ is compatible with the property of persistent shadowing property.
\end{example}
Let $\mu\in\mathcal{M}_{PSh}(X, \varphi)$ , $\epsilon>0$ and $h:(X, d)\to (Y, \rho)$ be a homeomorphism. We will show that there is $\delta>0$ such that \begin{equation} h_{*}(\mu)(B)>0 \Rightarrow B\cap PSh_{h\circ \varphi \circ h^{-1}}(\delta, \epsilon)\neq \emptyset. \end{equation} For $\epsilon>0$ there is $\epsilon_0>0$ such that \begin{equation}\label{rty} d(a, b)<\epsilon_0\Rightarrow \rho(h(a), h(b))<\epsilon. \end{equation} For $\epsilon_0>0$ there is $\epsilon_1>0$ in definition of $\mu\in\mathcal{M}_{PSh}(X, \varphi)$. Since $\mu(h^{-1}(B))>0$, hence $h^{-1}(B)\cap \mathcal{M}_{PSh}(\epsilon_1, \epsilon_0)\neq \emptyset$. For $\epsilon_1>0$ there is $\delta>0$ such that \begin{equation} \rho(c, d)<\delta\Rightarrow d(h^{-1}(c), h^{-1}(d))<\epsilon_1. \end{equation}
Fix $x\in h^{-1}(B)\cap PSh_\varphi(\epsilon_1, \epsilon_0)$. One can check that if
$F':G\to Y$ be $\delta$-pseudo orbit of continuous action
$\psi':G\times Y\to Y$ with $\rho_S(f\circ \varphi\circ f^{-1},
\psi')<\delta$ and $F'(e)=h(x)$, then $F:G\to X$ defined by
$F(g)=h^{-1}(F'(g))$ is $\epsilon_1$-pseudo orbit of continuous
action $h^{-1}\circ \psi'\circ h $ with $d_S(\varphi,
h^{-1}\circ \psi'\circ h)<\epsilon_1$ and $F(e)=x$. By $x\in h^{-1}(B)\cap PSh_\varphi(\epsilon_1,
\epsilon_0)$, there is $p\in X$ such that $d(F(g), h^{-1}\circ
\psi'_g\circ h(p))<\epsilon$ for all $g\in G$. By relation
\ref{rty}, we have $d(h\circ F(g), \psi'_g(h(p)))<\epsilon$ i.e.
$d(F'(g), \psi'_g(h(p)))<\epsilon$. This means that $h(x)\in B\cap
PSh_{h\circ \varphi \circ h^{-1}}(\delta, \epsilon)$. \begin{itemize} \item If $h:(X, d)\to(Y, \rho)$ is a homeomorphism and $\mu\in \mathcal{M}_{PSh}(X, \varphi)$, then $h_{*}(\mu)\in \mathcal{M}_{PSh}(Y, h\circ \varphi\circ h^{-1})$, \item If $G$ is an abelian group, then $\mathcal{M}_{PSh}(X, \varphi)$ is $(\varphi_g)_*$-invariant, for all $g\in G$. \end{itemize} It is known that every continuous action of a countable abelian group on compact metric space admit an $\varphi$-invariant measure. Also, One can check that $\mathcal{M}_{PSh}(X, \varphi)$ is a convex subset of $\mathcal{M}(X)$, hence by Proposition 2.12 in \cite{ali2}, the following item does hold. \begin{itemize} \item If $G$ is an abelian group, then $\overline{\mathcal{M}_{PSh}(X, \varphi)}$ contains a $\varphi$-invariant measure. \end{itemize} It is known that a group action $\varphi:G\times X\to X$ on the compact metric space $X$ has a $\varphi$- invariant Borel probability measure on $X$ if and only if $G$ is amenable, see \cite{1}. Hence, the following group action where $G=SL(2, \mathbb{Z})$ and $X=\mathbb{R}\cup\{\infty\}$ does not contan any invarant meausre.\\
\begin{equation*}
\varphi
(\left(
\begin{array}{ll}
a & b\\
c & d
\end{array}
\right),
z)=\frac{az+b}{cz+d}. \end{equation*}
Take $$\mathcal{C}_{PSh(\varphi)}(\delta,
\epsilon)=\{\mu\in\mathcal{M}(X): \mu(PSh_\varphi(\delta,
\epsilon))=1\}.$$ It is easy to see that $\mathcal{C}_{PSh(\varphi)}(\delta,
\epsilon)$ is a convex and closed subset of $\mathcal{M}(X)$ for
every $\epsilon>0$ and any $\delta>0$.
\begin{proposition}\label{kj}
Let $\varphi:G\times X\to X$ be a continuous action. Then
\begin{enumerate}
\item $ \mathcal{M}_{PSh}(X, \varphi)=\bigcap_{n\in\mathbb{N}}(\bigcup_{m\in\mathbb{N}}(\bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}(n^{-1}+l^{-1}, m^{-1})))$
\item The subset $\mathcal{M}_{PSh}(X, \varphi)$ is an
$F_{\sigma\delta}$ subset of $\mathcal{M}(X)$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item Fix a $\mu\in\mathcal{M}_{PSh}(X, \varphi)$ and an $n\in\mathbb{N}$. Choose a $\delta>0$ such that if $\mu(A)>0$ then $A\cap PSh_\varphi(\delta, \frac{1}{n})\neq \emptyset$.Choose $m\in\mathbb{N}$ such that $m^{-1}<\delta$. Note that if $A\cap PSh_\varphi(\delta, \frac{1}{n})\neq \emptyset$, then $A\cap PSh_\varphi(\frac{1}{m}, \frac{1}{n})\neq \emptyset$. This implies that $\mu\in\bigcup_{m\in\mathbb{N}}(\bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}( m^{-1}, n^{-1}+l^{-1})))$. Conversely choose a $\mu\in\bigcap_{n\in\mathbb{N}}(\bigcup_{m\in\mathbb{N}}(\bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}( m^{-1}, n^{-1}+l^{-1})))$. Thus, for every $n\in\mathbb{N}$, there is $k\in\mathbb{N}$ such that $\mu\in \bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}(\frac{1}{k}, \frac{1}{n}+ \frac{1}{l})$. This means that for every $n\in\mathbb{N}$ there is $k\in\mathbb{N}$ such that $\mu\in \bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}(\frac{1}{k}, \frac{1}{n}+ \frac{1}{l})$. This implies that for every $\epsilon>0$ there exist $N, K\in\mathbb{N}$ such that $(\frac{1}{N}+\frac{1}{K})<\epsilon$ and $\mu\in C_{PSh(\varphi)}(\frac{1}{N}+\frac{1}{K}, \frac{1}{K})$. Therefor, for every $\epsilon>0$ choose $\delta=\frac{1}{K}$ to conclude then $\mu\in\mathcal{M}_{PSh(\varphi)}(X)$. \item Since $C_{PSh(\varphi)}( m^{-1}, n^{-1}+l^{-1})))$ is a closed subset of $\mathcal{M}(X)$ and a countable intersection of closed sets is closed, hence we can say that $(\bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}( m^{-1}, n^{-1}+l^{-1})))$ is a closed subset of $\mathcal{M}(X)$ for every pair $m, n\in\mathbb{N}$. Therefor $\bigcup_{m\in\mathbb{N}}(\bigcap_{l\in\mathbb{N}}C_{PSh(\varphi)}( m^{-1}, n^{-1}+l^{-1})))$ is an $F_{\sigma\delta}$ subset of $\mathcal{M}(X)$, for every $n\in\mathbb{N}$ and hence by item (1), $\mathcal{M}_{PSh}(X,\varphi)$ is an $F_{\sigma\delta}$ subset of $\mathcal{M}(X)$. \end{enumerate}
\end{proof}
\begin{proposition}\label{pki}
Let $\varphi:G\times X\to X$ be a continuous action on compact metric space $(X, d)$ and $\mu\in\mathcal{M}_{PSh}(\varphi)$.
Then $\varphi$ has persistent shadowing property on $supp(\mu)$. \end{proposition} \begin{proof}
Let $\epsilon>0$ be given. Choose $0<\delta<\frac\epsilon2$ such that for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ we have
\begin{equation*}
\mu(A)>0\Rightarrow A\cap Sh_{\delta, \frac\epsilon2}(\psi)\neq \emptyset \end{equation*}
For $\delta>0$ there is $0<\eta<\frac\delta2$ such that for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\eta$ we have \begin{equation*}
d(a, b)<\eta\Rightarrow d(\psi_s(a), \psi_s(b))<\frac\delta2, \forall s\in S, \end{equation*} We claim that if $d_S(\varphi, \psi)<\delta$ and $F:G\to X$ is an $\eta$-pseudo orbit for $\psi:G\times X\to X$ with $F(e)=p\in supp(\mu)$, then there is $z\in X$ such that $d(F(g), \psi(g, z))<\epsilon$ for all $g\in G$. \\
By $p\in supp(\mu)$ we have $\mu(B_\eta(p))>0$, this implies that $B_\eta(p)\cap S_{\delta, \epsilon}(\psi)\neq \emptyset$ for $\psi:G\times X\to X$. Take $q\in B_\eta(p)\cap Sh_{\delta, \epsilon}(\psi)$ and define $f:G\to X$ by $f(e)=q$ and $f(g)=F(g)$ for all $g\neq e$. It is easy to see that $f:G\to X$ is a $\delta$-pseudo orbit of $\psi$. By $f(e)\in Sh_{\delta, \epsilon}(\psi)$, there is $y\in X$ with $d(f(g), \varphi(g, y))<\frac\epsilon2$ for all $g\in G$. This implies that $d(F(g), \psi(g, y))<\epsilon$ for all $g\in G$.
\end{proof}
It is known that the set of Borel probability measures of a compact metric space $X$ with support equals to $X$ is a dense $G_\delta$ subset of $\mathcal{M}(X)$, see \cite[Lemma 3.6]{baut}, also if $X$ has no isolated point, then the non-atomic Borel probability measures is a dense $G_\delta$ subset of $\mathcal{M}(X)$ see \cite[Corollary 8.2]{par}, thus since $X$ is a compact space, we can say that if $X$ is a compact space without isolated point, then the set of non-atomic Borel probability measures with support equals to $X$ is dense in $\mathcal{M}(X)$. Hence by Proposition \ref{pki}, we have \begin{corollary}
Let $\varphi:G\times X\to X$ be a continuous action of a compact metric space $X$ without isolated point.If every non-atomic Borel probability measure $\mu$ is compatible with persistent shadowing property for $\varphi:G\times X\to X$, then $\varphi$ has persistent shadowing property. \end{corollary} For continuous actions $\varphi, \psi:G\times X\to X$, and $x\in X$, we denote
\begin{equation*} \Gamma_\epsilon^{\varphi,\psi}(x)= \bigcap_{g\in G}\varphi (g^{-1}, B[\psi(g, x), \epsilon])=\{y\in X: d(\varphi(g, y), \psi(g, x))\leq \epsilon \text{ for every } g\in G\} \end{equation*} and \begin{equation*} B(\epsilon, \varphi, \psi)=\{x\in X: \Gamma_\epsilon^{\varphi, \psi}(x)\neq \emptyset\}. \end{equation*} It is easy to see that $B(\epsilon, \varphi, \psi)$ is a compact set in $X$.
We say that \begin{enumerate} \item A measure $\mu\in\mathcal{M}(X)$ is compatible with the shadowing property for the continuous action $\varphi:G\times X\to X$, $\mu\in \mathcal{M}_{Sh}(X, \varphi)$ if for every $\epsilon>0$ there is $\delta>0$ such that if $\mu(A)>0$, then \begin{equation*} A\cap Sh_{\varphi}(\delta, \epsilon)\neq \emptyset. \end{equation*} \item (\cite{ali2}) A measure $\mu\in\mathcal{M}(X)$ is compatible with the $\alpha$-persistent for the continuous action $\varphi:G\times X\to X$, $\mu\in \mathcal{M}_\alpha(X, \varphi)$, if for every $\epsilon>0$ there is $\delta>0$ such that if $\mu(A)>0$, then \begin{equation*} A\cap B(\epsilon, \varphi, \psi)\neq \emptyset. \end{equation*} for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi,\psi)<\delta$.
\item A measure $\mu\in\mathcal{M}(X)$ is compatible with the $\beta$-persistent for the continuous action $\varphi:G\times X\to X$, $\mu\in\mathcal{M}_\beta(X, \varphi)$, if for every $\epsilon>0$ there is $\delta>0$ such that if $\mu(A)>0$, then \begin{equation*} A\cap B(\epsilon, \psi, \varphi)\neq \emptyset. \end{equation*} for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi,\psi)<\delta$. \end{enumerate}
\begin{remark}\label{pkii} With similar proof of Proposition \ref{kj}, one can check that $\mathcal{M}_{Sh}(X, \varphi)$, $\mathcal{M}_\alpha(X, \varphi)$ and $\mathcal{M}_\beta(X, \varphi)$ are $F_{\sigma\delta}$ subsets of $\mathcal{M}(X)$. Also for homeomorphism $h:(X, d)\to (Y,\rho)$, if $\mu\in \mathcal{M}_{Sh}(X, \varphi)$, $\mu\in \mathcal{M}_{\alpha}(X, \varphi)$ and $\mu\in \mathcal{M}_{\beta}(X, \varphi)$, then $f_{*}(\mu)\in\mathcal{M}_{Sh}(Y, h\circ \varphi\circ h^{-1})$, $f_{*}(\mu)\in\mathcal{M}_\alpha(Y, h\circ \varphi\circ h^{-1})$, $f_{*}(\mu)\in\mathcal{M}_\beta(Y, h\circ \varphi\circ h^{-1})$, respectively. With similar technics in Proposition \ref{pki}, we can show that if $\mu\in \mathcal{M}_{Sh}(X, \varphi)$, $\mu\in \mathcal{M}_{\alpha}(X, \varphi)$ and $\mu\in \mathcal{M}_{\beta}(X, \varphi)$, then $\varphi:G\times X\to X$ does have shadowing property, $\alpha$-persistent and $\beta$-persistent on $supp(\mu)$, respectively. \end{remark}
\section{Pointwise dynamic} In this section we introduce persistent shadowable points, uniformly $\alpha$-persistent, $\beta$-persistent points for a continuous action $\varphi$. Also, we recall notions of shadowable points, $\alpha$-persistent points and $\beta$-persistent points for continuous action $\varphi:G\times X\to X$. This section consists of 3-subsection. In Subsection \ref{3001}, we study relations between various of shadowable points. In Subsection \ref{400}, we study the set of persistent shadowable points and we give some properties of it. Finally, in Subsection \ref{401}, we study the relation between compatibility of a measure with respect $P$-property and measure of points in $X$ with $P$-property, where $P$-property can be persistent shadowing property, shadowing property, $\alpha$-persistent and $\beta$-persistent.
\subsection{Relation between various shadowable points}\label{3001} \begin{definition} Let $S$ be a finitely generating set of $G$ and $\varphi:G\times X\to X$ be a continuous action. \begin{enumerate} \item (\cite{sang}) A point $x\in X$ is called shadowable point for $G$- action $\varphi:G\times X\to X$, if for every $\epsilon>0$ there is $\delta=\delta(\epsilon,x)>0$ such that for every $\delta$- pseudo orbit $f:G\to X$ with $f(e)=x$ there is $p\in X$ such that $d(f(g), \varphi(g, p))<\epsilon$ for all $g\in G$. \item A point $x\in X$ is $\alpha$-persistence ( uniformly $\alpha$-persistent) for a continuous action $\varphi:G\times X\to X$ if for every $\epsilon>0$ there is $\delta_x>0$ such that for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ ( and every $x'\in B_\delta(x)$) there is $y\in X$ such that $d(\varphi(g, y), \psi(g, x))<\epsilon$ ( resp. $d(\varphi(g, y), \psi(g, x'))<\epsilon$ ) for all $g\in G$. Hereafter $Perssis_\alpha(\varphi)$ and $UPersis_\alpha(\varphi)$ will denote the set of all $\alpha$-persistence points and uniformly $\alpha$-persistent points of $\varphi$, respectively.
\item A point $x\in X$ is $\beta$-persistence ( uniformly $\beta$-persistent) for a continuous action $\varphi:G\times X\to X$ if for every $\epsilon>0$ there is $\delta_x>0$ such that for every continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ ( and every $x'\in B_\delta(x)$) there is $y\in X$ such that $d(\varphi(g, x), \psi(g, y))<\epsilon$ ( resp. $d(\varphi(g, x'), \psi(g, y))<\epsilon$) for all $g\in G$. Hereafter $Perssis_\beta(\varphi)$ and $UPersis_\beta(\varphi)$ will denote the set of all $\beta$-persistence points and uniformly $\beta$-persistent points of $\varphi$, respectively. \item A point $x\in X$ is called persistent shadowable point for $\varphi:G\times X\to X$, $x\in PSh(\varphi)$, if for every $\epsilon>0$ there is $\delta>0$ such that every $\delta$-pseudo orbit $f:G\to X$ for $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ and $f(e)=x$ can be $(\psi, \epsilon)$-shadowed by a point.
\end{enumerate}
\end{definition}
It is easy to see that $PSh(\varphi)\subseteq Sh(\varphi)\subseteq UPersis_\alpha(\varphi\subseteq Persis_\alpha(\varphi)$ and $PSh(\varphi)\subseteq UPersis_\beta(\varphi)\subseteq Persis_\beta(\varphi)$. The following example shows that the converse of inclusions need not be hold.
\begin{example}\label{non-shadowable}
\begin{enumerate}
\item $Persis_\alpha\neq UPersis_\alpha(\varphi)$. Let $X=\mathbb{S}^1\cup\{(x, 0): -1\leq x\leq 1\}$. Since $(-1, 0)$ is a fixed point of every continuous action, then $(1, 0)\in Persis_\alpha(\varphi)$ where $\varphi:F_2\times X\to X$ is defined by $\varphi(g, x)=x$. We claim that $(1, 0)\in Persis_\alpha(\varphi)- UPersis_\alpha(\varphi)$. By contradiction, we assume that $(1, 0)\in UPersis_\alpha(\varphi)$. For $\epsilon>0$ there is $\delta>0$ by $(1, 0)\in UPersis_\alpha(\varphi)$.
Let $s_1:[-1,1]\to\mathbb{R}:x\mapsto\frac{\delta}{4}(1-|x|)$ and $s_2:[-1,1]\to\mathbb{R}:x\mapsto\frac{\delta}{6}(1-|x|)$ then $-1\le x-s_i(x)\le x+s_i(x)\le 1$ for any $x\in[-1,1]$, and $s_i(-1)=s_i(1)=0$, for $i=1, 2$. Define
$$g_i:X\to X:\langle x,y\rangle\mapsto\begin{cases}
\langle x-s_i(x),0\rangle,&\text{if }y=0\\
\left\langle x,y\right\rangle,&\text{otherwise}
\end{cases}$$
For any $(x, y)\in \{(x, 0): -1<x<1 \}$, the first coordinate of $g_i((x, y))$ is less than $x$.
Assume that $\psi:F_2\times X\to X$ is generated by $\varphi_a=g_1$ and $\varphi_b=g_2$. Then $d_S(\varphi, \psi)<\delta$, hence for
$y\in (-1, 1)\times \{0\}$ with $d(y, (1, 0))<\delta$, there is $p\in X$ with $d(p, \psi(g, y))<\epsilon$ for all $g\in G$ that is a contradiction, because $g^k_i(y)\to (-1, 0)$ as $k\to \infty$.
\item By \cite[Remark 4.4]{art1}, there is system $(X, f)$ such that $f:X\to X$ is $\alpha$-persistent while it does not shadowing property. Hence $UPersis_\alpha(f)=X$ but $Sh(f)\neq X$.
\end{enumerate}
\end{example}
A point $x\in X$ is an equicontinuous point for continuous action $\varphi:G\times X\to X$ if for every $\epsilon>0$ there is $\delta_x>0$ such that \begin{equation} d(x, y)<\delta_x\Rightarrow d(\varphi(g, x), \varphi(g, y))<\epsilon, \forall g\in G \end{equation} The set of equicontinuous points for $\varphi:G\times X\to X$ is denoted by $Eq(\varphi)$. It is easy to see that \begin{equation} Persis_\beta(\varphi)\cap Eq(\varphi)\subseteq UPersis_\beta(\varphi) \end{equation} and
\begin{equation}\label{268} Persis_\alpha(\varphi)\cap Eq(\varphi)\subseteq Persis_\beta(\varphi) \end{equation}
Let $\varphi:G\times X\to X$ be continuous action as in Example\ref{non-shadowable}. Then $\varphi$ is equcontinuous action and $(1, 0)\in Persis_\beta(\varphi)=UPersis_\beta(\varphi)$ while by Example \ref{non-shadowable}, $(1, 0)\notin UPersis_\alpha(\varphi)$. This implies that $UPersis_\beta(\varphi)\neq UPersis_\alpha(\varphi)$ and the converse of Relation \ref{268} does not hold.\\ In the following, we show that on compact manifold $M$ without boundary with $dim(M)\geq 2$, notions of shadowable point, uniform $\alpha$-persistent point and $\alpha$-persistent point are equivalent. It is not hard to see that $x$ is shadowable point if and only if it is finite shadowable point. We say that $x\in X$ is finite shadowable point, if for every $\epsilon>0$ there is $\delta>0$ such that for every $n\in \mathbb{N}$, every $\delta-n$-
pseudo orbit $f:G_n\to X$ with $f(e)=x$ can be $\epsilon$-shadowed by point $p\in X$. Note that $G_n=\{g\in G: |g|_S\leq n\}$.
\begin{definition} We say that the space $X$ is generalized homogeneous, if for every $\epsilon>0$ there exists $\delta > 0$ such that if $\{(x_1, y_1),\ldots , (x_n, y_n)\}$ is a finite set of points in $X\times X$ satisfying: \begin{enumerate}
\item for every $i=1,\ldots,n, d(x_i,y_i)<\delta$,
\item if $i\neq j$ then $x_i\neq x_j$ and $y_i\neq y_j$, \end{enumerate} then there is a homeomorphism $h:X\rightarrow X$ with $d_0(h, id)<\epsilon$ and $h(x_i)=y_i$ for $i=1,\ldots,n$. \label{homogen} \end{definition} For example, a topological manifold $X$ without boundary ($dim(X)\geq 2$), a Cartesian product of a countably infinite number of manifolds with nonempty boundary and a cantor set are generalized homogeneous \cite{pil}. \begin{proposition}\label{u} Let $X$ be a generalized homogeneous compact metric space and $\varphi:G\times X\rightarrow X$ be a continuous action. Then $Sh(\varphi)= Persis_\alpha(\varphi)$. \end{proposition} \begin{proof}
It is clear that $Sh(\varphi)\subseteq Persi_\alpha(\varphi)$. Let $x\in Persis_\alpha(\varphi)$ and $\epsilon$ be given. W claim that there is $\delta>0$ such that for every $n\in \mathbb{N}$, every $\delta-n$- pseudo orbit $f:G_n\to X$ with $f(e)=x$ can be $\epsilon$-shadowed by point $p\in X$. Choose $0<\epsilon_0<\frac{\epsilon}{2}$ corresponding to $\epsilon>0$ by $x\in Persis_\alpha(\varphi)$. Choose $0<\delta_0<\frac{\epsilon_0}{2}$ corresponding to $\epsilon_0>0$ by Definition \ref{homogen}. For $\delta_0>0$ there is $0<\delta<\frac{\delta_0}{2}$ such that
\begin{equation*}
d(a, b)<\delta\Rightarrow d(\varphi(g, a), \varphi(g, b))<\delta_0, \forall |g|_S\leq 2.
\end{equation*}
Let $f:G_n\to X$ be a $\delta-n$pseudo orbit with $f(e)=x$. Similar proof of Lemma 2.1.2 in \cite{pil}, we can construct $\delta_0$-pseudo orbit $F:G_n\to X$ with the following property:
\begin{itemize}
\item $F(e)=x$,
\item $\varphi(s, F(g))\neq F(sg), \forall s\in S, \forall |g|_S<n$,
\item $d(F(g),f(g))<\delta_0, \forall g\in G_n$
\end{itemize}
By Definition \ref{homogen}, for $\{ (\varphi(s, F(g)), F(sg)); g\in G_n\}$, there is a homeomorphism $h: X\to X$ with $d_0(h, id)<\epsilon_0$ such that $h(\varphi(s, F(g)))= F(sg)$. Let $\psi:G\times X\to X$ be generated by $h\circ \varphi_s$ for $s\in S$. Then $\psi:G\times X\to X$ is $\epsilon_0$-close to $\psi:G\times X\to X$. This implies that there is $p\in X$ with $d(\psi(g, x), \varphi(g, p))<\epsilon$ for all $g\in G$. But $\psi(g, x)=F(g)$, hence $d(f(g), \varphi(g, p))\leq d(f(g), F(g))+ d(F(g), \varphi(g, p))<\epsilon$ for all $g\in G$. \end{proof}
By Theorem 3.5 in \cite{sang}, a continuous action $\varphi:G\times X\to X$ on compact metric space $(X, d)$ has shadowing property if and only if it is pointwise shadowable point. Hence by Proposition \ref{u}, we have
\begin{corollary}
If $\varphi:G\times X\to X$ is a continuous action of finitely generated group $G$ on generalized homogeneous compact metric space $(X, d)$, then the following equivalent:
\begin{enumerate}
\item $\varphi$ has shadowing property,
\item $\varphi$ is pointwise shadowable point,
\item $\varphi$ is pointwise $\alpha$-persistent,
\item $\varphi$ is $\alpha$-persistent.
\end{enumerate}
\end{corollary} Let $X$ be a compact metric space. It is easy to see that the following properties hold. \begin{enumerate}
\item Continuous action $\varphi:G\times X\to X$ is
$\alpha$-persistent if and only if $UPersis_\alpha(\varphi)=X$
\item Continuous action $\varphi:G\times X\to X$ is
$\beta$-persistent if and only if $UPersis_\beta(\varphi)=X$
\item Equicontinuous action $\varphi:G\times X\to X$ is
$\beta$-persistent if and only if $Persis_\beta(\varphi)=X$.
\end{enumerate}
\subsection{Some properties of persistent shadowable points}\label{400}
In the following, we give some properties of persistent shadowable points.
\begin{theorem}\label{wok}
Let $S$ be a finitely generating set of $G$ and $\varphi:G\times X\to X$ be a continuous action on compact metric space $(X, d)$.
\begin{enumerate}
\item The point $x=p$ is persistent shadowable if and only if for every $\epsilon>0$ there is $\delta>0$ such that every $\delta$-pseudo orbit through $B[p, \delta]$ of a continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ can be shadowed by a $\psi$-orbit.
\item Continuous action $\varphi:G\times X\to X$ has persistent shadowing property on compact set $K$ if and only if $K\subseteq PSh(\varphi)$.
\item Continuous action $\varphi:G\times X\to X$ has the persistent shadowing property if and only if it is pointwise persistent shadowable.
\item $PSh(\varphi)= UPersis_\beta(\varphi)\cap Sh(\varphi)$.
\item Continuous action $\varphi:G\times X\to X$ has persistent shadowing property if and only if it is $\beta$-persistent and it has shadowing property.
\end{enumerate} \end{theorem} \begin{proof}
\begin{enumerate}
\item Suppose by contradiction that $x=p$ is persistent shadowable point but there are $\epsilon>0$, a sequence of continuous actions $\psi_k:G\times X\to X$ with $d_S(\varphi, \psi_k)\leq \frac{1}{k}$, a sequence of $\frac{1}{k}$-pseudo orbits $f^k:G\to X$ of $\psi_k:G\times X\to X$ with $d(f^k(e), x)\leq \frac{1}{k}$ such that $f^k:G\to X$ can not be $2\epsilon$-shadowed by any $\psi_k$-orbit, for every $k\in \mathbb{N}$. For this $\epsilon$ we let $\delta$ be given by the persistently shadowableness of $x$. We can assume that $\delta<\epsilon$. \\
On the one hand,
\begin{align*}
d(\psi_k(s, p), \psi_k(s, f^k(e)) & \leq d(\psi_k(s,p), \varphi(s, p))+ d(\varphi(s, p), \varphi(s, f^k(e)))+ d(\varphi(s, f^k(e)), \psi_k(s, f^k(e)))\\
& \leq 2d_s(\varphi, \psi_k)+ d(\varphi(s, p), \varphi(s, f^k(e)))
\end{align*}
We can choose $k$ large satisfying
\begin{equation}\label{301}
\max \{d(\psi_k(s, p), \psi_k(s, f^k(e)), \frac{1}{k}\}<\frac{\delta}{2}, \forall s\in S
\end{equation}
Let us define $F^k:G\to X$ by
\[ F^k(g)= \left\lbrace
\begin{array}{c l}
f^k(g), & \text{\rm{ $ g\neq e$}},\\ e, & \text{\rm{$g=e$}}.
\end{array} \right. \]
Then
\[ d(\psi_k(s, F^k(g)), F^k(sg))= \left\lbrace
\begin{array}{c l}
d(\psi_k(s, f^k(g)), f^k(sg), & \text{\rm{ $ \text{ for } g\notin\{e, s^{-1}: s\in S\}$}},\\ d(\psi_k(s, f^k(g)), p), & \text{\rm{$ \text{ for } g=s^{-1}$}},\\ d(\psi_k(s, p), f^k(s)), & \text{\rm{$ \text{ for } g=e$}}.
\end{array} \right. \]
Since $f^k:G\to X$ is $\frac{1}{k}$-pseudo orbit of $\psi_k:G\times X\to X$, hence \begin{equation}\label{zx}
d(\psi_k(s, F^k(g)), F^k(sg))<\frac{1}{k}<\delta, \text{ for } g\notin \{e, s^{-1}: s\in S\} \end{equation} Also inequality \begin{equation*}
d(\psi_k(s, f^k(s^{-1})), p)\leq d(\psi_k(s, f^{k}(s^{-1})), f^{k}(e))+d(f^k(e), p), \end{equation*} implies that
\begin{equation}\label{zc}
d(\psi_k(s, F^k(g)), F^k(sg))<\delta, \text{ for } g=s^{-1}. \end{equation} By Relation \ref{301} and Relation \ref{zc} \begin{equation*}
d(\psi_k(s, p), f^k(s))\leq d(\psi_k(s, p)+\psi_k(s, f^k(e)))+ d(\psi_k(s, f^k(e)), f^k(s)) \end{equation*} we have \begin{equation}\label{zn}
d(\psi_k(s, F^k(g)), F^k(sg))<\delta, \text{ for } g=e \end{equation} Hence by Relations \ref{zx},\ref{zc}, \ref{zn}, we get $F^k:G\to X$ is $\delta$-pseudo orbit of $\psi_k:G\times X\to X$. But $d_S(\varphi, \psi_k)<\delta$, hence , by persistent shadowing property of $\varphi$, for $\delta$-pseudo orbit $F^k:G\to X$ with $F(e)=p$ of continuous action $\psi_k$ with $d_S(\varphi, \psi_k)<\delta$, there is $z\in X$ such that $d(F^k(g), \psi_k(g, z))<\epsilon$ for all $g\in G$. Since for $g\neq e$ one has $d(f^k(g), \psi_k(g, z))=d(F^k(g), \psi_k(g, z))<\epsilon$ and for $g=e$ \begin{equation*}
d(z, f^k(e))\leq d(z, p)+ d(p, f^k(e))<\epsilon+\frac{1}{k}<2\epsilon, \end{equation*} Hence $f^k:G\to X$ can be $2\epsilon$- shadowed by $\psi_k$-orbit of $z\in X$. That is a contradiction. \item It is sufficient to show that if $K\subseteq PSh(\varphi)$, then $\varphi$ has persistent shadowing property on $K$. Let $\epsilon>0$ be given. For every $x\in X$ there is $\delta_x>0$ corresponding to $\epsilon>0$ by item (2). Since $K$ is a compact space, the open cover $\{B[x, \delta_x]:x\in K\}$ has a finite open subcover $\{B[x_i, r_{x_i}]: i=1, 2, \ldots, n\}$.\\
Take $\delta=\min \{ \delta_{x_i}: i\in \{1, 2, \ldots, n\}\}$ and let $F:G\to X$ be a $\delta$-pseudo orbit of continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$ and $F(e)\in K$. Clearly $F(e)\in B[x_i, \delta_{x_i}]$ for some $1\leq i\leq n$. This implies that $F:G\to X$ is $\delta_{x_i}$-pseudo orbit through $B[x_i, \delta_{x_i}]$. Then $F:G\to X$ can be eventually $\epsilon$-shadowed by some $\psi$-orbit.
\item Take $K = X$ in item (3), we have that $\varphi$ has the persistent shadowing property if and only if $PSh(\varphi)=X$.
\item Firstly, we show that $PSh(\varphi)\subseteq UPersis_\beta(\varphi)\cap Sh(\varphi)$. Take $x\in PSh(\varphi)$, $\epsilon>0$. Choose $\delta_0>0$ corresponding $\frac\epsilon2>0$ by $x\in PSh(\varphi)$. Choose $\delta<\frac{\delta_0}{2}$ such that
\begin{equation*}
d(a, b)<\delta\Rightarrow d(\varphi(s, a), \varphi(s, b))<\frac{\delta_0}{2}.
\end{equation*}
Fix a continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$. For $y\in B_\delta(x)$, define
$F:G\to X$ by $F(g)=\varphi(g, y)$ if $g\neq e$ and $F(e)=x$, then $F:G\to X$ is a $\delta$-pseudo orbit of continuous action $\psi:G\times X\to X$ with $F(e)=x$. By $x\in PSh(\varphi)$, there is $p\in X$ such that $d(F(g), \psi(g, p))<\frac\epsilon2$ for every $g\in G$. This implies that $d(\varphi(g, y), \psi(g, p))<\epsilon$ for all $g\in G$. It follows that $x\in UPersis_\beta(\varphi)$. Since $PSh(\varphi)\subseteq Sh(\varphi)$, we get $x\in UPersis_\beta(\varphi)\cap Sh(\varphi)$. Therefore, $PSh(\varphi)\subseteq UPersis_\beta(\varphi)\cap Sh(\varphi)$.
Now we show that $UPersis_\beta(\varphi)\cap Sh(\varphi)\subseteq PSh(\varphi)$. Suppose that $x\in UPersis_\beta(\varphi)\cap Sh(\varphi)$ and $\epsilon>0$ be given. We show that there is $\delta>0$ such that for every $\delta$-pseudo orbit $f:G\to X$ with $f(e)=x$ of $\psi$ such that $d_S(\varpi, \psi)<\delta$, there is $p\in X$ with $d(f(g), \psi(g, p))<\epsilon$ for all $g\in G$.
Choose $\epsilon_0<\frac\epsilon4$ corresponding $\frac\epsilon2$ by $x\in UPersis_\beta(\varphi)$. There is $\eta>0$ corresponding $\frac{\epsilon_0}{2}$ by $x\in Sh(\varphi)$. Take $\delta<\frac\eta2$. If $f:G\to X$ is $\delta$-pseudo orbit of $\psi$ with $f(e)=x$ for continuous action $\psi:G\times X\to X$ with $d_S(\varphi, \psi)<\delta$, then $f:G\to X$ is $\eta$-pseudo orbit of $\varphi$ with $f(e)=x$. By $x\in Sh(\varphi)$ and $f(e)=x$, there is $y\in B_{\epsilon_0}(x)$ such that $d(f(g), \varphi(g, y))<\frac\epsilon2$ for all $g\in G$. Also by $x\in UPersis_\beta(\varphi)$ and $y\in B_{\epsilon_0}(x)$ there is $p\in X$ such that $d(\varphi(g, y), \psi(g, p))<\frac\epsilon2$ for all $g\in G$. This implies that $d(f(g), \psi(g, p))<\epsilon$ for all $g\in G$.
.
\item It is clear that persistent shadowing property implies $\beta$-persistent and shadowing property. For the converse let $\varphi$ is $\beta$-persistent
and it has shadowing property, then by item (4), $PSh(\varphi)= Persis_\beta(\varphi)\cap Sh(\varphi)= X$, hence $\varphi$ is pointwise persistent shadowple and by item (3), $\varphi$ has persistent shadowing property. \end{enumerate} \end{proof}
\subsection{Various shadowable points and related measures}\label{401}
Item 1 in Proposition \ref{wok} implies that if continuous action $\varphi:G\times X\to X$ does have persistent shadowing property on compact set $K\subseteq X$, then for every $\epsilon>0$ there
exist a neighborhood $U$ of $K$ and $\delta>0$ such that $U\subseteq PSh_{\varphi}(\delta, \epsilon)$. Also, by Item 2 in Proposition \ref{wok}, $K\subseteq PSh(\varphi)$
implies that continuous action $\varphi:G\times X\to X$ does have persistent shadowing property on compact set $K\subseteq X$. Hence we have the following proposition.
\begin{proposition}\label{rtg} Let $\varphi:G\times X\to X$ be a continuous action on compact metric space $(X, d)$. Let $K\subset PSh(\varphi)$ be a compact subset. Then for every $\epsilon>0$ there exist a neighborhood $U$ of $K$ and $\delta>0$ such that $U\subseteq PSh_{\varphi}(\delta, \epsilon)$.
\end{proposition}
One can check that Proposition \ref{rtg} is true in the case of
shadowing property, $\alpha$-persistent and $\beta$-persistent.
\begin{proposition}
If $\mathcal{X}\in \{ Sh(\varphi),
UPersis_\alpha(\varphi), UPersis_\beta(\varphi)\}$ and $K\subseteq
X$ be a compact set. Then for every $\epsilon>0$ there exist a
neighborhood $U$ of $K$ and $\delta>0$ such that $U\subseteq
\mathcal{X}_{\varphi}(\delta, \epsilon)$.
\end{proposition} Assume that $supp(\mu)\subset PSh(X, \varphi)$ and $X$ be a compact metric space. Since $supp(\mu)$ is a compact set, hence by Proposition \ref{rtg}, for every $\epsilon>0$ there is $\delta>0$ such that \begin{equation} A\cap supp(\mu)\neq\emptyset \Rightarrow A\cap PS_{\delta, \epsilon}(X, \varphi)\neq\empty. \end{equation} Hence we have the following corollary \begin{equation}\label{pkiii} supp(\mu)\subseteq PSh(\varphi)\Rightarrow \mu\in\mathcal{M}_{PSh}(X, \varphi) \end{equation} By Proposition \ref{pki} and Remark \ref{pkii} and with similar proof of Remark \ref{pkiii} we have the following proposition. \begin{proposition}\label{Lb} Let $\varphi:G\times X\to X$ be a continuous action of finitely generated group on compact metric space $(X, d)$. Then \begin{enumerate}
\item $\mu\in M_{PSh}(X, \varphi)\Leftrightarrow
supp(\mu)\subseteq PSh(\varphi)$,
\item $\mu\in M_{Sh}(X, \varphi)\Leftrightarrow
supp(\mu)\subseteq Sh(\varphi)$,
\item $\mu\in M_{\alpha}(X, \varphi)\Leftrightarrow
supp(\mu)\subseteq UPersis_\alpha(\varphi)$,
\item $\mu\in M_{\beta}(X, \varphi)\Leftrightarrow
supp(\mu)\subseteq UPersis_\beta(\varphi)$ \end{enumerate} \end{proposition} By Proposition \ref{kj}, the set of persistent shadowable points is measureable. With similar proof, one can check that $Sh(\varphi)$, $UPersis_\alpha(\varphi)$ and $UPersis_\beta(\varphi)$ are measureable sets. Assume that $supp(\mu)\subseteq \overline{PSh(\varphi)}$, then by Lemma 2.8 in \cite{boom}, if $X$ is a compact metric space, then there is a sequence $\mu_n\in\mathcal{M}(X)$ with $supp(\mu_n)\subseteq PSh(\varphi)$ and converging to $\mu$ with respect to the $weak^*$ topology. By Proposition \ref{Lb}, $\mu_n\in\mathcal{M}_{PSh}(X, \varphi)$. This implies that $\mu\in \overline{\mathcal{M}_{PSh}(X, \varphi)}$. Hence, we have following relation \begin{equation}\label{27}
\mu(\overline{PSh(\varphi)})=1\Rightarrow \mu\in\overline{\mathcal{M}_{PSh}(X, \varphi)}. \end{equation} Conversely, let $\mu\in\overline{\mathcal{M}_{PSh}(X, \varphi)}$. Choose $\mu_n\in \mathcal{M}_{PSh}(X, \varphi)$ such that $\mu_n\to \mu$. By inequality $$\limsup_{n\to\infty}\mu_n(\overline{PSh(\varphi)})\leq \mu(\overline{PSh(\varphi)})$$ and $\mu_n(\overline{PSh(\varphi)})=1$, we have $\mu(\overline{PSh(\varphi)})=1$. Hence we have the following relation. \begin{equation}\label{277} \mu\in\overline{\mathcal{M}_{PSh}(X, \varphi)}\Rightarrow \mu(\overline{PSh(\varphi)})=1. \end{equation} By Relation \ref{27} and Relation \ref{277}, we have the following proposition. \begin{proposition}\label{278} Let $\varphi:G\times X\to X$ be a continuous action on compact metric space $(X, d)$. Then $\mu(\overline{PSh(\varphi)})=1$ if and only if $\mu\in\overline{\mathcal{M}_{PSh}(X, \varphi)}$. \end{proposition} With similar proof, we have the following proposition. \begin{proposition}\label{Lbb} Let $\varphi:G\times X\to X$ be a continuous action on compact metric space $(X, d)$. Then \begin{enumerate}
\item $\mu(\overline{PSh(\varphi)})=1\Leftrightarrow \mu\in\overline{\mathcal{M}_{PSh}(X,
\varphi)}.$
\item $\mu(\overline{Sh(\varphi)})=1\Leftrightarrow \mu\in\overline{\mathcal{M}_{Sh}(X,
\varphi)}.$
\item $\mu(\overline{UPersis_\alpha(\varphi)})=1\Leftrightarrow \mu\in\overline{\mathcal{M}_\alpha(X,
\varphi)}.$
\item $\mu(\overline{UPersis_\beta(\varphi)})=1\Leftrightarrow \mu\in\overline{\mathcal{M}_\beta(X,
\varphi)}.$ \end{enumerate} \end{proposition}
We claim that if $PSh(\varphi)$ is a closed set in $X$, then $\overline{\mathcal{M}_{PSh}(X,
\varphi)}= \mathcal{M}_{PSh}(X,
\varphi)$. Take $\mu_n\in\mathcal{M}_{PSh}(X, \varphi)$ with
$\mu_n\to \mu$. Then $\limsup_{n\to
\infty}\mu_n(PSh(\varphi))\leq
\mu(PSh(\varphi))$ implies that
$\mu(PSh(\varphi))=1$. Hence $supp(\mu)\subseteq
PSh(\varphi)$. By Proposition \ref{Lb},
$\mu\in\mathcal{M}_{PSh}(\varphi)$ i.e. \begin{equation}\label{2312}
\overline{PSh(\varphi)}=PSh(\varphi)\Rightarrow\overline{\mathcal{M}_{PSh}(X, \varphi)}= \mathcal{M}_{PSh}(X, \varphi). \end{equation}
Conversely, let $\overline{\mathcal{M}_{PSh}(X,
\varphi)}= \mathcal{M}_{PSh}(X,
\varphi)$, we claim that $\overline{PSh(\varphi)}=PSh(\varphi)$.
If it is not true, then there exist $\{x_n\}\subseteq
PSh(\varphi)$ with $x_n\to x$ such that $x\notin PSh(\varphi)$.
By $x_n\to x$, we have $m_{x_n}\to m_x$.
Where $m_t$ the Dirac measure supported on $t\in X$, indeed, $m_t(A)=0$ or $1$ depending on whether $t\notin A$ or $t\in A$. It is easy to see that
$$PSh(\varphi)=\{t\in X:m_t\in\mathcal{M}_{PSh}(X, \varphi)\}$$
By $\{x_n\}\subseteq
PSh(\varphi)$, we have $m_{x_n}\in\mathcal{M}_{PSh}(X,
\varphi)$. Also, by $m_{x_n}\to m_x$ and $\overline{\mathcal{M}_{PSh}(X,
\varphi)}= \mathcal{M}_{PSh}(X,
\varphi)$, we have $x\in PSh(\varphi)$ which is a contradiction.
This implies the following relation.
\begin{equation}\label{12321} \overline{\mathcal{M}_{PSh}(X,
\varphi)}= \mathcal{M}_{PSh}(X,
\varphi)\Rightarrow \overline{PSh(\varphi)}=PSh(\varphi).
\end{equation}
By Relation \ref{2312} and Relation \ref{12321}, we have the
following proposition.
\begin{proposition}\label{12354}
Let $\varphi:G\times X\to X$ be a continuous action on compact
metric space $(X, d)$. Then
$$\overline{\mathcal{M}_{PSh}(X,
\varphi)}= \mathcal{M}_{PSh}(X,
\varphi)\Leftrightarrow \overline{PSh(\varphi)}=PSh(\varphi).$$
\end{proposition}
On can check that result of Proposition \ref{12354} can be obtain
for other type of shadowing,indeed we have the following
proposition.
\begin{proposition}\label{12355}
Let $\varphi:G\times X\to X$ be a continuous action on compact
metric space $(X, d)$. Then
\begin{enumerate}
\item $\overline{\mathcal{M}_{PSh}(X,
\varphi)}= \mathcal{M}_{PSh}(X,
\varphi)\Leftrightarrow \overline{PSh(\varphi)}=PSh(\varphi).$
\item $\overline{\mathcal{M}_{Sh}(X,
\varphi)}= \mathcal{M}_{Sh}(X,
\varphi)\Leftrightarrow \overline{Sh(\varphi)}=Sh(\varphi).$
\item $\overline{\mathcal{M}_{\alpha}(X,
\varphi)}= \mathcal{M}_{\alpha}(X,
\varphi)\Leftrightarrow \overline{Persis_\alpha(\varphi)}=Persis_\alpha(\varphi).$
\item $\overline{\mathcal{M}_{\beta}(X,
\varphi)}= \mathcal{M}_{\beta}(X,
\varphi)\Leftrightarrow \overline{Persis_\beta(\varphi)}=Persis_\beta(\varphi).$
\end{enumerate}
\end{proposition}
If $\varphi:G\times X\to X$ is equicontinuous
action, then $UPersis_\beta(\varphi)=Persis_\beta(\varphi)$ is a closed subset of $X$. This implies the following proposition.
\begin{proposition}\label{cin} Let $\varphi:G\times X\to X$ be a equicontinuous action of a finitely generated group $G$ on compact metric space. Then \begin{enumerate}
\item $\overline{\mathcal{M}_{\beta}(X,
\varphi)}= \mathcal{M}_{\beta}(X,
\varphi)$
\item $\mu(Persis_\beta(\varphi))= 1$ if and only if $\mu\in\mathcal{M}_\beta(X,
\varphi)$. \end{enumerate} \end{proposition}
Proof of the following proposition is clear. \begin{proposition}\label{diracm} Let $\varphi:G\times X\to X$ be a continuous action. \begin{enumerate}
\item $PSh(\varphi)=\{x\in X: m_x\in\mathcal{M}_{PSh}(X, \varphi)\}$.
\item $Sh(\varphi)=\{x\in X: m_x\in\mathcal{M}_{Sh}(X, \varphi)\}$
\item $Persis_\beta(\varphi)=\{x\in X: m_x\in\mathcal{M}_{\beta}(X, \varphi)\}$
\item $Persis_\alpha(\varphi)=\{x\in X: m_x\in\mathcal{M}_{\alpha}(X, \varphi)\}$ \end{enumerate}
\end{proposition} Assume that $\overline{PSh(\varphi)}=X$. By Theorem 6.3 in \cite{par}, if $X$ is a separable metric space, then Then the set of all measures whose supports are finite subsets of $PSh(\varphi)$ is dense in $\mathcal{M}(X)$. Also by Lemma 2.7 in \cite{boom}, every measure with finite support and supported on $PSh(\varphi)$ is a finite convex combination of Dirac measures supported on points of $PSh(\varphi)$. By Proposition \ref{diracm}, such Dirac measures are compatible with persistent shadowing property. But finite convex combination of measures in $\mathcal{M}_{PSh}(X, \varphi)$ is compatible with persistent shadowing property. This implies that if $\overline{PSh(\varphi)}=X$, then the set of finite convex combination of measures in $\mathcal{M}_{PSh}(X, \varphi)$ is dense in $\mathcal{M}(X)$. This means that if $\overline{PSh(\varphi)}=X$, then $\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)$. Conversely, assume that $\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)$. We claim that $\overline{PSh(\varphi)}=X$. If it is not true, then there is $x\in X$ with $x\notin \overline{PSh(\varphi)}$. Choose open set $U$ of $x$ with $x\in U\subseteq X-\overline{PSh(\varphi)}$. Since $m_x\in \overline{\mathcal{M}_{PSh}(X, \varphi)}$, there is a sequence $\mu_n\in\mathcal{M}_{PSh}(X, \varphi)$ such that $\mu_n\to m_x$. By Proposition \ref{Lb}, we have $Supp(\mu_n)\subseteq PSh(\varphi)$. By $ U\subseteq X-\overline{PSh(\varphi)}$, we have $\mu_n(U)=0$ for all $n\in \mathbb{N}$. There for $0=\liminf_{n\to \infty}\mu_n(U)\geq m_x(U)=1$ which is a contradiction. With similar technics we can prove the following propositions. \begin{proposition}\label{3214} Let $\varphi:G\times X\to X$ be a continuous action of a finitely generated group $G$ on compact metric space $(X, d)$. Then the following conditions hold. \begin{enumerate}
\item $\overline{\mathcal{M}_{PSh}(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{PSh(\varphi)}=X$
\item $\overline{\mathcal{M}_{Sh}(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{Sh(\varphi)}=X$
\item $\overline{\mathcal{M}_\beta(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{Persis_\beta(\varphi)}=X$
\item $\overline{\mathcal{M}_{\alpha}(X, \varphi)}=\mathcal{M}(X)
\Leftrightarrow \overline{Persis_\alpha(\varphi)}=X$ \end{enumerate} \end{proposition}
\section*{acknowledgments} The author wishes to thank Professor Morales for his idea about Theorem \ref{wok} given in \cite{morales2}.
\end{document} | arXiv |
Benjamin Pulleyne
Benjamin Pulleyne, sometimes spelt Pullan (30 September 1785 – 20 October 1861), was a mathematician, Church of England clergyman, fellow of Clare College, Cambridge, and schoolmaster. For almost fifty years he was the Master of Gresham's School, then usually known as Holt Grammar School.
Born at Scarborough, North Yorkshire, the son of Benjamin Pullan, a merchant, and his wife Elizabeth,[1][2] the young Pulleyne (the spelling of the name he used for himself) was educated at Wakefield Grammar School and was admitted to Clare College, Cambridge, on 2 February 1804.[3] He was elected a Cave Scholar on 19 January 1805 and was Senior Optime in 1808.[4]
He graduated BA in 1808, promoted by seniority to MA in 1811, and was a Fellow of his college from 1808 to 1809. Becoming a deacon of the Church of England in 1808, he was ordained as a priest in 1810 by Henry Bathurst, Bishop of Norwich.[3] This was at a time when most academics at Oxford were obliged to lead celibate lives in college and had to resign their fellowships if they wished to marry.
Pulleyne was Master of Holt Grammar School from 1809 to 1857,[3] the salary for which was £200 a year.[5] Having held the post for 48 years, he remains the school's longest-serving head since its foundation in 1555.[6] He was also Vicar of Sheringham from 1825 until his death, and of Weybourne from 1845.[3] John William Burgon, in The Life and Times of Sir Thomas Gresham (1839), says
Holt school... is an ornament and a blessing to the county, and reflects much credit on the trustees and its worthy principal—the Rev. B. Pulleyne.[7]
Personal life
At the census of March 1851, Pulleyne was living at 38, Market Place, Holt, with his wife Rebecca, who was blind, his son Walter M. Pulleyne, aged 34, an apothecary, his daughter in law, his seven-year-old granddaughter Anne, and two servants.[8] His wife, Rebecca Pulleyne, died at Holt on 13 March 1853.[9] On 12 July of the same year, at Buckenham, Norfolk, Pulleyne married Mary Dinah Partridge, a farmer's daughter.[2][10]
Having retired from his school, Pulleyne then lived at Sheringham, where he died in October 1861. He left a widow, Mary Dinah.[11]
Notes
1. "Benjamin Pullan" in England & Wales, Christening Index, 1530–1980, ancestry.com, accessed 3 December 2020: "Benjamin Pullan / Gender: Male / Christening Age: 0 / Birth Date: 30 Sep 1785 / Christening Date: 30 Oct 1785 / Christening Place: Scarborough, Yorkshire, England / Father: Benjamin Pullan / Mother: Elizabeth Pullan" (subscription required)
2. Marriages Solemnized at the Parish Church of Buckenham in the County of Norfolk, 1853, No. 9, July 12, 1853, ancestry.co.uk, accessed 3 December 2020 (subscription required)
3. "PULLAN (or PULLEYNE), Benjamin", in John Venn, Alumni Cantabrigienses Part II. 1752–1900, Vol. V Pace – Spyers (1953), p. 214
4. Matthew Peacock, The History of Wakefield Grammar School (Milnes, 1892), p. 182: this calls him "Master of Holt Grammar School, Norfolk".
5. City of London Livery Companies' Commission: Report and Appendix, Volume 4, p. 237
6. S. G. G. Benson, Martin Crossley Evans, I Will Plant Me a Tree: an Illustrated History of Gresham's School (James & James, London, 2002)
7. John William Burgon, The Life and Times of Sir Thomas Gresham: Comp. Chiefly from His Correspondence Preserved in Her Majesty's Statepaper Office: Including Notices of Many of His Contemporaries (R. Jennings, 1839), p. 15
8. 1851 United Kingdom census, Market Place, Holt, ancestry.co.uk, accessed 2 December 2020
9. The Gentleman's Magazine, Volume 193 (1853), p. 561: "March 13... At Holt, Norfolk, Rebecca, wife of the Rev. Benjamin Pulleyne."
10. "Partridge, Mary Dinah / Blofield / 4b 345"; "Pulleyne, Benjamin / Blofield / 4b 345" in General Index to Marriages in England and Wales (September quarter, 1853)
11. "Pulleyne, the Reverend Benjamin", in Wills and Administrations 1861 (England and Wales Probate Office, 1862)
| Wikipedia |
\begin{document}
\title{Topological Factors Derived From Bohmian Mechanics}
\begin{abstract} We derive for Bohmian mechanics topological factors for quantum systems with a multiply-connected configuration space $\gencon$. These include nonabelian factors corresponding to what we call holonomy-twisted representations of the fundamental group of $\gencon$. We employ wave functions on the universal covering space of $\gencon$. As a byproduct of our analysis, we obtain an explanation, within the framework of Bohmian mechanics, of the fact that the wave function of a system of identical particles is either symmetric or anti-symmetric.
Key words: topological phases, multiply-connected configuration spaces, Bohmian mechanics, universal covering space
\noindent MSC (2000): \underline{81S99}, 81P99, 81Q70. PACS: 03.65.Vf, 03.65.Ta \end{abstract}
\begin{center}\textit{Dedicated to Rafael Sorkin on the occasion of his 60th birthday}\end{center}
\section{Introduction} \label{sec:intro}
We \zz{consider here} a novel approach towards topological effects in quantum mechanics. These effects arise when the configuration space $\gencon$ \zz{of a quantum system} is a multiply-connected Riemannian manifold and involve \emph{topological factors} forming a representation (or holonomy-twisted representation) of the fundamental group $\fund {\gencon}$ of $\gencon$. Our approach is based on Bohmian mechanics \cite {Bohm52, Bell66, DGZ92, survey, DGZ96, Gol01}, a version of quantum mechanics with particle trajectories. The use of Bohmian paths allows a derivation of the link between homotopy and quantum mechanics that is essentially different from derivations based on \zz{path integrals.}
The topological factors we derive are equally relevant and applicable in orthodox quantum mechanics, or any other version of quantum mechanics. Bohmian mechanics, however, provides a sharp mathematical justification of the dynamics with these topological factors that is absent in the orthodox framework. Different topological factors give rise to different Bohmian dynamics, and thus to different quantum theories, for the same configuration space $\gencon$ (whose metric we regard as incorporating the ``masses of the particles''), the same potential, and the same value space of the wave function.
The motion of the configuration in a Bohmian \zz{system of $N$ distinguishable particles} can be regarded as \z{corresponding to} a dynamical system in the configuration space $\gencon = \mathbb{R}^{3N}$, defined by a time-dependent vector field $v^{\psi_t}$ on $\gencon$ which in turn is defined, by the Bohmian law of motion, in terms of $\psi_t$. We are concerned here with the analogues of the Bohmian law of motion \zz{when} $\gencon$ is, instead of $\mathbb{R}^{3N}$, an arbitrary Riemannian manifold.\footnote{Manifolds will throughout be assumed to be Hausdorff, paracompact, connected, and $C^\infty$. They need not be orientable.} The main result is that, if $\gencon$ is multiply connected, there are several such analogues: several dynamics, which we will describe in detail, corresponding to different choices of the topological factors.
\zz{It is easy to overlook} the multitude of dynamics by focusing too much
on just one, the simplest one, which we will define in
\Sect{\ref{sec:immediate}}: the \emph{immediate generalization} of the
Bohmian dynamics from $\mathbb{R}^{3N}$ to a Riemannian manifold, or, as we
shall briefly call it, the \emph{immediate Bohmian dynamics}. \zz{Of the
other kinds of Bohmian dynamics, the simplest involve phase factors
associated with non-contractible loops in $\gencon$,} forming a
character\footnote{By a \emph{character} of a group we refer to what is
sometimes called a unitary multiplicative character, i.e., a one-
dimensional unitary representation of the group.} of the fundamental group
$\fund{\gencon}$. In other cases, \zz{the topological factors} are given by
matrices \zz{or endomorphisms}, forming a \z{unitary representation of
$\fund{\gencon}$} or, in the case of a vector bundle, a holonomy-twisted
representation (see the end of \Sect{\ref{s:bundlevalued}} for the
definition). As we shall explain, the dynamics of bosons is an
``immediate'' one, but not the dynamics of fermions (except when using a
certain \zz{not entirely natural} vector bundle). The Aharonov--Bohm
effect can be regarded as an example of a non-immediate dynamics on the
\zz{accessible region of 3-space.}
It is not obvious what ``other kinds of Bohmian dynamics'' should mean. We will investigate one approach here, while \zz{others} will be studied in forthcoming works. The present approach is based on considering wave functions $\psi$ that are defined not on the configuration space $\gencon$ but on its universal covering space $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$. We then \zz{investigate} which kinds of periodicity conditions, relating the values on different levels of the covering fiber by a topological factor, will ensure that the Bohmian velocity vector field associated with $\psi$ is projectable from $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ to $\gencon$. This is carried out in \Sect{\ref{sec:covering}} for scalar wave functions and in \Sect{\ref{s:periodic2}} for wave functions with values in a complex vector space (such as \z{a} spin-space) or a complex vector bundle. In the case of vector bundles, we derive a novel kind of topological \z{factor}, given by a holonomy-twisted \z{representation} of $\fund {\gencon}$.
The notion that multiply-connected spaces give rise to different topological factors is not new. The most common approach is based on path integrals and began
largely with the work of Schulman \cite{S68,Sch71} and Laidlaw
and DeWitt \cite{DL71}; see \cite{Sch81} for details. Nelson \cite{Nel85} derives the topological phase factors for scalar wave functions from stochastic mechanics. There is also the current algebra approach of Goldin, Menikoff, and Sharp \cite{GMS81}.
\section{Bohmian Mechanics in Riemannian manifolds} \label{sec:bm} \label{sec:immediate}
\renewcommand{\ensuremath{3}}{\ensuremath{3}}
Bohmian mechanics can be formulated by appealing only to the Riemannian structure $g$ of the configuration $\gencon$ space of a physical system: the state of the system in Bohmian mechanics\ is given by the pair $(Q, \ensuremath{\psi})$; $Q \in \gencon $ is the configuration of \zz{the} system and $\psi$ is a (standard quantum mechanical) wave function\ on the configuration space $\gencon$, taking values in some \emph{Hermitian vector space} $\ensuremath{W}$, i.e., a finite-dimensional complex vector space endowed with a positive-definite Hermitian (i.e., conjugate-symmetric and sesqui-linear) inner product $(\,\cdot\,,\,\cdot\,)$.
The state of the system changes according to the guiding equation and Schr\"odinger's equation{} \cite {DGZ92}:
\begin{subequations} \label{ie:re} \begin{align}
\sdud{Q_t}{t} &= v^{\psi_t}(Q_t)\label{ie:rbe} \\
i \ensuremath{\hbar} \dud{\psi_t}{t} &= - \tfrac{\ensuremath{\hbar}^2}{2} \ensuremath{\Delta} \psi_t
+ V \psi_t
\, ,\label{ie:rse} \end{align} \end{subequations} where the Bohmian velocity vector field $v^{\psi}$ associated \zz{with} the wave function\ $\psi$ is \begin{equation} \label{ie:rbv}
v^{\psi} \ensuremath{:=} \ensuremath{\hbar}\, \ensuremath{\mathrm{Im}} \frac{\inpr{\psi}{\nabla
\psi}}{\inpr{\psi}{\psi}}. \end{equation} In the above equations $\Delta$ and $\nabla$ are, respectively, the Laplace-Beltrami operator and the gradient on the configuration space equipped with this Riemannian structure; $V$ is the potential function with values \z{given by} Hermitian matrices (endomorphisms of $\ensuremath{W}$). Thus, given $\ensuremath{\mathcal{Q}}$, $\ensuremath{W}$, and $V$, we \z{have specified} a Bohmian dynamics, the \emph{immediate Bohmian dynamics}.\footnote{Since the law of motion involves a derivative
of $\psi$, the merely measurable functions in $L^2(\gencon)$
will of course not be \z{adequate} for defining trajectories.
However, we will leave aside the \z{question,} from which dense
subspace of $L^2(\gencon)$ \z{should one} choose $\psi$. For a discussion of the global existence question of Bohmian trajectories in $\mathbb{R}^{3N}$, see \cite{BDGPZ95,TT04}.}
The empirical agreement between Bohmian mechanics\ and standard quantum mechanics is grounded in equivariance \cite{DGZ92,DGZ03}. In Bohmian mechanics, if the configuration is initially random and distributed according to $|\ensuremath{\psi}_0|^2$, then the evolution is such that the configuration at time $t$ will be distributed according to $|\ensuremath{\psi}_t|^2$. This property is called \z{the}
equivariance of the $|\ensuremath{\psi}|^2$ distribution. It follows from comparing the transport equation arising from \eqref{ie:rbe} \begin{equation}\label{continuity}
\dud{\rho_t}{t} =- \nabla \cdot (\rho_t v^{\ensuremath{\psi}_t}) \end{equation} for the distribution $\rho_t$ of the configuration $Q_t$, where $v^\psi = (v_1^\psi, \ldots, v_N^\psi)$, to \z{the quantum continuity equation} \begin{equation}\label{dpsi2dt}
\dud{|\ensuremath{\psi}_t|^2}{t} =- \nabla \cdot (|\ensuremath{\psi}_t|^2 v^{\ensuremath{\psi}_t}), \end{equation}
which is a consequence of Schr\"odinger's equation \eqref{ie:rse}. A rigorous proof of equivariance requires showing that almost all (with respect to the $|\ensuremath{\psi}|^2$ distribution) solutions of \eqref{ie:rbe} exist for all times. This was done in \cite{BDGPZ95,TT04}. A more comprehensive introduction to Bohmian mechanics\ may be found in \cite{Gol01, survey, DGZ96}.
An important example \zz{(with, say, $\ensuremath{W}=\mathbb{C}$) is that} of several particles moving in \z{a} Riemannian manifold $M$, a \z{possibly} curved physical space. Then the configuration space for $N$ \zz{distinguishable} particles is $\ensuremath{\mathcal{Q}} \ensuremath{:=} M^{N}$. Let the masses of the particles be $m_i$ and the metric of $M$ be $g$. Then the relevant metric on $M^N$ is \[
g^N(v_1 \oplus \cdots \oplus v_N, w_1 \oplus \cdots \oplus w_N )
\ensuremath{:=} \sum\limts_{i=1}^N m_i g(v_i,w_i). \] Using $g^N$ allows us to write \eqref{ie:rbv} and \eqref{ie:rbe} instead of the equivalent equations \begin{equation}\label{e:bohm02}
\sdud{\boldsymbol{Q}_{k}}{t} = \qmu{k} \ensuremath{\mathrm{Im}}
\frac {\inpr{\psi}{\nabla_{k} \psi}}{ \inpr{\psi}{\psi}}
(\boldsymbol{Q}_1,\ldots, \boldsymbol{Q}_N),
\quad k= 1,\ldots,N \end{equation} \begin{equation}\label{e:sch02}
\seq{N}, \end{equation} where $\boldsymbol{Q}_k$, the $k^{th}$ component of $Q$, lies in $M$, and $\nabla_k$ and $\ensuremath{\Delta}_k$ are the gradient and the Laplacian with respect to $g$, acting on the $k^{th}$ factor of $M^N$. \zz{Another important} example \cite{DL71} is that of $N$ identical particles in $ \rvarn{3}$, for which the natural configuration space is the \z{set $\ensuremath{{}^N\mspace{-1.0mu}\rvarn{\dd}}$ of} all $N$-element subsets of $\rvarn{3}$, \begin{equation}
\ensuremath{{}^N\mspace{-1.0mu}\rvarn{\dd}} := \{S| S \subseteq \rvarn{3}, |S| = N\} \,, \end{equation} which inherits a Riemannian metric from $\rvarn{3}$. \zz{Spin is incorporated by choosing for $\ensuremath{W}$ \z{a} suitable spin space \cite{Bell66}. For one particle moving in \rvarn{3}, we may take \ensuremath{W}\ to be} a complex, irreducible representation space of $SU(2)$, the universal covering group\footnote{The universal covering space of a Lie group is again a
Lie group, the \emph{universal covering group}. It should be
distinguished from another group also called the \emph{covering
group}: the group $Cov(\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}},\ensuremath{\mathcal{Q}})$ of the covering (or deck)
transformations of the universal covering space $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ of a
manifold $\gencon$, which will play an important role later.} of the rotation group $SO(3)$. If it is the spin-$s$ representation then $\ensuremath{W} = \ensuremath{\cmplx^{2s +1}}$.
More generally, we can consider a Bohmian dynamics for wave function s taking values in a complex vector bundle $E$ over the Riemannian manifold \ensuremath{\mathcal{Q}}. That is, the value space then depends on the configuration, and wave function s become sections of the vector bundle. Such a case occurs for identical particles with spin $s$, where the bundle $E$ of spin spaces over the configuration space $\gencon = \ensuremath{{}^N\mspace{-1.0mu}\rvarn{\dd}}$ consists of the $(2s+1)^N$-dimensional spaces \begin{equation}\label{spinbundle}
E_q = \bigotimes_{\ensuremath{\boldsymbol{q}} \in q} \ensuremath{\mathbb{C}}^{2s+1} \,, \quad q \in \gencon\,. \end{equation} For a detailed discussion of this bundle, of why this is the right bundle, and of the notion of a tensor product over an arbitrary index set, \x{see \cite{topid2}}.
We introduce now some notation and terminology.
\begin{defn}
A \emph{Hermitian\ vector bundle}, \z{or} \emph{Hermitian\ bundle}, \zz{over
$\gencon$} is a finite-dimensional complex vector bundle $E$ \zz{over $\gencon$}
with a connection and a positive-definite, Hermitian local inner
\zz{product $(\,\cdot\,,\,\cdot\,)=(\,\cdot\,,\,\cdot\,)_q$ on $E_q$,
the fiber of $E$ over $q\in\gencon$,} which is parallel. \end{defn}
Our bundle, the one of which $\psi$ is a section, will always be a Hermitian\ bundle. Note that since a Hermitian\ bundle consists of a vector bundle and a connection, it can be nontrivial even if the vector bundle is trivial: namely, if the connection is nontrivial. The \emph{trivial Hermitian\ bundle} $\gencon \times \ensuremath{W}$, in contrast, consists of the trivial vector bundle with the trivial connection, whose parallel transport $P_\beta$\zz{, in general a unitary endomorphism from $E_q$ to $E_{q'}$ for $\beta$ a path from $q$ to $q'$,} is always the identity on $\ensuremath{W}$.
The \z{case of a $W$-valued function} $\psi : \gencon \to \ensuremath{W}$ corresponds to the trivial Hermitian\ bundle $\gencon \times \ensuremath{W}$.
The global inner product on the Hilbert space of wave function s is the local inner product integrated against the Riemannian volume measure associated with \x{the metric $g$ of $\gencon$,} \[
\langle \phi, \psi \rangle = \int_{\ensuremath{\mathcal{Q}}} dq \, (\phi(q),
\psi(q))\,. \] The Hilbert space equipped with this inner product, denoted $L^2(\ensuremath{\mathcal{Q}},E)$, contains the square-integrable, measurable (not necessarily smooth) sections of $E$ modulo equality almost everywhere. The covariant derivative $D\psi$ of a section $\psi$ is an ``$E$-valued 1-form,'' i.e., a section of $\mathbb{C} T\ensuremath{\mathcal{Q}}^* \otimes E$ (with $T\gencon^* $ the cotangent bundle), while we write $\nabla \psi$ for the section of $\mathbb{C} T\ensuremath{\mathcal{Q}} \otimes E$ metrically equivalent to $D\psi$. The potential $V$ is now a self-adjoint section of the endomorphism bundle $E \otimes E^*$ acting on the vector bundle's fibers. The equations defining the Bohmian dynamics are, {\em mutatis mutandis}, the same equations \eqref{ie:re} \zz{and \eqref{ie:rbv}} as before.
We wish to introduce now further Bohmian dynamics beyond the immediate one. To this end, we will consider wave functions on \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}{}, the universal covering space of $\gencon$. This idea is rather standard in the literature on quantum mechanics in multiply-connected spaces \cite{DL71, D72, LM77, Mor92, HM96}. However, the complete \zz{specification} of the possibilities that we give in \Sect{\ref{s:periodic2}} includes some, corresponding to what we call \emph{holonomy-twisted representations} of $\fund{\gencon}$, that have not \z{yet} been considered. \zz{Each possibility} has locally the same Hamiltonian $\rawH$, with the same potential $V$, and each possibility is equally well defined and equally reasonable. While in orthodox quantum mechanics it may seem \zz{more or less axiomatic that} the configuration space $\gencon$ is the space on which $\psi_t$ is defined, $\gencon$ appears in Bohmian mechanics also in another role: as the space in which $Q_t$ moves. It is therefore less surprising from the Bohmian viewpoint, and easier to accept, that $\psi_t$ is defined not on $\gencon$ but on $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$. In the next section all wave function s will be complex-valued; in \Sect{\ref{s:periodic2}} we shall consider wave functions with higher-dimensional value spaces.
\section{Scalar Wave Functions on the Covering Space} \label{sec:covering}
The motion of the configuration $Q_t$ in \ensuremath{\mathcal{Q}}\ is determined by a velocity vector field $v_t$ on \ensuremath{\mathcal{Q}}, which may arise from a wave function $\psi$ not on \ensuremath{\mathcal{Q}}\ but instead on \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}\ , the universal covering space of $\gencon$, in the following way: Suppose we are given a complex-valued map $\gamma$ on the covering group $\ensuremath{Cov(\covspa, \gencon)}$, \zz{$\gamma: \ensuremath{Cov(\covspa, \gencon)} \to \ensuremath{\mathbb{C}}$, and suppose} that a wave function $\psi: \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}} \to \ensuremath{\mathbb{C}}$ satisfies the \emph{periodicity condition associated with the topological factors
$\gamma$}, i.e., \begin{equation} \label{e:percon}
\psi (\ensuremath{\sigma} \ensuremath{\hat{q}}) = \ensuremath{\gamma_{\deckt}} \psi(\ensuremath{\hat{q}}) \end{equation} for every $\ensuremath{\hat{q}} \in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ and $\ensuremath{\sigma} \in \ensuremath{Cov(\covspa, \gencon)}$. For \eqref{e:percon} to be possible for a $\psi$ that does not identically vanish, $\gamma$ must be a representation of the covering group, as was first emphasized in \cite{D72}. To see this, let $\deck1$, $\deck2 \in \ensuremath{Cov(\covspa, \gencon)}$. Then we have the following equalities \begin{equation}\label{e:cccargument} \conc{\deck1 \deck2} \psi (\ensuremath{\hat{q}}) = \psi(\deck1 \deck2 \ensuremath{\hat{q}}) = \conc{\deck1} \psi(\deck2 \ensuremath{\hat{q}}) = \conc{\deck1}\conc{\deck2} \psi(\ensuremath{\hat{q}}). \end{equation} We \z{thus obtain} the fundamental relation \begin{equation} \label{e:ccc}
\conc{\deck1 \deck2} = \conc{\deck1}\conc{\deck2}, \end{equation} establishing \z{(since $\gamma_{\ensuremath{\mathrm{Id}}} =1$)} that $\gamma$ is a representation.
Let \fund{\ensuremath{\mathcal{Q}}, \ensuremath{q}} denote the \emph{fundamental group \zz{of $\gencon$} at a point} \ensuremath{q}{} and let $\ensuremath{\gr }$ be \zz{the} covering map (a local diffeomorphism) $\ensuremath{\gr }: \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}} \to \ensuremath{\mathcal{Q}}$, also called the projection (the \emph{covering fiber} for $\ensuremath{q}\ \in \ensuremath{\mathcal{Q}}$ is the set $\ensuremath{\gr }^{-1}(\ensuremath{q})$ of points in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}\ that project to \ensuremath{q}\ under $\ensuremath{\gr }$). The 1-dimensional representations of the covering group are, \z{via the canonical isomorphisms $\varphi_{\ensuremath{\hat{q}}}: \ensuremath{Cov(\covspa, \gencon)} \to \fund{\ensuremath{\mathcal{Q}}, q},\ \ensuremath{\hat{q}}\in \ensuremath{\gr }^{-1}(\ensuremath{q})$,}
in canonical correspondence with the 1-dimensional representations of any fundamental group \fund{\ensuremath{\mathcal{Q}}, q}: \z{The different} isomorphisms \z{$\varphi_{\ensuremath{\hat{q}}},\ \ensuremath{\hat{q}}\in \ensuremath{\gr }^{-1}(\ensuremath{q})$,} will \z{transform a representation} of \fund{\ensuremath{\mathcal{Q}}, q} into \z{representations of $\ensuremath{Cov(\covspa, \gencon)}$ that are conjugate. But the} 1-dimensional representations are homomorphisms to the \emph{abelian} multiplicative group of $\ensuremath{\mathbb{C}}$ and \z{are} thus invariant under conjugation.
{}From \eqref{e:percon} it follows that $\nabla \psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) = \gamma_\ensuremath{\sigma} \, \ensuremath{\sigma}^* \nabla \psi(\ensuremath{\hat{q}})$, where $\ensuremath{\sigma}^*$ is the (push-forward) action of $\ensuremath{\sigma}$ on tangent vectors, using that $\ensuremath{\sigma}$ is an isometry. Thus, the velocity field $\hat{v}^{\psi}$ on \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}\ associated with $\psi$ according to \begin{equation}\label{vhatdef}
\hat{v}^\psi (\ensuremath{\hat{q}}) := \hbar \, \ensuremath{\mathrm{Im}} \, \frac{ \nabla
\psi}{ \psi} (\ensuremath{\hat{q}}) \end{equation} is projectable, i.e., \begin{equation}\label{vhatprojectable}
\hat{v}^\psi (\ensuremath{\sigma}\ensuremath{\hat{q}}) = \ensuremath{\sigma}^* \hat{v}^\psi (\ensuremath{\hat{q}}), \end{equation} and therefore gives rise to a velocity field $v^\psi$ on \ensuremath{\mathcal{Q}}, \begin{equation}
v^\psi(q) = \ensuremath{\gr }^* \, \hat{v}^\psi (\ensuremath{\hat{q}}) \end{equation} where $\ensuremath{\hat{q}}$ is an arbitrary element of $\ensuremath{\gr }^{-1}(q)$.
If we let $\psi$ evolve according to the Schr\"odinger equation on \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}, \begin{equation}\label{e:sch04}
i\hbar \frac{\partial \psi}{\partial t}(\ensuremath{\hat{q}}) = - \tfrac{\hbar^2} {2} \ensuremath{\Delta}
\psi(\ensuremath{\hat{q}}) + \widehat{V}(\ensuremath{\hat{q}}) \psi(\ensuremath{\hat{q}}) \end{equation} with $\widehat{V}$ the lift of the potential $V$ on $\ensuremath{\mathcal{Q}}$, then the periodicity condition \eqref{e:percon} is preserved by the evolution, since, according to \begin{equation}
i\hbar\frac{\partial \psi}{\partial t}(\ensuremath{\sigma} \ensuremath{\hat{q}})
\stackrel{\eqref{e:sch04}}{=} -\tfrac{\hbar^2}{2} \ensuremath{\Delta} \psi(\ensuremath{\sigma}
\ensuremath{\hat{q}}) + \widehat{V}(\ensuremath{\sigma} \ensuremath{\hat{q}}) \psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) =
-\tfrac{\hbar^2}{2} \ensuremath{\Delta} \psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) + \widehat{V}(\ensuremath{\hat{q}})
\psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) \end{equation} (note the different arguments in the potential), the functions $\psi \circ \ensuremath{\sigma}$ and $\gamma_\ensuremath{\sigma} \psi$ satisfy the same evolution equation \eqref{e:sch04} with, by \eqref{e:percon}, the same initial condition, and thus coincide at all times.
\z{Therefore} we can let the Bohmian configuration $Q_t$ move according to $v^{\psi_t}$, \begin{equation}\label{e:bohm04}
\frac{dQ_t}{dt} = v^{\psi_t}(Q_t) = \hbar\, \ensuremath{\gr }^* \Bigl( \ensuremath{\mathrm{Im}}\,
\frac{ \nabla \psi}{
\psi}\Bigr)(Q_t) = \hbar\, \ensuremath{\gr }^* \Bigl( \ensuremath{\mathrm{Im}}\,
\frac{ \nabla \psi}{
\psi}\Big|_{\ensuremath{\hat{q}} \in \ensuremath{\gr }^{-1}(Q_t)} \Bigr). \end{equation} One can also view the motion in this way: Given $Q_0$, choose $\widehat{Q}_0 \in \ensuremath{\gr }^{-1}(Q_0)$, let $\widehat{Q}_t$ move in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}\ according to $\hat{v}^{\psi_t}$, and set $Q_t = \ensuremath{\gr }(\widehat{Q}_t)$. Then the motion of $Q_t$ is independent of the choice of $\widehat{Q}_0$ in the fiber over $Q_0$, and obeys \eqref{e:bohm04}.
If, as we shall assume from now on, $|\gamma_\ensuremath{\sigma}|=1$ for all $\ensuremath{\sigma} \in \ensuremath{Cov(\covspa, \gencon)}$, i.e., if $\gamma$ is a \emph{unitary} representation (in $\ensuremath{\mathbb{C}}$) or a \emph{character}, then the motion \eqref{e:bohm04} also has an equivariant probability distribution, namely \begin{equation}\label{e:equi04}
\rho(q) = |\psi(\ensuremath{\hat{q}})|^2. \end{equation} To see this, note that we have \begin{equation}\label{projectablepsi2}
|\psi(\ensuremath{\sigma} \ensuremath{\hat{q}})|^2 \stackrel{\eqref{e:percon}}{=}
|\gamma_\ensuremath{\sigma}|^2 |\psi(\ensuremath{\hat{q}})|^2 = |\psi(\ensuremath{\hat{q}})|^2, \end{equation}
so that the function $|\psi(\ensuremath{\hat{q}})|^2$ is projectable to a function on
\ensuremath{\mathcal{Q}}\ which we call $|\psi|^2(q)$ in this paragraph. From \eqref{e:sch04} we have \z{that} \[
\frac{\partial |\psi_t(\ensuremath{\hat{q}})|^2}{\partial t} = - \nabla \cdot \Bigl(
|\psi_t(\ensuremath{\hat{q}})|^2 \, \hat{v}^{\psi_t}(\ensuremath{\hat{q}}) \Bigr) \] and, by projection, \z{that} \[
\frac{\partial |\psi_t|^2(q)}{\partial t} = - \nabla \cdot \Bigl(
|\psi_t|^2 (q)\, v^{\psi_t}(q) \Bigr), \] which coincides with the transport equation for a probability density $\rho$ on \ensuremath{\mathcal{Q}}, \[
\frac{\partial \rho_t(q)}{\partial t} = - \nabla \cdot \Bigl(
\rho_t(q) \, v^{\psi_t}(q) \Bigr). \] Hence, \begin{equation}
\rho_t(q) = |\psi_t|^2(q) \end{equation} for all times if \z{it is} so initially; this is equivariance.
\zz{The relevant} wave functions are those with \begin{equation}
\int_\ensuremath{\mathcal{Q}} dq \, |\psi(\ensuremath{\hat{q}})|^2 = 1 \end{equation} where the choice of $\ensuremath{\hat{q}} \in \ensuremath{\gr }^{-1}(q)$ is arbitrary by \eqref{projectablepsi2}. The relevant Hilbert space, which we denote $L^2(\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}},\gamma)$, thus \z{consists of} the measurable functions $\psi$ on $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ (modulo changes on null sets) satisfying \eqref{e:percon} with \begin{equation}
\int_\ensuremath{\mathcal{Q}} dq \, |\psi(\ensuremath{\hat{q}})|^2 < \infty. \end{equation} It is a Hilbert space with the scalar product \begin{equation}
\langle \phi,\psi \rangle = \int_\ensuremath{\mathcal{Q}} dq \, \overline{\phi (\ensuremath{\hat{q}})}
\, \psi(\ensuremath{\hat{q}}). \end{equation} Note that the value of the integrand at $q$ is independent of the choice of $\ensuremath{\hat{q}} \in \ensuremath{\gr }^{-1}(q)$ since, by \eqref{e:percon} and \z{the fact that}
$|\gamma_\ensuremath{\sigma}|=1$, \[
\overline{\phi(\ensuremath{\sigma} \ensuremath{\hat{q}})} \, \psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) =
\overline{\gamma_\ensuremath{\sigma} \, \phi(\ensuremath{\hat{q}})} \, \gamma_\ensuremath{\sigma} \,
\psi(\ensuremath{\hat{q}}) = \overline{\phi(\ensuremath{\hat{q}})} \, \psi(\ensuremath{\hat{q}}). \]
We summarize the results of our reasoning.
\begin{assertion}\label{a:scalar}
Given a Riemannian manifold $\ensuremath{\mathcal{Q}}$ and a smooth function
$V:\ensuremath{\mathcal{Q}} \to \ensuremath{\mathbb{R}}$, there is a Bohmian dynamics in \ensuremath{\mathcal{Q}}\ with
potential $V$ for each character \ensuremath{\gamma}\ of the fundamental group
\fund{\ensuremath{\mathcal{Q}}}; it is defined by \eqref{e:percon}, \eqref{e:sch04},
and \eqref{e:bohm04}, where the wave function $\psi_t$ lies in
$L^2(\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}},\gamma)$ and has norm one. \end{assertion} Assertion~\ref{a:scalar} provides as many \zz{dynamics} as there are characters of $\fund{\gencon}$ because different characters $\gamma' \neq \gamma$ always define different dynamics. \label{is:rem} In particular, for the trivial character $\gamma_\ensuremath{\sigma} =1$, we obtain the
immediate dynamics, as defined by \eqref{ie:rbv} and \eqref{ie:re}.
An important application of Assertion~\ref{a:scalar} is
provided by identical particles without spin. The natural
configuration space $\ensuremath{{}^N\mspace{-1.0mu}\rvarn{\dd}}$ \z{for} identical particles has fundamental group $S_N$, the group of
permutations of $N$ objects, which possesses two characters, the
trivial character, $\gamma_\sigma =1$, and the alternating
character, $\gamma_\sigma = \mathrm{sgn}(\sigma)= 1$ or $-1$
depending on whether $\sigma \in S_N$ is an even or an odd
permutation. The Bohmian
dynamics associated with the trivial character is that of bosons,
while the one associated with the alternating character is that of
fermions. However, in a two-dimensional world there would be more possibilities
since $\fund{^N \rvarn2}$ is the braid group, whose \zz{generators
$\sigma_i, \ i=1,\dots,N-1$,} are a certain subset of braids that exchange two particles and satisfy the \zz{defining relations} \begin{align*} \sigma_i \sigma_j &=\sigma_j\sigma_i \quad \hbox{for} \quad i\leq N-3, j \geq i + 2 , \\ \sigma_i\sigma_{i+1}\sigma_i &= \sigma_{i+1}\sigma_i \sigma_{i+1} \quad \hbox{for} \quad i\leq N-2. \end{align*} \zz{Thus, a character} of the braid group assigns the same complex number $e^{i \beta}$ to each generator, and therefore, according to Assertion~\ref {a:scalar}, \zz{each choice of $\beta$ corresponds to a Bohmian dynamics;} two-dimensional bosons correspond to $\beta = 0$ and two- dimensional fermions to $\beta = \pi$. The particles corresponding to the other possibilities are usually called \emph{anyons}. They were first suggested in \cite{LM77}, and their investigation began in earnest with \cite{GMS81, Wi82}. See \cite{Mor92} for some more details and references.
\section{Vector-Valued Wave Functions on the Covering Space} \label{s:periodic2} \label{s:vectorvalued}
\label{s:bundlevalued}
\zz{The analysis of \Sect{\ref{sec:covering}} can be carried over with little change to the case of vector-valued wave functions, $\psi(q)\in W$. In this case, however, the topological factors may be given by any endomorphisms $\Gamma_{\sigma}$ of $W$ that form a representation of $\ensuremath{Cov(\covspa, \gencon)}$ and need not be restricted to characters, a possibility first mentioned in \cite{Sch81}, Notes to Section 23.3. Rather than directly considering this case, we focus instead on one that is a bit more general and that will require a new sort of topological factor, that of wave functions that are sections of a vector bundle.} The topological factors \zz{for this case} will be expressed as \emph{periodicity sections}, i.e., parallel unitary sections of the endomorphism bundle indexed by the covering group \z{and} satisfying a certain composition law, or, equivalently, as \emph{holonomy-twisted representations} of $\fund{\gencon}$.
If $E$ is a vector bundle over \ensuremath{\mathcal{Q}}, then the lift of $E$, denoted by $\wh{E}$, is a vector bundle over \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}; the fiber space at \ensuremath{\hat{q}}\ is defined to be the fiber space of $E$ at $\ensuremath{q}$, $\wh{E}_{\ensuremath{\hat{q}}} \ensuremath{:=} E_{\ensuremath{q}}$, where $\ensuremath{q} = \ensuremath{\gr }(\ensuremath{\hat{q}})$. It is important to realize that with this construction, it makes sense to ask whether $v \in \wh{E}_{\ensuremath{\hat{q}}}$ is equal to $w \in \wh{E}_{\hat{r}}$ whenever $\ensuremath{\hat{q}}$ and $\hat{r}$ are elements of the same covering fiber. Equivalently, $\wh{E}$ is the pull-back of $E$ through $\ensuremath{\gr }: \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}} \to \gencon$. As a particular example, the lift of the tangent bundle of \ensuremath{\mathcal{Q}}\ to \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}\ is canonically isomorphic to the tangent bundle of \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}. Sections of $E$ or $E\otimes E^*$ can be lifted to sections of $\wh{E}$ respectively $\wh{E} \otimes \wh{E}^*$.
If $E$ is a Hermitian\ vector bundle, then so is $\wh{E}$. The wave function $\psi$ that we consider here is a section of $\wh{E}$, so that $\psi($\ensuremath{\hat{q}}$)$ is a vector in the $\ensuremath{\hat{q}}$-dependent Hermitian vector space $\wh{E}_{\ensuremath{\hat{q}}}$. $V$ is a section of the bundle $E \otimes E^*$, i.e., $V(q)$ is an element of $E_q \otimes E_q^*$. To indicate that every $V(q)$ is a Hermitian endomorphism of $E_q$, we say that $V$ is a \z{Hermitian section} of $E \otimes E^*$.
Since $\psi(\ensuremath{\sigma} \ensuremath{\hat{q}})$ and $\psi(\ensuremath{\hat{q}})$ lie in the same space $E_q = \wh{E}_{\ensuremath{\hat{q}}}= \wh{E}_{\ensuremath{\sigma} \ensuremath{\hat{q}}}$, a periodicity condition can be of the form \begin{equation}\label{e:percon06}
\psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) = \Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}}) \, \psi (\ensuremath{\hat{q}}) \end{equation} for $\ensuremath{\sigma} \in \ensuremath{Cov(\covspa, \gencon)}$, where $\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})$ is an endomorphism $E_q \to E_q$. By the same argument as in \eqref{e:cccargument}, the condition for \eqref{e:percon06} to be possible, if $\psi(\ensuremath{\hat{q}})$ can be any element of $\wh{E}_{\ensuremath{\hat{q}}}$, is the composition law \begin{equation}\label{e:compo06}
\Gamma_{\deck1 \deck2}(\ensuremath{\hat{q}}) = \Gamma_{\deck1} (\deck2 \ensuremath{\hat{q}}) \,
\Gamma_{\deck2} (\ensuremath{\hat{q}}). \end{equation} Note that this law differs from the one $\Gamma(\ensuremath{\hat{q}})$ would satisfy if it were a representation, which reads $\Gamma_{\sigma_1 \sigma_2} (\ensuremath{\hat{q}}) = \Gamma_{\sigma_1} (\ensuremath{\hat{q}}) \, \Gamma_{\sigma_2} (\ensuremath{\hat{q}})$, \z {since in general $\Gamma (\sigma\ensuremath{\hat{q}})$ need not be the same as $\Gamma (\ensuremath{\hat{q}})$} .
For periodicity \eqref{e:percon06} to be preserved under the Schr\"odinger evolution, \begin{equation}\label{e:sch06}
i\hbar \frac{\partial \psi}{\partial t} (\ensuremath{\hat{q}}) = -\tfrac {\hbar^2}
{2} \ensuremath{\Delta} \psi(\ensuremath{\hat{q}}) + \wh{V}(\ensuremath{\hat{q}}) \, \psi(\ensuremath{\hat{q}}), \end{equation} we need that multiplication by $\Gamma_\ensuremath{\sigma} (\ensuremath{\hat{q}})$ \z{commute} with the Hamiltonian. Observe that \begin{equation}\label{HGamma}
[H,\Gamma_\ensuremath{\sigma}]\psi(\ensuremath{\hat{q}}) = -\tfrac{\hbar^2}{2} (\ensuremath{\Delta}
\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})) \psi(\ensuremath{\hat{q}}) - \hbar^2 (\nabla
\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})) \cdot (\nabla \psi(\ensuremath{\hat{q}})) +
[\wh{V}(\ensuremath{\hat{q}}),\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})] \, \psi(\ensuremath{\hat{q}}). \end{equation} Since we can choose $\psi$ such that, for any one particular $\ensuremath{\hat{q}}$, $\psi(\ensuremath{\hat{q}})=0$ and $\nabla \psi(\ensuremath{\hat{q}})$ is any element of $\mathbb{C} T_ {\ensuremath{\hat{q}}} \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}} \otimes E_q$ we like, we must have that \begin{equation}\label{e:parallel06}
\nabla \Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}}) =0 \end{equation} for all $\ensuremath{\sigma}\in \ensuremath{Cov(\covspa, \gencon)}$ and all $\ensuremath{\hat{q}} \in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$, \z{i.e., that $\Gamma_\sigma$ is parallel.} Inserting this in \eqref{HGamma}, the first two terms on the right hand side vanish. Since we can choose for $\psi(\ensuremath{\hat{q}})$ any element of $E_q$ we like, we must have that \begin{equation}\label{e:commute06}
[\wh{V}(\ensuremath{\hat{q}}),\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})]=0 \end{equation} for all $\ensuremath{\sigma}\in \ensuremath{Cov(\covspa, \gencon)}$ and all $\ensuremath{\hat{q}} \in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$. Conversely, assuming \eqref{e:parallel06} and \eqref{e:commute06}, we obtain that $\Gamma_\ensuremath{\sigma}$ commutes with $H$ for every $\ensuremath{\sigma}\in \ensuremath{Cov(\covspa, \gencon)}$, so that the periodicity \eqref{e:percon06} is preserved.
\begin{comment} In this case, we have for every $q$ a unitary representation $\Gamma(q)$ of $\ensuremath{Cov(\covspa, \gencon)}$ on $E_q$ that commutes with $V(q)$; in addition, by \eqref{e:parallel06}, $\Gamma$ is parallel (covariantly constant). Another way of viewing $\Gamma$ is this: Denoting the unitary group of $E_q$ by $U(E_q)$, we can form the group bundle $U(E)$, whose fiber at $q$ is the group $U(E_q)$. The sections of this group bundle form an (infinite-dimensional) group under pointwise multiplication, and the parallel sections form a (finite-dimensional) subgroup. We say that a section $A$ of $E \otimes E^*$ \emph{commutes pointwise with $V$}, or simply \emph{commutes with $V$}, if at every $q \in \ensuremath{\mathcal{Q}}$, $A(q)$ and $V(q)$ commute. The parallel sections of $U(E)$ that commute with $V$ form a subgroup of the group of all parallel sections of $U(E)$, since commuting with $V$ is inherited by products. $\Gamma$ is, by \eqref{e:parallel06} and \eqref{e:commute06}, an element of that subgroup. \end{comment}
{}From \eqref{e:percon06} and \eqref{e:parallel06} it follows that $\nabla \psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) = (\ensuremath{\sigma}^* \otimes \Gamma_\ensuremath{\sigma} (\ensuremath{\hat{q}})) \nabla \psi(\ensuremath{\hat{q}})$. If every $\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})$ is \emph{unitary}, as we assume from now on, the velocity field $\hat{v}^\psi$ on \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}\ associated with $\psi$ according to \begin{equation}
\hat{v}^\psi (\ensuremath{\hat{q}}) := \hbar \, \ensuremath{\mathrm{Im}} \,
\frac{(\psi,\nabla\psi)}{(\psi,\psi)} (\ensuremath{\hat{q}}) \end{equation} is projectable, $\hat{v}^\psi(\ensuremath{\sigma} \ensuremath{\hat{q}}) = \ensuremath{\sigma}^* \hat{v}^\psi(\ensuremath{\hat{q}})$, and gives rise to a velocity field $v^\psi$ on \ensuremath{\mathcal{Q}}. We let the configuration move according to $v^{\psi_t}$, \begin{equation}\label{e:bohm06}
\frac{dQ_t}{dt} = v^{\psi_t}(Q_t) = \hbar \, \ensuremath{\gr }^* \Bigl( \ensuremath{\mathrm{Im}} \,
\frac{(\psi,\nabla \psi)}{(\psi,\psi)} \Bigr) (Q_t). \end{equation}
\begin{defn}
Let $E$ be a Hermitian\ bundle over the manifold $\gencon$. A
\emph{periodicity section} $\Gamma$ over $E$ is a family indexed by
$\ensuremath{Cov(\covspa, \gencon)}$ of unitary parallel sections $\Gamma_\sigma$ of $\wh{E}
\otimes \wh{E}^*$ satisfying the composition law \eqref{e:compo06}. \end{defn}
Since $\Gamma_\ensuremath{\sigma}(\ensuremath{\hat{q}})$ is unitary, one sees as before that the probability distribution \begin{equation}\label{e:equi06}
\rho(q) = (\psi(\ensuremath{\hat{q}}),\psi(\ensuremath{\hat{q}})) \end{equation} does not depend on the choice of $\ensuremath{\hat{q}} \in \ensuremath{\gr }^{-1}(q)$ and is equivariant.
As usual, we define for any periodicity section $\Gamma$ the Hilbert space $L^2(\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}, \wh{E}, \Gamma)$ to be the set of measurable sections $\psi$ of $\wh{E}$ (modulo changes on null sets) satisfying \eqref{e:percon06} with \begin{equation}
\int_\ensuremath{\mathcal{Q}} dq \, (\psi(\ensuremath{\hat{q}}),\psi(\ensuremath{\hat{q}})) < \infty, \end{equation} endowed with the scalar product \begin{equation}
\langle \phi, \psi \rangle = \int_\ensuremath{\mathcal{Q}} dq \,
(\phi(\ensuremath{\hat{q}}),\psi(\ensuremath{\hat{q}})). \end{equation} As before, the value of the integrand at $q$ is independent of the choice of $\ensuremath{\hat{q}} \in \ensuremath{\gr }^{-1}(q)$.
We summarize the results of our reasoning.
\begin{assertion}\label{a:bundle}
Given a Hermitian\ bundle $E$ over the Riemannian manifold $\ensuremath{\mathcal{Q}}$ and
a \z{Hermitian section} $V$ of $E \otimes E^*$, there is a
Bohmian dynamics for each periodicity section $\Gamma$ commuting
(pointwise) with $\wh{V}$ \z{(cf. (\ref{e:commute06}))}; it is defined by \eqref{e:percon06},
\eqref{e:sch06}, and \eqref{e:bohm06}, where the wave function
$\psi_t$ lies in $L^2(\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}, \wh{E}, \Gamma)$ and has norm 1. \end{assertion}
Every character $\gamma$ of \z{$\ensuremath{Cov(\covspa, \gencon)}$ (or of $\fund{\gencon}$)} defines a periodicity section by setting \begin{equation}\label{Gammagamma2} \Gamma_\sigma (\ensuremath{\hat{q}}) := \gamma_\sigma \ensuremath{\mathrm{Id}}_{\wh{E}_{\ensuremath{\hat{q}}}}. \end{equation} It commutes with every potential $V$. Conversely, a periodicity section $\Gamma$ that commutes with every potential must be such that every $\Gamma_\sigma (\ensuremath{\hat{q}})$ is a multiple of the identity, $\Gamma_\sigma (\ensuremath{\hat{q}}) = \gamma_\sigma
(\ensuremath{\hat{q}}) \, \ensuremath{\mathrm{Id}}_{\wh{E}_{\ensuremath{\hat{q}}}}$. By unitarity, $|\gamma_\sigma| =1$; by parallelity \eqref{e:parallel06}, $\gamma_\sigma (\ensuremath{\hat{q}}) = \gamma_\sigma$ must be constant; by the composition law \eqref{e:compo06}, $\gamma$ must be a homomorphism, and thus a character.
\z{We briefly indicate} how a periodicity section $\Gamma$ corresponds to something like a representation of $\fund{\gencon}$. Fix a $\ensuremath{\hat{q}} \in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$. \z{Then $\ensuremath{Cov(\covspa, \gencon)}$ can be identified with $\fund{\gencon}=\fund{\gencon,\ensuremath{\gr }(\ensuremath{\hat{q}})}$ via $\varphi_{\ensuremath{\hat{q}}}$.} Since the sections $\Gamma_\sigma$ of $\wh{E} \otimes \wh{E}^*$ are parallel, $\Gamma_\sigma(\hat{r})$ is determined for every $\hat{r}$ by $\Gamma_\sigma(\ensuremath{\hat{q}})$. \z{(Note in particular that the parallel transport $\Gamma_\sigma(\tau\ensuremath{\hat{q}})$ of $\Gamma_\sigma(\ensuremath{\hat{q}})$ from $\ensuremath{\hat{q}}$ to $\tau\ensuremath{\hat{q}}, \tau\in \ensuremath{Cov(\covspa, \gencon)}$, may differ from $\Gamma_\sigma(\ensuremath{\hat{q}}) $.)} Thus, the periodicity section $\Gamma$ is completely determined by the endomorphisms $\Gamma_\sigma := \Gamma_\sigma(\ensuremath{\hat{q}})$ of $E_q$, $ \sigma \in \ensuremath{Cov(\covspa, \gencon)}$, which satisfy the composition law \begin{equation}\label{twistedrep}
\Gamma_{\sigma_1 \sigma_2} = \ensuremath{h}_{\alpha_2} \Gamma_ {\sigma_1}
\ensuremath{h}_{\alpha_2}^{-1} \Gamma_{\sigma_2}\,, \end{equation} \z{where $\alpha_2$} is any loop in $\gencon$ based at $\ensuremath{\gr }(\ensuremath{\hat{q}}) $ whose lift starting at $\ensuremath{\hat{q}}$ leads to $\sigma_2 \ensuremath{\hat{q}}$, and $\ensuremath{h}_{\alpha_2}$ is the associated holonomy endomorphism of $E_q$. Since \eqref{twistedrep} is not the composition law $\Gamma_{\sigma_1 \sigma_2} = \Gamma_{\sigma_1} \Gamma_{\sigma_2}$ of a representation, the $\Gamma_\sigma$ \z{form, not} a representation of $\fund{\gencon}$, \z{ but} what we call a \emph{holonomy-twisted representation}.
\zz{The situation where the wave function assumes values in a fixed Hermitian space $\ensuremath{W}$,} instead of a bundle, corresponds to the trivial Hermitian\ bundle $E = \ensuremath{\mathcal{Q}} \times \ensuremath{W}$ (i.e., with the trivial connection, for which parallel transport is the identity on $\ensuremath{W}$). Then, parallelity \eqref{e:parallel06} implies that $\Gamma_\ensuremath{\sigma} (\hat{r}) = \Gamma_\ensuremath{\sigma} (\hat{q})$ for any $\hat{r}, \hat{q} \in \ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$, or $\Gamma_\ensuremath{\sigma} (\hat{q}) = \Gamma_\ensuremath{\sigma}$, so that \eqref{e:compo06} becomes the usual \zz{composition law $\Gamma_{\ensuremath{\sigma}_1 \ensuremath{\sigma}_2} = \Gamma_{\ensuremath{\sigma}_1} \Gamma_{\ensuremath{\sigma}_2}$ and $\Gamma$ is a unitary representation of $\ensuremath{Cov(\covspa, \gencon)}$.}
\zz{The most important case of topological factors that are characters is provided by identical particles {\it with spin}. In fact, for this case, Assertion~\ref{a:bundle} entails the same conclusions we arrived at the end of Section \ref{sec:covering}, even for particles with spin. To understand how this comes about, consider the potential occurring in the Pauli equation for $N$ identical particles with spin, \begin{equation}\label{pauli2} V(q) = -\mu\sum_{\ensuremath{\boldsymbol{q}} \in q} \boldsymbol{B}(\ensuremath{\boldsymbol{q}}) \cdot \boldsymbol{\sigma}_{\ensuremath{\boldsymbol{q}}} \end{equation} on the spin bundle \eqref{spinbundle} over ${}^N \RRR^3$, with $\boldsymbol{\sigma}_{\ensuremath{\boldsymbol{q}}}$ the vector of spin matrices acting on the spin space of the particle at $\ensuremath{\boldsymbol{q}}$. Clearly, the algebra generated by $\{V(q)\}$ arising from all possible choices of the magnetic field $\boldsymbol{B}$ is $\mathrm{End}(E_q)$. Thus the only holonomy-twisted representations that define a dynamics for all magnetic fields are those given by a character.\footnote{In fact, it can be shown \cite {topid2} that the only holonomy-twisted representations for a magnetic field $\boldsymbol{B}$ that is not parallel must be a character.}}
An example of a topological factor that is not a character is provided by the Aharonov--Casher variant \cite{AC84} of the Aharonov--Bohm effect, according to which a neutral spin-1/2 particle that carries a magnetic moment $\mu$ acquires a nontrivial phase while encircling a charged wire $\mathcal{C} $. A way of understanding how this effect comes about is in terms of the non-relativistic Hamiltonian $\rawH$ based on a nontrivial connection $\nabla = \nabla_\mathrm{trivial} - \tfrac{i\mu}{\hbar} \boldsymbol{E} \times \boldsymbol{\sigma}$ on the vector bundle $\mathbb{R}^3 \times \mathbb{C}^2$. Suppose the charge density $\varrho(\ensuremath{\boldsymbol{q}})$ is invariant under translations in the direction $\boldsymbol{e}\in \mathbb{R}^3$, $\boldsymbol{e}^2=1$ in which the wire is oriented. Then the charge per unit length $\lambda$ is given by the integral \begin{equation} \lambda = \int_D \varrho(\ensuremath{\boldsymbol{q}})\, dA \end{equation} over the cross-section disk $D$ in any plane perpendicular to $\boldsymbol{e}$. The restriction of this connection, outside of $\mathcal{C} $, to any plane $\Sigma$ orthogonal to the wire turns out to be flat\footnote{The curvature is $\Omega = d_\mathrm{trivial} \boldsymbol{\omega} + \boldsymbol{\omega} \wedge \boldsymbol{\omega} $ with $\boldsymbol{\omega} = -i\frac{\mu}{\hbar} \boldsymbol{E} \times \boldsymbol{\sigma}$. The 2-form $\Omega$ is dual to the vector $\nabla_\mathrm{trivial} \times\boldsymbol{\omega} + \boldsymbol{\omega} \times \boldsymbol{\omega} =i\frac{\mu}{\hbar} (\nabla\cdot\boldsymbol{E})\boldsymbol{\sigma} - i\frac{\mu}{\hbar}(\boldsymbol{\sigma}\cdot\nabla)\boldsymbol{E} - 2i (\frac{\mu}{\hbar})^2(\boldsymbol{\sigma} \cdot \boldsymbol{E}) \boldsymbol{E}.$ Outside the wire, the first term vanishes and, noting that $\boldsymbol{E}\cdot\boldsymbol{e} =0,$ the other two terms have vanishing component in the direction of $\boldsymbol{e}$ and thus vanish when integrated over any region within an orthogonal plane.} so that its restriction to the intersection $\gencon$ of $\mathbb{R}^3\setminus\mathcal{C}$ with the orthogonal plane can be replaced, as in the Aharonov--Bohm case, by the trivial connection if we introduce a periodicity condition on the wave function with the topological factor \begin{equation} \Gamma_1 = \exp \Bigl(-\frac{4\pi i\mu\lambda}{\hbar}\, \boldsymbol{e}\cdot\boldsymbol{\sigma} \Bigr) \,. \end{equation} \zz{In this way} we obtain a representation $\Gamma : \fund{\gencon} \to SU(2)$ that is not given by a character.
Another example of a topological factor that is not a character and which can be generalized to a nonabelian representation is provided by a higher-dimensional version of the Aharonov--Bohm effect: one may replace the \z{vector potential} in the Aharonov--Bohm setting by a non-abelian gauge field (\`a la Yang--Mills) \z{whose field strength (curvature) vanishes} outside a cylinder $\mathcal{C}$ but not inside; the value space $\ensuremath{W}$ (now corresponding not to spin but to, say, quark color) has dimension greater than one, and the difference between two wave packets that have passed $\mathcal{C}$ on different sides is given in general, not by a phase, but by a unitary endomorphism $\Gamma$ of $\ensuremath{W}$. In this example, involving one \x{cylinder}, the representation $\Gamma$, though given by matrices that are not multiples of the identity, is nonetheless abelian, since $\fund{\gencon} \cong \mathbb{Z}$ is an abelian group. However, when two or more cylinders are considered, we obtain a non-abelian representation $\Gamma$, since when $\gencon$ is $\mathbb{R}^3$ minus two disjoint solid cylinders its fundamental group is isomorphic to the non-abelian group $\mathbb{Z} \ast \mathbb{Z}$, where $\ast$ denotes the free product of groups, generated by loops $\sigma_1$ and $\sigma_2$ surrounding one or the other of the cylinders. One can easily arrange that the matrices $\Gamma_{\sigma_i}$ corresponding to loops $ \sigma_i$, $i=1,2$, fail to commute, so that $\Gamma$ is nonabelian.
Our last example involves a holonomy-twisted representation $\Gamma$ that is not a representation in the ordinary sense. Consider $N$ fermions, each as in the previous examples, moving in $M=\mathbb{R}^3\setminus \cup_i\mathcal{C}_i$, where $\mathcal{C}_i$ are one or more disjoint solid cylinders. More generally, consider $N$ fermions, each having 3-dimensional configuration space $M$ and value space $W$ (which may incorporate spin or ``color'' or both). Then the configuration space $\gencon$ for the $N$ fermions is the set ${}^N\! M$ of all $N$-element subsets of $M$, \x{with universal covering space $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}=\widehat {{}^N\! M} = {\widehat{M}}^N\setminus \Delta$ with $\Delta$} the extended diagonal, the set of points in ${\widehat{M}}^N$ whose projection to $M^N$ lies in its coincidence set. Every diffeomorphism $\sigma\in Cov(\widehat{{}^N\! M}, {}^N\! M)$ can be expressed as a product \begin{equation}\label{prod} \sigma=p\tilde\sigma \end{equation} where $p \in S_N$ and $\tilde\sigma = (\sigma^{(1)},\dots,\sigma^{(N)})\in Cov(\widehat{M},M)^N $ and these act on $\ensuremath{\hat{q}}=(\hat\ensuremath{\boldsymbol{q}}_1,\dots, \hat\ensuremath{\boldsymbol{q}}_N)$ $\in\widehat{M}^N$ as follows: \begin{equation}\label{tildesigmaq} \tilde\sigma\ensuremath{\hat{q}}=(\sigma^{(1)}\hat\ensuremath{\boldsymbol{q}}_1,\dots, \sigma^{(N)}\hat\ensuremath{\boldsymbol{q}}_N) \end{equation} and \begin{equation}\label{pq} p\ensuremath{\hat{q}}=(\hat\ensuremath{\boldsymbol{q}}_{p^{-1}(1)},\dots, \hat\ensuremath{\boldsymbol{q}}_{p^{-1}(N)}). \end{equation} Thus \begin{equation}\label{sigmaq} \sigma\ensuremath{\hat{q}}=(\sigma^{(p^{-1}(1))}\hat\ensuremath{\boldsymbol{q}}_{p^{-1}(1)},\dots, \sigma^{(p^{-1}(N))}\hat\ensuremath{\boldsymbol{q}}_{p^{-1}(N)}). \end{equation} Moreover, the representation (\ref{prod}) of $\sigma$ is unique. Thus, since \begin{equation}\label{sdp} \sigma_1\sigma_2=p_1\tilde\sigma_1p_2\tilde\sigma_2=(p_1p_2)(p_2^{-1}\tilde\sigma_1p_2\tilde\sigma_2) \end{equation} with $p_2^{-1}\tilde\sigma_1p_2=(\sigma_1^{(p_2(1))},\dots,\sigma_1^{(p_2(N))})\in Cov(\widehat{M},M)^N$, we find that $Cov(\widehat{{}^N\! M}, {}^N\! M)$ is a semidirect product of $S_N$ and $Cov(\widehat{M},M)^N$, with product given by \begin{equation}\label{sprod} \sigma_1\sigma_2=(p_1,\tilde\sigma_1)(p_2,\tilde\sigma_2)=(p_1p_2,p_2^{-1}\tilde\sigma_1 p_2\tilde\sigma_2). \end{equation}
Wave functions for the $N$ fermions are sections of the lift $\widehat E$ to $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ of the bundle $E$ over $\gencon$ with fiber \begin{equation}\label{nw} E_q=\bigotimes_{\ensuremath{\boldsymbol{q}} \in q} W \end{equation} and (nontrivial) connection inherited from the trivial connection on $M\times W$. If the dynamics for $N=1$ involves wave functions on $\widehat{M}$ obeying (\ref {e:percon06}) with topological factor $\Gamma_\ensuremath{\sigma}(\hat{\boldsymbol{q}})=\Gamma_\ensuremath{\sigma}$ given by a unitary \x{representation of $\fund{M}$ (i.e., independent of $\hat{\boldsymbol{q}}$), then} the $N$ fermion wave \x{function obeys} (\ref{e:percon06}) with topological factor
\begin{equation}\label{biggamma} \Gamma_{\sigma}(\ensuremath{\hat{q}})=\mathrm{sgn}(p)\bigotimes_{\ensuremath{\boldsymbol{q}} \in \pi(\ensuremath{\hat{q}})}\Gamma_{\sigma^{(i_{\ensuremath{\hat{q}}}(\ensuremath{\boldsymbol{q}}))}} \equiv \mathrm{sgn}(p)\Gamma_{\tilde\sigma}(\ensuremath{\hat{q}}) \end{equation} where for $\ensuremath{\hat{q}}=(\hat\ensuremath{\boldsymbol{q}}_1,\dots,\hat\ensuremath{\boldsymbol{q}}_N), \ \pi(\ensuremath{\hat{q}})=\{\pi_M(\hat\ensuremath{\boldsymbol{q}}_1),\dots,\pi_M(\hat\ensuremath{\boldsymbol{q}}_N)\}$ and $i_{\ensuremath{\hat{q}}}(\pi_M(\hat\ensuremath{\boldsymbol{q}}_j))=j$. Since
\begin{equation}\label{prod2} \Gamma_{\tilde\sigma_1 \tilde\sigma_2}(\ensuremath{\hat{q}}) = \Gamma_{\tilde\sigma_1} (\ensuremath{\hat{q}}) \, \Gamma_{\tilde\sigma_2} (\ensuremath{\hat{q}}) \end{equation} we find, using (\ref{sprod}) and (\ref{prod2}), that
\begin{subequations}\label{htr} \begin{align} \Gamma_{\sigma_1 \sigma_2}(\ensuremath{\hat{q}})&= \mathrm{sgn}(p_1p_2)\Gamma_{p_2^{-1}\tilde\sigma_1 p_2\tilde\sigma_2}(\ensuremath{\hat{q}})\\ &=\mathrm{sgn}(p_1)\Gamma_{p_2^{-1}\tilde\sigma_1 p_2}(\ensuremath{\hat{q}})\mathrm{sgn}(p_2)\Gamma_{\tilde\sigma_2} (\ensuremath{\hat{q}})\\ &=P_2\Gamma_{\sigma_1}(\ensuremath{\hat{q}})P_2^{-1}\Gamma_{\sigma_2} (\ensuremath{\hat{q}}), \end{align} \end{subequations} which agrees with (\ref{twistedrep}) since the holonomy on the bundle $E$ is given by permutations $P$ acting on the tensor product (\ref{nw}).
\section{Conclusions} \label{sec:conclusions}
We have \zz{investigated} the possible quantum theories on a topologically nontrivial configuration space $\gencon$ from the point of view of Bohmian mechanics, which is fundamentally concerned with the motion of matter in physical space, represented by the evolution of a point in configuration space.
Our goal was \zz{to find} all Bohmian dynamics in \ensuremath{\mathcal{Q}}, where the wave functions may be sections of a Hermitian\ vector bundle $E$. What ``all'' Bohmian dynamics means is not obvious; we have followed one approach to what it can mean; other approaches will be described in \zz{future} works. The present approach uses \z{ wave functions $\psi$} that are defined on the universal covering space $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ of \ensuremath{\mathcal{Q}}\ and satisfy a periodicity condition ensuring that the Bohmian velocity vector field on $\ensuremath{\widehat{\ensuremath{\mathcal{Q}}}}$ defined in terms of $\psi$ can be projected to \ensuremath{\mathcal{Q}}. We have arrived in this way at a natural class of Bohmian dynamics beyond the immediate Bohmian dynamics. Such a dynamics is defined by a potential and some information encoded in ``topological factors,'' which form either a character (one-dimensional unitary representation) of the fundamental group of \z{the} configuration space, $\fund{\gencon}$, or a more general algebraic-geometrical object, a \zz{holonomy-twisted representation} $\Gamma$. Only those dynamics associated with characters are compatible with \emph{every} potential, as one would desire for \z{what} could be \z{considered} a version of quantum mechanics in $\gencon$. We have thus arrived at the known fact that for every character of $\fund{\gencon}$ there is a version of quantum mechanics in $\gencon$. A consequence, which will be discussed in detail in a sister paper \cite{topid2}, is the symmetrization postulate for identical particles. These different quantum theories emerge naturally when one contemplates the possibilities for defining a Bohmian dynamics in $\gencon$.
\section*{Acknowledgments}
We thank Kai-Uwe Bux (Cornell University), Frank Loose (Eberhard-Karls-Universit\"at T\"ubingen, Germany) and Penny Smith (Lehigh University) for helpful discussions.
R.T.\ gratefully acknowledges support by the German National Science Foundation (DFG) through its Priority Program ``Interacting Stochastic Systems of High Complexity'', by INFN, and by the European Commission through its 6th Framework Programme ``Structuring the European Research Area'' and the contract Nr. RITA-CT-2004-505493 for the provision of Transnational Access implemented as Specific Support Action. The work of S.~Goldstein was supported in part by NSF Grant DMS-0504504. N.Z.\ gratefully acknowledges support by INFN.
We appreciate the hospitality that some of us have enjoyed, on more than one occasion, at the Mathematisches Institut of Ludwig-Maximilians-Universit\"at M\"unchen (Germany), the Dipartimento di Fisica of Universit\`a di Genova (Italy), the Institut des Hautes \'Etudes Scientifiques in Bures-sur-Yvette (France), and the Mathematics Department of Rutgers University (USA).
Finally we would like to thank an anonymous referee for helpful criticisms on an earlier version of this article.
\end{document} | arXiv |
Asymptotic stability of traveling fronts to a chemotaxis model with nonlinear diffusion
DCDS-B Home
The coupled 1:2 resonance in a symmetric case and parametric amplification model
doi: 10.3934/dcdsb.2020221
Mean-square delay-distribution-dependent exponential synchronization of chaotic neural networks with mixed random time-varying delays and restricted disturbances
Quan Hai 1,2, and Shutang Liu 1,,
College of Control Science and Engineering, Shandong University, Jinan 250061, China
College of Mathematics Science Inner Mongolia Normal University Hohhot 010022, China
* Corresponding author: [email protected]
Received December 2019 Revised April 2020 Published July 2020
Fund Project: The first author is supported by NSF of china 61533011, U1806203
Figure(4)
This paper investigates the delay-distribution-dependent exponential synchronization problem for a class of chaotic neural networks with mixed random time-varying delays as well as restricted disturbances. Given the probability distribution of the time-varying delay, stochastic variable that satisfying Bernoulli distribution is formulated to produce a new system which includes the information of the probability distribution. Based on the Lyapunov-Krasovskii functional method, the Jensen's integral inequality theory and linear matrix inequality (LMI) technique, several delay-distribution-dependent sufficient conditions are developed to guarantee that the chaotic neural networks with mixed random time-varying delays are exponentially synchronized in mean square. Furthermore, the derived results are given in terms of simplified LMI, which can be straightforwardly solved by Matlab. Finally, two numerical examples are proposed to demonstrate the feasibility and the effectiveness of the presented synchronization scheme.
Keywords: Exponential synchronization, stochastic neural networks, mixed delays, disturbance constraints, linear matrix inequality.
Mathematics Subject Classification: Primary: 93B36, 93B52, 93C10, 93C55.
Citation: Quan Hai, Shutang Liu. Mean-square delay-distribution-dependent exponential synchronization of chaotic neural networks with mixed random time-varying delays and restricted disturbances. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020221
P. Balasubramaniam, V. Vembarasan and R. Rakkiyappan, Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays, Commun. Nonlinear Sci. Numer. Simul., 16 (2011), 2109-2129. doi: 10.1016/j.cnsns.2010.08.024. Google Scholar
H. Bao and J. Cao, Delay-distibution-dependent state estimation for discrete-time stochastic neural networks with random delay, Neural Network, 24 (2011), 19-28. Google Scholar
J. Cai, L. Shen and F. Wu, Adaptive control of a class of non-linear systems preceded by backlash-like hysteresis, Math. Struct. Comput. Sci., 24 (2014), e240504, 14 pp. doi: 10.1017/S0960129512000473. Google Scholar
A. Cichoki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley and Sons, 2003. Google Scholar
Q. Gan and Y. Liang, Synchronization of chaotic neural networks with time delay in the leakage term and parametric uncertainties based on sampled-data control, J. Franklin Inst., 349 (2012), 1955-1971. doi: 10.1016/j.jfranklin.2012.05.001. Google Scholar
M. Gilli, Strange attractors in delayed cellular neural networks, IEE Trans. Circuits Syst. I: Fundam. Theor. Appl., 40 (1993), 849-853. doi: 10.1109/81.251826. Google Scholar
K. Gu, V. L. Kharitonov and J. Chen, Stability of Time-Delay Systems, Birkhäuser Boston, Inc., Boston, MA, 2003. doi: 10.1007/978-1-4612-0039-0. Google Scholar
B. Hu, Q. Song, K. Li, Z. Zhao, Y. Liu and Fuad E. Alsaadi, Global $\mu$-synchronization of impulsive complex-valued neural networks with leakage delay and mixed time-varying delays, Neurocomputing, 307 (2018), 106-116. Google Scholar
T. Kwork and K. A. Smith, A unified framework for chaotic neural networks approaches to combinatorial optimization, IEE Trans. Neural Netw., 10 (1999), 978-981. Google Scholar
X. Li and S. Song, Research on synchronization of chaotic delayed neural networks with stochastic perturbation using impulsive control method, Commun. Nonlinear Sci. Numer. Simul., 19 (2014), 3892-3900. doi: 10.1016/j.cnsns.2013.12.012. Google Scholar
J.-N. Li, Y.-J. Su and C.-L. Wen, Stochastic reliable control of a class of networked control systems with actuator faults and input saturation, Int. J. Control, Autom., Syst., 12 (2014), 564-571. doi: 10.1007/s12555-013-0371-7. Google Scholar
Z. X. Liu, H. W. Yang and F. W. Chen, Mean square exponential synchronization of stochastic neutral type chaotic neural networks with mixed delay, International Journal of Mathematical and Computational Sciences, 8 (2011), 1298-1303. Google Scholar
H. Lu, Chaotic attractors in delayed neural networks, Phys. Lett. A, 298 (2002), 109-116. doi: 10.1016/S0375-9601(02)00538-8. Google Scholar
G. Nagamani and S. Ramasamy, Stochastic deissativity and passivity analysis for discrete-time neural networks with probabilistic time-varying delays in leakage term, Appl. Math. Comput., 289 (2016), 237-257. doi: 10.1016/j.amc.2016.05.004. Google Scholar
J. Nilsson, B. Bernhardsson and B. Wittenmark, Stochastic analysis and control of real-time systems with random time delays, Automatica J. IFAC, 34 (1998), 57-64. doi: 10.1016/S0005-1098(97)00170-2. Google Scholar
C. Peng and Y.-C. Tian, Improved delay-dependent robust stability criteria for uncertain systems with interval time-varing delay, IET Control Theory Appl., 2 (2008), 752-761. doi: 10.1049/iet-cta:20070362. Google Scholar
A. Pratap, R. Raja, Ji nde Cao, G. Rajchakit and Fuad E. Alsaadi, Further synchronization in finite time analysis for time-varying delayed fractional order memristive competitive neural networks with leakage delay, Neurocomputing, 317 (2018), 110-126. doi: 10.1016/j.neucom.2018.08.016. Google Scholar
Y. Tang, J. Fang and Q. Miao, On the exponential synchronization of stochastic jumping chaotic neural networks with mixed delays and sector-bounded nonlinearities, Neurocomputing, 721 (2009), 694-701. Google Scholar
G. Velmurugan, R. Rakkiyappan and J. Cao, Finite-time synchronization of fractional-order memristor-based neural networks with time delays, Neural Netw., 73 (2016), 36-46. Google Scholar
J. Wang, K. Shi, Q. Huang, S. Zhong and D. Zhang, Stochastic switched sampled-data control for synchronization of delayed chaotic neural networks with packet dropout, Appl. Math. Comput., 335 (2018), 211-230. doi: 10.1016/j.amc.2018.04.038. Google Scholar
Z. Wang, H. Shu, Y. Liu, D. W. C. Ho and X. Liu, Robust stability analysis of generalized neural networks with dicrete and distributed time delays, Chaos Solitons Fractals, 30 (2006), 886-896. doi: 10.1016/j.chaos.2005.08.166. Google Scholar
W. Wang, M. Yu, X. Luo, L. Liu, M. Yuan and W. Zhao, Synchronization of memristive BAM neural networks with leakage delay and additive time-varying delay components via sampled-data control, Chaos Solitons Fractals, 104 (2017), 84-97. doi: 10.1016/j.chaos.2017.08.011. Google Scholar
H. Wu, X. Zhang, R. Li and R. Yao, Finite-time synchronization of chaotic neural networks with mixed time-varying delays and stochastic disturbance, Memetic Comp., 7 (2015), 231-240. doi: 10.1007/s12293-014-0150-x. Google Scholar
S. Xu and T. Chen, Robust $H_{\infty}$ control for uncertain stochastic systems with state delay, IEEE Trans. Automat. Control, 47 (2002), 2089-2094. doi: 10.1109/TAC.2002.805670. Google Scholar
X. Zhang, X. Lv and X. Li, Sampled-data-based lag synchronization of chaotic delayed neural networks with impulsive control, Nonlinear Dyn., 90 (2017), 2199-2207. doi: 10.1007/s11071-017-3795-4. Google Scholar
C.-D. Zheng, Z. Wei and Z. Wang, Robustly adaptive synchronization for stochastic Markovian neural networks of neutral type with mixed mode-dependent delays, Neurocomputing, 171 (2016), 1254-1264. doi: 10.1016/j.neucom.2015.07.066. Google Scholar
F. Zou and J. A. Nossek, Bifurcation and chaos in cellular neural networks, IEE Trans. Circuits Syst. I: Fundam. Theor. Appl., 40 (1993), 166-173. doi: 10.1109/81.222797. Google Scholar
Figure 1. (a) Chaotic behavior of the neural networks (3). (b) Chaotic behavior of the neural networks (4) without control input $ u(t) $
Figure 2. (a) Chaotic behavior of the neural networks (4). (b)-(c) State trajectories of the neural networks (3) and (4). (d) Synchronization error trajectories of the state variables between the neural networks (3) and (4)
Leslaw Skrzypek, Yuncheng You. Feedback synchronization of FHN cellular neural networks. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2021001
Lars Grüne. Computing Lyapunov functions using deep neural networks. Journal of Computational Dynamics, 2020 doi: 10.3934/jcd.2021006
Yuan Tan, Qingyuan Cao, Lan Li, Tianshi Hu, Min Su. A chance-constrained stochastic model predictive control problem with disturbance feedback. Journal of Industrial & Management Optimization, 2021, 17 (1) : 67-79. doi: 10.3934/jimo.2019099
Junkee Jeon. Finite horizon portfolio selection problems with stochastic borrowing constraints. Journal of Industrial & Management Optimization, 2021, 17 (2) : 733-763. doi: 10.3934/jimo.2019132
Zhimin Li, Tailei Zhang, Xiuqing Li. Threshold dynamics of stochastic models with time delays: A case study for Yunnan, China. Electronic Research Archive, 2021, 29 (1) : 1661-1679. doi: 10.3934/era.2020085
Xiaoxian Tang, Jie Wang. Bistability of sequestration networks. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1337-1357. doi: 10.3934/dcdsb.2020165
Xuemei Chen, Julia Dobrosotskaya. Inpainting via sparse recovery with directional constraints. Mathematical Foundations of Computing, 2020, 3 (4) : 229-247. doi: 10.3934/mfc.2020025
Manuel de León, Víctor M. Jiménez, Manuel Lainz. Contact Hamiltonian and Lagrangian systems with nonholonomic constraints. Journal of Geometric Mechanics, 2020 doi: 10.3934/jgm.2021001
Peizhao Yu, Guoshan Zhang, Yi Zhang. Decoupling of cubic polynomial matrix systems. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 13-26. doi: 10.3934/naco.2020012
Shengxin Zhu, Tongxiang Gu, Xingping Liu. AIMS: Average information matrix splitting. Mathematical Foundations of Computing, 2020, 3 (4) : 301-308. doi: 10.3934/mfc.2020012
Nan Zhang, Linyi Qian, Zhuo Jin, Wei Wang. Optimal stop-loss reinsurance with joint utility constraints. Journal of Industrial & Management Optimization, 2021, 17 (2) : 841-868. doi: 10.3934/jimo.2020001
D. R. Michiel Renger, Johannes Zimmer. Orthogonality of fluxes in general nonlinear reaction networks. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 205-217. doi: 10.3934/dcdss.2020346
Bernold Fiedler. Global Hopf bifurcation in networks with fast feedback cycles. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 177-203. doi: 10.3934/dcdss.2020344
Pedro Aceves-Sanchez, Benjamin Aymard, Diane Peurichard, Pol Kennel, Anne Lorsignol, Franck Plouraboué, Louis Casteilla, Pierre Degond. A new model for the emergence of blood capillary networks. Networks & Heterogeneous Media, 2020 doi: 10.3934/nhm.2021001
Stefan Ruschel, Serhiy Yanchuk. The spectrum of delay differential equations with multiple hierarchical large delays. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 151-175. doi: 10.3934/dcdss.2020321
Shigui Ruan. Nonlinear dynamics in tumor-immune system interaction models with delays. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 541-602. doi: 10.3934/dcdsb.2020282
Yicheng Liu, Yipeng Chen, Jun Wu, Xiao Wang. Periodic consensus in network systems with general distributed processing delays. Networks & Heterogeneous Media, 2020 doi: 10.3934/nhm.2021002
Mark F. Demers. Uniqueness and exponential mixing for the measure of maximal entropy for piecewise hyperbolic maps. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 217-256. doi: 10.3934/dcds.2020217
2019 Impact Factor: 1.27
Quan Hai Shutang Liu | CommonCrawl |
Mathematica Meta
Does Mathematica support an unordered set (e.g. hashset) data structure?
Does Mathematica support any data structure representing an unordered set (without multiplicity) of Mathematica expressions? I would like this data structure to support $O(1)$ insertion, deletion, and membership testing.
Note that according to this answer, Associations only support $O(\log n)$ insertion and deletion.
David ZhangDavid Zhang
$\begingroup$ I think that for all practical purposes, you can consider Association to have O(1) complexity for all those operations. The point is that, implementing on the top-level any such structure would lead to constant factors far greater that log n for any sensible size of the set. You can also try System`Utilities`HashTable, described in e.g. this answer. $\endgroup$ – Leonid Shifrin Mar 19 '15 at 13:28
DeleteDuplicates is another way to build sets, or, more precisely, ordered lists with no duplicates (a friend of mine calls such things "suits," I think a brilliant name). Note the last little minitest here fails:
{DeleteDuplicates[{}],
DeleteDuplicates[{1}] == DeleteDuplicates[{1, 1}],
DeleteDuplicates[{2, 1, 3, 1, 2, 3, 3, 2, 2, 1}] ==
DeleteDuplicates[{1, 2, 3}]}
{{}, True, False}
Easy to fix by composing with Sort:
ClearAll[set];
set = Sort@*DeleteDuplicates;
{set[{}],
set[{1}] == set[{1, 1}],
set[{2, 1, 3, 1, 2, 3, 3, 2, 2, 1}] == set[{1, 2, 3}]}
{{}, True, True}
Here's a specific implementation of the ideas in Leonid's comment:
set[l_List] := Sort@Keys@Association@MapThread[Rule, {l, l}];
Keep or remove the Sort depending on whether you want a set or a suit. However, the Sort introduces more overhead, getting you away from (close to) O(1) perf, as you requested.
EDIT: The difference between set and suit can be important if you're trying to emulate a combinatorial function like Permutations. This function treats duplicate elements as identical, but it is also 'stable', meaning that it doesn't change the orders of inputs. If you try to emulate it using a set instead of a suit, you can get a scrambled answer. For instance, consider
Permutations[{1, 1, 1, 0, 0}] // TeXForm
\begin{array}{ccccc} 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ \end{array}
Let's make our own permutations that can take a collector function as an input, and try it out with suit to check that we get the same answer, in the same order, as with the built-in Permutations:
ClearAll[permutations, set, suit];
suit[l_List] := Keys@Association@MapThread[Rule, {l, l}];
permutations[collector_, {}] := {{}};
permutations[collector_, xs_List] :=
collector[
Flatten[
Table[
With[{
x = xs[[i]],
plucked = Join[xs[[;; i - 1]], xs[[i + 1 ;;]]]},
Prepend[x] /@ permutations[collector, plucked]],
{i, Length[xs]}],
1]];
permutations[suit, {1, 1, 1, 0, 0}] // TeXForm
Now try it with set as the collector:
permutations[set, {1, 1, 1, 0, 0}] // TeXForm
Reb.CabinReb.Cabin
Thanks for contributing an answer to Mathematica Stack Exchange!
Not the answer you're looking for? Browse other questions tagged data-structures or ask your own question.
Struct equivalent in Mathematica?
How to make use of Associations?
Question about designing a particular data structure
What is the fastest way to maintain a large set of expressions?
Implementing a dictionary data structure
Dataset vs an association of associations
Turning table into association for Classify
Implementing core data structures
Could 48V arc across this small gap?
lang-mma
Mathematica is a registered trademark of Wolfram Research, Inc. While the mark is used herein with the limited permission of Wolfram Research, Stack Exchange and this site disclaim all affiliation therewith. | CommonCrawl |
Applied Water Science
June 2018 , 8:88 | Cite as
Impacts of hydrogeochemical processes and anthropogenic activities on groundwater quality in the Upper Precambrian sedimentary aquifer of northwestern Burkina Faso
A. Sako
J. M. Yaro
O. Bamba
This study investigates the hydrogeochemical and anthropogenic factors that control groundwater quality in an Upper Precambrian sedimentary aquifer in the northwestern Burkina Faso. The raw data and statistical and geochemical modeling results were used to identify the sources of major ions in dug well, private borewell and tap water samples. Tap waters were classified as Ca–HCO3 and Ca–Mg–HCO3 types, reflecting the weathering of the local dolomitic limestones and silicate minerals. Dug well waters, with a direct contact with various sources of contamination, were classified as Ca–Na–K–HCO3 type. Two factors that explain 94% of the total variance suggested that water–rock interaction was the most important factor controlling the groundwater chemistry. Factor 1 had high loadings on pH, Ca2+, Mg2+, HCO3−, SO42− and TDS. These variables were also strongly correlated indicating their common geogenic sources. Based on the HCO3−/(HCO3− + SO42−) ratios (0.8–0.99), carbonic acid weathering appeared to control Ca2+, Mg2+, HCO3− and SO42− acquisition in the groundwater. With relatively lower Ca2+ and Mg2+ concentrations, the majority of dug well and borewell waters were soft to moderately hard, whereas tap waters were considered very hard. Thus, the dug well and, to a lesser extent, borewell waters are likely to have a low buffering capacity. Factor 2 had high loadings on Na+, NO3− and Cl−. The strong correlation between Na+ and NO3− and Cl− implied that factor 2 represented the anthropogenic contribution to the groundwater chemistry. In contrast, K+ had moderate loadings on factors 1 and 2, consistent with its geogenic and anthropogenic sources. The study demonstrated that waters from dug wells and borewells were bacteriologically unsafe for human consumption, and their low buffering capacity may favor mobility of potentially toxic heavy metals in the aquifer. Not only very hard tap waters have aesthetic inconvenient, but their consumption may also pose health problems.
Sedimentary aquifer Tap water Dug wells Borewells Water–rock interaction
Following severe droughts in 1970s, a massive internal migration from drier central plateau and northern regions toward a more humid northwestern Burkina Faso has put a tremendous pressure on the regional surface water resources (Kessler and Greerling 1994). The northwestern Burkina Faso has been also subject to adverse effects of climate changes such as erratic precipitations and decrease in seasonal surface water flow, and thus, surface water becomes an unreliable source for water supply. As a result, people have been heavily relying on groundwater for domestic water supply and livestock watering (Derouane and Dakoure 2006; Courtois et al. 2010; Huneau et al. 2011). Traditional hand-dug wells are the main sources of groundwater in the region. In order to meet the ever-increasing demands for water, hundreds of borewells, equipped with hand pumps, were drilled in the Kossi Province one of the four provinces in the northwestern Burkina Faso (Barry et al. 2005) and the site of the present study. The borewells draw groundwater from deep fractured sedimentary rocks, whereas the dug wells abstract shallow groundwater within weathered mantle layers (Collectif 1990).
Although groundwater constitutes an important asset for socioeconomic development of the northwestern Burkina Faso, the hydrogeochemical studies pertaining groundwater quality in this large transboundary aquifer are scanty. The local groundwater quality is likely to be controlled by both natural and anthropogenic factors. Water–rock interaction (i.e., chemical weathering and cation exchange processes) can be the most important natural factor that controls the groundwater quality (Fetter 1994; Appelo and Postma 2005; Li et al. 2016). In contrast, excessive use of fertilizer, non-protection of wells and poor sanitary conditions are potential sources of anthropogenic pollution (Groen et al. 1988; Li et al. 2017; Yameogo and Savadogo 2002; Huneau et al. 2011; Wu et al. 2017). The monitoring of the physicochemical and biological conditions of groundwater is necessary for an efficient water resource management and development of aquifer protection strategies. Therefore, the objectives of the present study were (1) to identify the hydrogeochemical processes and anthropogenic activities that govern the chemical composition of dug wells, private borewells and tap water provided by the public water supply system of an Upper Precambrian sedimentary aquifer, and (2) to evaluate the suitability of the groundwater for human consumption. The findings of this study will contribute to bridging the gap between anthropogenic factors and hydrogeochemical processes that control groundwater quality in a sedimentary and semi-urban setting.
The study area is located in the town of Nouna, the Kossi Province (Northwestern), 306 km of Ouagadougou the capital city of Burkina Faso (Fig. 1a). The area is part of a floodplain of the ephemeral Kossi River basin (Fig. 1b). This plain contains several ponds of variable sizes, separated by elevated zones (200–300 m a.s.l). The local climate is of the north-Sudanian type, characterized by a dry season (October–May) and a wet season (June–September). With an average annual rainfall of 887 mm, the Nouna commune falls in the so-called the Bread Basket of Burkina Faso, where subsistence and cotton farming and livestock bring a substantial income to the populations. As in the whole country, the plain has undergone a marked decrease in rainfall since the 1970s (~ 200 mm), putting a great pressure on water resources. Currently, rainfall is characterized by a great intra- and inter-annual irregularity (Frappart et al. 2009).
a Geographical map of Burkina Faso; b geomorphological map of the Kossi floodplain, showing the study area; c groundwater sampling points superimposed on the simplified local lithological units. The lithology of tap water from the public water supply points may not correspond to their sampling lithology
The area is underlain by Upper Precambrian sedimentary rocks known as the southeast Taoudeni sedimentary formations shared by Mali and Burkina Faso. These formations are essentially made of an alternation of pink siltstones and argillites with glauconite and dolomitic limestone lenses capped with silexite (Ouédraogo 1998). As in the crystalline basement areas that make up 80% of Burkina Faso, two types of discontinuous aquifers are encountered in the study area. A shallow (5–20 m) aquifer located in the weathered lateritic layer, which is superimposed on a deep aquifer within the joined sandstone layers in the sedimentary sequence (CIEH 1976; BILAN D'EAU 1993). The thickness of the deep aquifer is poorly known, and it varies according to the lithology. In contrast to crystalline basement aquifers, the high permeability (1.8 × 106 m/s) of sedimentary rocks makes the southeast Taoudeni sedimentary formations excellent aquifers, with an estimated storage coefficient of 1 × 10−4 and significant yields up to 100 m3/h (Gombert 1998). That is, the only two permanent watercourses in the country (i.e., the Mouhoun and Comoé rivers) are directly fed by springs originated from sedimentary aquifers (Talbaoui 2009).
The local groundwater recharge occurs through direct infiltration of rainwater and indirect infiltration of runoff via depressions, streams and alluvial valleys (Groen et al. 1988; Barry et al. 2005). The regional water table shows a seasonal variation of 1–2 m. The estimated total volume of groundwater in the Nouna commune is 0.4 million m3/year, whereas the renewable resource is about 0.5 million m3/year (MEE 2001). Consequently, groundwater resource development in the commune is very limited compared to the resource availability. More than half of the resources are used for domestic water supply and the remaining for livestock watering (MEE 2001). Poor sanitation, lack of an effective management of domestic wastes, inadequate protection of dug wells from surface runoff and animal droppings make the groundwater highly vulnerable to anthropogenic pollution.
Twenty groundwater samples were collected from six major wards of Nouna in dry season 2017 (Fig. 1c). Five samples were collected from representative private borewells (B1–B5), five from shallow hand-dug wells with large diameters (W1–W2), whereas 10 samples were collected from the public water supply system (P1–P10; Table 1). In order to obtain high water flow rates, the groundwater supplied by the public water supply system is abstracted from relatively deeper aquifers. The hand-dug well samples were drawn using a sterilized bucket and filtered through Millipore membrane (0.45 µm) into two sets of new high-density polyethylene bottles (HDP), whereas those of borewells and tap water were directly pumped through filter capsules into two sets of HDP. One set of the samples was acidified with ultrapure HNO3− (pH > 3), whereas the other set was left non-acidified. A third set of samples was collected and kept unfiltered and non-acidified in glass bottles for bacteriological counts. Electrical conductivity (EC), pH and total dissolved solids (TDS) were measured in the field using calibrated meters with standard solutions. The samples were put in ice box and taken to laboratory for major cation and anion analysis.
Physicochemical and bacteriological parameters of dug well (W1–W5), borewell (B3–B5) and tap water (P1–P10) samples
Physicochemical parameters
Bacteriological counts
Mg2+
Na+
K+
HCO3−
SO42−
NO3−
Cl−
(µs/C)
(mg/L)
(mg CaCO3/L)
1UFC/100 mL
> 1000
6.5 ≤ pH ≤ 8.5
1. Fecal coliforms
2. Total coliforms
3. Fecal streptococci
4. World Health Organization
In the laboratory, concentrations of Ca2+ and Mg2+ were estimated titrimetrically using 0.05 N EDTA and 0.01 N, whereas those of HCO3− and Cl− by H2SO4 and AgNO3 titration, respectively. Sodium and K+ concentrations were determined by flame photometric method (APHA 1995), and those of SO42− and NO3− by UV–Vis spectrophotometric technique. Total hardness (TH) was determined by EDTA complexometric titration method (WHO 1999). Analytical reagent grades and milli-Q water were used for the analyses. Two borewell samples (B1 and B2) had large charge balance errors (> ± 10%) and were not included in the data interpretation.
Nutrient MacConkey agar was used for total coliform bacterial count and Eosin for total fecal coliform. The petri dishes containing agar and diluted groundwater samples were incubated under appropriate conditions (time and temperature). The bacteriological counts per 100 mL were estimated from the MPN table (APHA9221D).
R-mode factor analysis (Wu et al. 2014) was used to assess the relationships between the physicochemical parameters of the groundwater, using SPSS package (version 20), whereas Visual MINTEQ (version 3.1) was used to calculate saturation indices (SI) of carbonate and evaporite minerals as well as partial CO2 pressure of the groundwater.
Groundwater constituents
The physicochemical data of groundwater highlighted distinct differences between shallow dug well, borewell and tap waters. A strong relationship (R2 = 0.96) between total cations TZ+ and total anions TZ− (Fig. 2a) implied that contribution of non-measured ions to charge balance was not significant. Furthermore, the relationship between EC and TDS (R2 = 0.96; Fig. 2b) suggested that the groundwaters were unlikely to contain substantial amounts of uncharged soluble compounds (e.g., silica, manganese, aluminum and iron) that may contribute to TDS contents (Datta and Tyagi 1996; Prasanna et al. 2011).
a Relationship of total anions (TZ−) to total cations (TZ+) of the groundwater samples; b relationship of electrical conductivity (EC) to total dissolved solids (TDS)
In overall, EC and TDS were low in the groundwaters (Table 1). This suggests the absence of salt in the recharge water and limited groundwater mineralization (Han and Liu 2004; Smedley et al. 2007; Huneau et al. 2011; Jeannin et al. 2016). Because of intense leaching, groundwaters drawn from the weathered mantle aquifer appeared to be less mineralized compared to those from the deep fractured aquifer. As result, ZT+ and TZ− were higher in tap waters (medians = 172 and 336 µeq/L; Table 2) from the deep aquifer than in dug wells from the weathered mantle aquifer. The differences in recharge flow paths could also be an explanation for the observed mineralization trends. Thus, weakly mineralized groundwaters are often associated with rapid recharge (i.e., younger residence time) of the shallow aquifers, whereas highly mineralized groundwaters (i.e., older residence time) have been attributed to paleo-recharge or slow circulation processes in deep aquifers (Fritz 1997; Stober and Bucher 1999; Cook et al. 2005; Bucher and Stober 2010; Armandine Les Lands et al. 2014).
Means medians, standard deviations (SD) and coefficients of variance (CV) of physicochemical parameters of the groundwater samples
Dug wells
Borewells
Public water supply points
% CV
EC (µS/cm)
TDS (mg/L)
TH (mg/L)
Ca2+ (mg/L)
Mg2+ (mg/L)
Na+ (mg/L)
K+ (mg/L)
HCO3− (mg/L)
SO42− (mg/L)
NO3− (mg/L)
Cl− (mg/L)
TZ+ (µeq/L)
TZ− (µeq/L)
The high coefficients of variance (CV > 50%; Table 2) and spatial distribution, illustrated by boxplots (Fig. 3), showed a heterogeneous abundance of most physicochemical parameters in dug wells. This is probably due to the sources and the nature of the recharge, the host rock geology, and the short residence time of the groundwater in the weathered mantle aquifer (Back and Hanshaw 1971). On contrary, groundwater composition of tap waters was remarkably homogeneous (CV < 50%; except Na+) and most variables had similar values for mean and median, reflecting primarily the long flow lines and dispersive mixing that may have smoothed out any temporal fluctuations in the groundwater composition (Mazor et al. 1993; Dhar et al. 2008). Although the majority of the samples had pH values within the World Health Organization (WHO 2006) guideline limit for drinking water (pH = 6.5–8.5), the dug well and borewell waters had lower pH (medians = 6.2 ± 0.2 and 6.4 ± 0.9) relative to those of tap waters (median = 7.8 ± 0.2). The high pH in tap waters relative to dug well waters is consistent with positive correlations between pH and the resident time usually observed in deeper aquifers (Morgenstern and Daughney 2012). Total hardness (TH) in the well waters had distribution patterns similar to those of pH, TDS and EC with TH ranging from 33 to 236 mg CaCO3/L. The dug well waters exhibited the lowest TH (median = 47 ± 24.6 mg CaCO3/L and CV = 44%), whereas the highest concentrations were observed in tap waters (median = 258 ± 18 mg CaCO3/L and CV = 18%).
a Boxplots of naturally affected physicochemical parameters in the Upper Precambrian sedimentary aquifer of the northwestern Burkina Faso. The tops and bottoms of the boxes represent the 75th and 25th percentiles, respectively. The horizontal line across the boxes indicates the median. The vertical lines from the tops and bottoms of the boxes extend to 90th and 10th percentiles, respectively. b Boxplots of anthropogenically affected physicochemical parameters in the Upper Precambrian sedimentary aquifer of the northwestern Burkina Faso
Again, the high TH in tap waters can be attributed to long residence time of groundwater in the deep fractured aquifer, leading to extended chemical weathering of dolomitic limestones (Frape et al. 1984). With hardness values largely exceeding the WHO guideline value for drinking water, the tap waters were categorized as very hard, while those of dug wells as soft to moderately hard. Soft waters, with low alkalinity and buffering capacity, may favor the mobility of potentially toxic heavy metals in the aquifer (De Schamphelaere and Janssen 2004; Kirby and Cravotta 2005). In contrast, hard waters require more soap to produce lather, and thus, it is unsuitable for domestic use (Srinivasa Rao and Jugran 2003). Some evidence has also indicated the role played by hard waters in heart diseases and prenatal mortality (Schroeder 1960; Agarwal and Jagetai 1997). Although such cases have not been reported in the present study area, the desirability of softer drinking water is evident among the local population. As a result, the water provided by the public water supply system should be treated before it gets to the consumers.
Sodium was the dominant cation in dug well waters followed by Ca2+, K+ and Mg2+, whereas cation abundance in borewell and tap waters was in decreasing order of Ca2+> Mg2+> K+>Na+ (Table 1). The low EC, TDS, HCO3− and TH contents observed in dug well and borewell waters suggest short contact times between groundwater and the aquifer minerals. This is consistent with the low K+ (except W5 and B4) concentrations in dug well and borewell waters relative to tap waters (8–11 mg/L). Potassium concentrations in groundwater up to 10 mg/L are attributed to orthoclase or clay weathering, whereas concentrations above 10 mg/L may indicate external sources of K+ abundance (Rail 2000). Bicarbonate, SO42− and NO3− were the dominant anions in the wells with the highest HCO3− and SO42− concentrations observed in tap waters. Although these ion concentrations in the groundwater were within the WHO permissible limits for drinking water, NO3− concentrations in dug well waters exceeded the natural nitrate concentrations (5–7 mg/L; Appelo and Postma 1999). None of dug well and borewell samples complied with the WHO guideline values for coliforms (Table 1), and hence, water from these wells requires treatment before human consumption.
Processes controlling groundwater chemistry
Chemical weathering, cation exchange, evaporation and antropogenical activities are the common hydrogeochemical processes that control groundwater chemistry. In order to shed light on these complex processes, statistical and geochemical techniques were used. Thus, the R-mode factor analysis, after varimax rotation (Kaiser 1960), produced two factors (with eigenvalues > 1) that explain 94% of the total variance (Fig. 4). With 63.4% of the total variance, factor 1 is the most important factor that influences the groundwater chemistry. This factor had high absolute loadings on Ca2+, Mg2+, TH, HCO3−, pH, SO 4 2−, EC and TDS and a moderate loading on K+. As expected, there were strong positive correlations between Mg2+ and Ca2+ (r = 0.96) and between TH and Ca2+ and Mg2+ (r = 097 and 0.99, respectively). The pH was also positively correlated with TH, Ca2+, Mg2+ and HCO3− (Table 3). That is, an increase in Ca2+, Mg2+ and HCO3− concentrations through chemical weathering will increase the groundwater pH. Therefore, it can be suggested that the factor 1 reflects water–rock interaction within the aquifer.
R-mode factor scores for factors 1 and 2 of the groundwater samples. Potassium is projected half way between geogenic and anthropogenic factors
The influence of water–rock interaction on the groundwater chemistry was examined through bivariate mixing plots of Na+-normalized Ca2+ versus Na+-normalized Mg2+ and Na+-normalized HCO3− on log–log scale (Fig. 5; Gaillardet et al. 1999). Gaillardet et al. (1999) used published data of well-characterized lithologies to determine silicate and carbonate end members. According to these authors, the carbonate end member is characterized by Ca/Na, Mg/Na and HCO3/Na ratios of 45 ± 25, 15 ± 10 and 90 ± 40 mg/L, respectively, whereas the chemistry of water draining silicate is characterized by Ca/Na = 0.3 ± 0.15 mg/L, Mg/Na = 0.24 ± 0.12 mg/L and HCO3/Na = 2 ± 1 mg/L. In the present study, the bivariate plots identified silicate weathering and carbonate dissolution as the two hydrogeochemical processes controlling the groundwater chemistry. Dug well and, to a lesser degree, borewell samples plotted closer to the silicate end member, while those of tap water tended toward the carbonate end member (Fig. 5a, b). Because of their proximity to the surface, dug well waters were closer to the evaporite dissolution end member than those of borewells and tap waters (Fig. 5).
Mixing diagrams of Ca/Na versus HCO3/Na and Ca/Na versus Mg/Na for silicate and carbonate minerals in the Upper Precambrian sedimentary aquifer of the northwestern Burkina Faso
Pearson's correlation matrix for selected physicochemical parameters of the groundwater samples (correlation coefficients ≥ 0.60 are in bold)
The extent of water–rock interaction was further assessed through the molar ratios of Mg2+/Ca2+ of the samples. All samples had Mg2+/Ca2+ ratios less than 2 (Table 4), indicating silicate weathering (Weaver et al. 1995). The average molar ratios of (Ca2+ + Mg2+)/TZ+ (0.44 and 0.48) in the borewells also exceeded those of tap waters (Na+ + K+)/TZ+ (0.13 and 0.04). This reflects weathering of dolomitic limestones in the source aquifer. In contrast, the average (Na+ + K+)/TZ+ ratio was slightly higher than that of (Ca2+ + Mg2+)/TZ+ in dug wells. Thus, the behavior of alkali and alkaline earth ions in the dug wells may be controlled by cation exchange between the groundwater and the clay minerals often encountered in the lateritic layers. The chloro-alkaline indices (CAI-1 and CAI-2) were used to study a possible ion exchange between the groundwater and the aquifer materials during the residence time and movement (Schoeller 1965; Marghade et al. 2012). The chloro-alkaline indices (all the ions are expressed in meq/L) were calculated as follows (Eqs. 1, 2):
$$ {\text{CAI}} - 1 = \frac{{{\text{Cl}}^{ - } - \left( {{\text{Na}}^{ + } + {\text{K}}^{ + } } \right)}}{{{\text{Cl}}^{ - } }} $$
$$ {\text{CAI}} - 2 = \frac{{{\text{Cl}}^{ - } - \left( {{\text{Na}}^{ + } + {\text{K}}^{ + } } \right)}}{{{\text{HCO}}_{3}^{ - } + {\text{SO}}_{4}^{2 - } + {\text{CO}}_{3}^{2 - } + {\text{NO}}_{3}^{ - } }} $$
Saturation indices and partial pressures of CO2 of carbonate and evaporite minerals of the groundwater samples (saturated and supersaturated indices are in bold)
Saturation indices
Hydrochemical ratios
Anhydrite
Mg/Ca
SO4/Cl
3.9 × 10−2
− 2.4
− 10.5
− 17
− 0.055
If both CAI-1 and CAI-2 are negative, Ca2+ and Mg2+ have been adsorbed onto the aquifer materials and Na+ or/and K+ are released in the groundwater (i.e., reverse ion exchange). In contrast, if the indices are positive, alkaline earth ions (Ca2+ and Mg2+) have been released in the groundwater and alkalis retained by the aquifer materials (i.e., direct ion exchange; Schoeller 1967). The Schoeller indices of the groundwater samples of the present study were negative (Fig. 6a), suggesting that reverse ion exchange could contribute to Na+ and K+ abundance in the wells. However, the linear plot (Fig. 6b) between Na+ + K+–Cl− and (Ca2+ + Mg2+)–(SO42− + HCO3−) showed a weak relationship (R2 = 0.132) and a slope of 1.214. This is far from the theoretical correlation (R2 > 90%) coefficient and slope of about − 1 (Fisher and Mullican 1997; Wen et al. 2005; Yidana and Yidana 2010). Therefore, it can be assumed that chemical weathering is the single most important hydrogeochemical process that controls distribution of Ca2+, Mg2+, SO42− and HCO3− in the groundwaters. The abundance of these ions in the groundwaters is a function of carbonate mineral distribution in the host aquifer materials.
a CAI-1 versus CAI-2 bivariate diagram and b weak linear relationship between Na + K–Cl and (Ca + Mg)–(SO4 + HCO3) and the groundwater samples
Thus, the relative high Ca2+, Mg2+, SO42− and HCO3− concentrations in tap waters corroborates the availability of carbonate minerals in deeper aquifers as well as longer residence times of the groundwater. As a result, samples from tap water were saturated with carbonate minerals (Table 4). Nevertheless, the calcite saturation indices did not correlate with TDS, Ca2+ and HCO3− suggesting that calcite did not continue to dissolve in the aquifer following its saturation (Fig. 7). In contrast, strong linear relationships existed between Ca2+ (R2 = 0.76), TDS (R2 = 0.61), Mg2+ (R2 = 0.78) and HCO3− (R2 = 0.73) and dolomite saturation indices. Similarly, gypsum saturation indices correlated well with TDS (R2 = 0.62; Fig. 7). This indicates that the groundwaters have the capacity to dissolve dolomite and gypsum, and the bulk of Ca2+, Mg2+ and SO42− concentrations is assumed to be from dissolution of these minerals.
a Relationship of TDS to dolomite saturation indices; b relationship of Ca2+ to dolomite saturation indices; c, d relationships of HCO3− and Mg2+ to dolomite saturation indices; e relationship of TDS to gypsum saturation indices
Only moderate positive correlations were observed between K+, pH, TDS, Ca2+ and Mg2+, which suggested that K+ were only partially influenced by chemical weathering. In addition to orthoclase dissolution, excessive application of KCl as a fertilizer may have contributed to K+ and Cl− loadings in the groundwater (Lee et al. 2005). Because the groundwaters were under-saturated with respect to gypsum, the low SO 4 −2 concentrations, particularly in dug wells (SO4/Cl > 1), indicated a possible sulfate reduction by microorganisms (Lavitt et al. 1977; Datta and Tyagi 1996). Further evidence to the microbial activities is highlighted by high bacterial counts and high partial pressures of CO2 (pCO2) in the dug wells (Tables 1, 4). That is, the calculated pCO2 of the groundwater were greater than that of the atmospheric pCO2 (10−3.4 atm) with the highest values observed in the dug wells. This suggests that infiltrating water into the aquifer via soil tends to have higher dissolved CO2 produced by organic matter decomposition and root respiration (Eq. 3). This biogeochemical process is likely to produce carbonic acid (H2CO3) in the groundwater (Eq. 4), which is responsible for mineral weathering (Eqs. 5, 6; Drever 1988).
$$ {\text{CH}}_{2} {\text{O}}\left( {\text{aq}} \right) + {\text{O}}_{2} \left( {\text{aq}} \right) \to {\mathbf{CO}}_{{\mathbf{2}}} \left( {\mathbf{g}} \right) + {\text{H}}_{2} {\text{O}} $$
$$ {\text{CO}}_{2} \left( {\text{g}} \right) + {\text{H}}_{2} {\text{O}} = {\mathbf{H}}_{{\mathbf{2}}} {\mathbf{CO}}_{{\mathbf{3}}} $$
$$ {\text{H}}_{2} {\text{CO}}_{3} = {\text{H}}^{ + } + {\text{HCO}}_{3} $$
$$ {\text{CaMg}}\left( {{\text{CO}}_{3} } \right)_{2} + 2{\mathbf{H}}_{{\mathbf{2}}} {\mathbf{CO}}_{{\mathbf{3}}} = {\text{Ca}} + {\text{Mg}} + 4{\text{HCO}}_{3} $$
The substantial decline in pCO2 followed by an increase in pH in tap water could be attributed to CO2 outgassing in deep aquifers (Subba et al. 2006). Another source of proton in the groundwater could be sulfide mineral oxidation (Eq. 7; Berner and Berner 1987; Sarin et al. 1989; Singh and Hasnain 2002).
$$ 2{\text{FeS}}_{2} + 7{\text{O}}_{2} + 2{\text{H}}_{2} {\text{O}} = 2{\text{Fe}}^{2 + } + 4{\text{SO}}_{4}^{2 - } + 4{\mathbf{H}}^{ + } $$
Carbonic acid and sulfide mineral oxidation weathering can be distinguished by the HCO3−/(HCO3− + SO42−) ratios (Pandey et al. 2001). The HCO3−/(HCO3− + SO42−) ratio equal to 1 indicates that carbonic acid is the main proton source for chemical weathering, whereas a ratio of 0.5 suggests that both carbonic acid and the proton from pyrite oxidation were responsible for the groundwater ion acquisition. In the present groundwater samples, HCO3−/(HO3− + SO42−) varied from 0.8 to 0.99, suggesting that carbonic acid weathering of carbonate, dolomite and gypsum controlled the abundance of Ca2+, Mg2+, HCO3− and SO42− in the groundwater.
Factor 2 had high loadings on Na+, Cl− and NO3−. Although Na+ may derived from silicate weathering (Meybeck 1987), halite dissolution, a strong positive correlation between Na+ and NO3−, an index of anthropogenic activities (David and Gentry 2000), implied that anthropogenic sources such as untreated sewage effluent had greatly contributed to Na+ loading into the groundwater system (Patterson 1997). According to Patterson (1997), laundry detergent powders provide up to 40% of Na+ in wastewater. The anthropogenic contribution to Na+ loading is further corroborated by its relative high concentrations in dug well waters, directly influenced by surface pollution, compared to tap waters from deeper aquifer. The strong relationship observed between Na+ and Cl− (r = 0.94) could be attributed to halite dissolution as all samples were under-saturated with respect to halite. However, if there were halite deposits within the aquifer sediments, one could expect to find localized saline waters (high TDS) in the groundwater. Instead, dug well waters, with relatively low TDS, exhibited the highest Cl− concentrations. Halite dissolution cannot therefore be the main source of Cl− in the groundwater. Furthermore, Cl concentration in rock-forming minerals (biotite) commonly found in the study area is thought to be very low, and that weathering is unlikely to be the source of Cl− in the groundwater. Atmospheric deposition (dust and rainfall) and decomposition of organic matter may be the primarily source of Cl− abundance in the present groundwater (Freeze and Cherry 1979). The atmospheric origin of Cl− was further supported by the low Cl/TZ− (< 1) of the groundwater, and hence, Cl− would be present as NaCl (Kortatsi et al. 2008). Thus, factor 2 reflects anthropogenic influence on the groundwater quality.
A moderate positive correlation between K+ and Cl− (r = 0.53) and between and Na+ (r = 0.49) implied that both geogenic and anthropogenic sources had contributed to K+ loading in the groundwater. Based on the water–rock interaction types, Piper triplot (Piper 1944; Fig. 8a) classified tap waters and the majority of borewell waters as Ca–HCO3 or Ca–Mg–HCO3 type, consistent with dissolution of dolomitic limestone and silicate (i.e., amphiboles, pyroxenes, olivine and biotite) minerals. The groundwaters from dug wells were characterized by weathering of aluminosilicate minerals and human activities (Ca–Na–K–HCO3). Furthermore, the Schoeller semi-logarithmic diagram (Schoeller 1962; Fig. 8b) discriminated samples with similar distribution patterns. With longer water–rock interaction, tap waters had higher Mg2+ and Ca2+, SO42− and HCO3− concentrations relative to dug well and borewell waters.
a Piper diagram displaying the dominant water types of the groundwaters; b Schöeller diagram showing major ion distribution patterns of the groundwaters. Tap waters are enriched in Mg2+ and Ca2+, while dug well waters tend to have high Na+ and K+ content. Bicarbonate is the dominant anion in the samples
Factor analysis techniques combined with geochemical modeling successfully identified the natural and anthropogenic factors affecting the groundwater quality in the Nouna sedimentary aquifer. Water–rock interaction (chemical weathering) is the major geochemical process that controls the groundwater chemistry followed by anthropogenic activities. Although all the ions had concentrations within the WHO permissible limits, NO3−, Cl− and, to a lesser degree, K+ were mainly derived from anthropogenic sources. The extent of the Cl− and K+ contamination was pronounced in the dug wells. Due to longer residence times and prolonged water–rock interaction in deeper aquifers, waters supplied by the public water supply system were very hard, whereas those of dug wells and borewells were soft to moderately hard. All dug well samples tested positive for coliform, and thus, they were not suitable for human consumption. In addition to urgent need to improve the general sanitation conditions in Nouna, the dug wells require special care so that the pollutants from various sources can be stopped. Future investigation that includes seasonal variations and heavy metal concentrations is planned.
We thank the Director general and laboratory staff of the Office National de l'Eau et de l'Assainissement (ONEA) in Ouagadougou for major cation and anion concentration analyses of the groundwater. We would like to thank Dr. Saga S. Sawadogo for producing Geological and groundwater sampling maps. Comments and suggestions from two anonymous reviewers greatly improved the manuscript.
Agarwal V, Jagetai M (1997) Hydrochemical assessment of groundwater quality in Udaipur city, Rajasthan, India. In: Proceedings of national conference on dimensions of environmental stress in India. Department of geology, MS University, Baroda, India, 1997, pp 151–154Google Scholar
APHA (1995) Standard methods for examination of water and waste water, 19th edn. APHA, Washington DCGoogle Scholar
Appelo CAJ, Postma D (1999) Geochemistry, groundwater and pollution. Balkema, RotterdamGoogle Scholar
Appelo CAJ, Postma D (2005) Geochemistry, groundwater and pollution, 2nd edn. Balkema, RotterdamCrossRefGoogle Scholar
Armandine Les Landes A, Aquilina L, Davy P, Vergnaud V, le Carlier C (2014) Time scales of regional circulation of saline fluids in continental aquifers (Armorican massif, Western France). Hydrol Earth Syst Sci Discuss 11:6599–6635. https://doi.org/10.5194/hess-19-1413-2015 CrossRefGoogle Scholar
Back W, Hanshaw BB (1971) Geochemical interpretations of groundwater flow systems. W. C. Bull 7:1008–1016. https://doi.org/10.1111/j.1752-1688.1971.tb05021.x Google Scholar
Barry B, Obuobie E, Andreini M, Andah W, Pluquet M (2005) The Volta river basin. Comprehensive assessment of water management in agriculture. International Water Management InstituteGoogle Scholar
Berner EK, Berner RA (1987) The global water cycle: geochemistry and environment. Prentice Hall, Englewood CliffsGoogle Scholar
BILAN D'EAU (1993) Carte Hydrogeologique du Burkina FasoGoogle Scholar
Bucher K, Stober I (2010) Fluids in the upper continental crust. Geofluids 10:241–253. https://doi.org/10.1111/j.1468-8123.2010.00279.x Google Scholar
CIEH (Comité Interafricain d'études hydrauliques) (1976) Carte de planification des ressources en eau souterraine: L'Afrique Soudano-SahélienneGoogle Scholar
Collectif (1990) L'hydrogéologie de l'Afrique de l'Ouest. Socle cristallin et cristallophyllien et sédimentaire ancien. Collection Maîtrise de l'Eau. Ministère français du développement et de la coopération, Paris, p 147Google Scholar
Cook P, Love A, Robinson N, Simmons C (2005) Groundwater ages in fractured rock aquifers. J Hydrol 308:284–301. https://doi.org/10.1016/j.jhydrol.2004.11.005 CrossRefGoogle Scholar
Courtois N, Lachassagne P, Wyns R, Blanchin R, Bougaïré FD, Somé S, Tapsoba A (2010) Large-scale mapping of hard-rock aquifer properties applied to Burkina Faso. Groundwater 48:269–283. https://doi.org/10.1111/j.1745-6584.2009.00620.x CrossRefGoogle Scholar
Datta PS, Tyagi SK (1996) Major ion chemistry of groundwater in Delhi area: chemical weathering processes and groundwater flow regime. J Geol Soc India 47:179–188Google Scholar
David MD, Gentry LE (2000) Anthropogenic inputs of nitrogen and phosphorus and riverine export for Illinois, USA. J Environ Qual 29:494–508. https://doi.org/10.2134/jeq2000.00472425002900020018x CrossRefGoogle Scholar
De Schamphelaere KAC, Janssen CR (2004) Effects of dissolved organic matter concentration and source, pH and water hardness on chronic toxicity of copper to Daphnia magna. Environ Toxicol Chem 23:1115–1122CrossRefGoogle Scholar
Derouane J, Dakoure D (2006) Etude hydrogéologique et modélisation mathématique du système aquifère du bassin sédimentaire de Taoudeni au Burkina Faso. Colloque international—Gestion des grands aquifères—30 mai–1er juin, Dijon, FranceGoogle Scholar
Dhar RK, Zheng Y, Stute M, van Geen A, Cheng Z, Shanewaz M, Shamsudduha M, Hoque MA, Rahman MW, Ahmed KM (2008) Temporal variability of groundwater chemistry in shallow and deep aquifers of Araihazar, Bangladesh. J Contam Hydrol 99:97–111. https://doi.org/10.1016/j.jconhyd.2008.03.007 CrossRefGoogle Scholar
Drever JI (1988) The geochemistry of natural waters. Prentice Hall, Englewood CliffsGoogle Scholar
Fetter CW (1994) Applied hydrogeology, 3rd edn. Macmillan College, New York, p 616Google Scholar
Fisher SR, Mullican WF (1997) Hydrogeochemical evaluation of sodium-sulphate and sodium-chloride groundwater beneath the northern Chihuahua desert, Trans-Pecos, Texas, USA. Hydrogeol J 5:4–16. https://doi.org/10.1007/s100400050102 CrossRefGoogle Scholar
Frape SK, Fritz P, McNutt RH (1984) Water-rock interaction and chemistry of groundwater from the Canadian Shield. Geochim Cosmochim Acta 48:1617–1627. https://doi.org/10.1016/0016-7037(84)90331-4 CrossRefGoogle Scholar
Frappart F, Hiernaux P, Guichard F, Mougin E, Kergoat L, Arjounin M, Lavenu F, Koité M, Paturel JE, Lebel T (2009) Rainfall regime across the Sahel band in the Gourma region, Mali. J Hydrol 375:128–142. https://doi.org/10.1016/j.jhydrol.2009.03.007 CrossRefGoogle Scholar
Freeze RA, Cherry JA (1979) Groundwater prentice. Englewood Cliffs, Englewood CliffsGoogle Scholar
Fritz P (1997) Saline groundwater and brines in crystalline rocks: the contributions of John Andrews and Jean-Charles Fontes to the solution of a hydrogeological and geochemical problem. Appl Geochem 12:851–856. https://doi.org/10.1016/S0883-2927(97)00074-7 CrossRefGoogle Scholar
Gaillardet J, Dupré B, Louvat P, Allègre CJ (1999) Global silicate weathering and CO2 consumption rates deduced from the chemistry of large rivers. Chem Geol 159(1–4):3–30. https://doi.org/10.1016/S0009-2541(99)00031-5 CrossRefGoogle Scholar
Gombert P (1998) Synthèse sur la géologie et l'hydrogéologie de la série sédimentaire du sud est du Burkina Faso. Rapport technique, Programme RESO, ATG, IWACO, OuagadougouGoogle Scholar
Groen J, Schuchmann JB, Geirnaert W (1988) The occurrence of high nitrate concentration in groundwater in villages in northwestern Burkina Faso. J Afr Earth Sci 7:999–1009. https://doi.org/10.1016/0899-5362(88)90013-9 CrossRefGoogle Scholar
Han G, Liu C (2004) Water geochemistry controlled by carbonate dissolution: a study of the river waters draining karst-dominated terrain, Guizhou Province, China. Chem Geol 204:1–21. https://doi.org/10.1016/j.chemgeo.2003.09.009 CrossRefGoogle Scholar
Huneau F, Dakouré D, Celle-Jeanton H, Vitvar J, Ito M, Traoré S, Compaoré NF, Jirakova H, Le Coustumer P (2011) Flow pattern and residence time of groundwater within the south-eastern Taoudeni sedimentary basin (Burkina Faso, Mali). J Hydrol 409:423–439. https://doi.org/10.1016/j.jhydrol.2011.08.043 CrossRefGoogle Scholar
Jeannin P-V, Hessenauer M, Malard A, Chapuis V (2016) Impact of global change on karst groundwater mineralization in the Jura Mountains. Sci Total Environ 541:1208–1221. https://doi.org/10.1016/j.scitotenv.2015.10.008 CrossRefGoogle Scholar
Kaiser HF (1960) The application of electronic computers to factor analysis. Educ Psychol Meas 20:141–151CrossRefGoogle Scholar
Kessler JJ, Greerling C (1994) Profil environnemental du Burkina Faso. Université Agronomique, Département de l'Aménagement de la Nature. Wageningen, les Pays Bas, pp 63Google Scholar
Kirby CS, Cravotta CA III (2005) Net alkalinity and net acidity 1: theoretical considerations. Appl Geochem 20:1920–1940CrossRefGoogle Scholar
Kortatsi BK, Tay CK, Anornu G, Hayford E, Dartey GA (2008) Hydrogeochemical evaluation of groundwater in the lower Offin basin, Ghana. Environ Geol 53(8):1651–1662. https://doi.org/10.1007/s00254-007-0772-0 CrossRefGoogle Scholar
Lavitt N, Acworth RI, Jankowski J (1977) Vertical hydrogeochemical zonation in a coastal section of the Botany Sands aquifer, Sydney, Australia. Hydrogeol J 5:4–74. https://doi.org/10.1007/s100400050117 Google Scholar
Lee JY, Choi JC, Yi MJ, Kim JW, Cheon JY, Choi YK, Choi MJ, Lee KK (2005) Potential groundwater contamination with toxic metals in and around an abandoned Zn mine, Korea. Water Air Soil Pollut 165:167–185. https://doi.org/10.1007/s11270-005-4637-4 CrossRefGoogle Scholar
Li P, Zhang Y, Yang N, Jing L, Yu P (2016) Major ion chemistry and quality assessment of groundwater in and around a mountainous tourist town of China. Expo Health 8(2):239–252. https://doi.org/10.1007/s12403-016-0198-6 CrossRefGoogle Scholar
Li P, Tian R, Xue C, Wu J (2017) Progress, opportunities and key fields for groundwater quality research under the impacts of human activities in China with a special focus on western China. Environ Sci Pollut Res 24(15):13224–13234. https://doi.org/10.1007/s11356-017-8753-7 CrossRefGoogle Scholar
Marghade D, Malpe DB, Zade AB (2012) Major ion chemistry of shallow groundwater of a fast growing city of Central India. Environ Monit Assess 184:2405–2418. https://doi.org/10.1007/s10661-011-2126-3 CrossRefGoogle Scholar
Mazor E, Drever JI, Finley J, Huntoon PW, Lundy DA (1993) Hydrochemical implications of groundwater mixing: an example from the southern Laramie basin, Wyoming. Water Res Res 29:193–205. https://doi.org/10.1029/92WR01680 CrossRefGoogle Scholar
MEE (Ministry of Water and the Environment) (2001) Etat des lieux des ressources en eau au Burkina Faso et de leur cadre de gestion. OuagadougouGoogle Scholar
Meybeck M (1987) Global chemical weathering of surficial rocks estimated from river dissolved loads. Am J Sci 287:401–428. https://doi.org/10.2475/ajs.287.5.401 CrossRefGoogle Scholar
Morgenstern U, Daughney CJ (2012) Groundwater age for identification of baseline groundwater quality and impacts of land-use intensification—The National Groundwater Monitoring Programme of New Zealand. J Hydrol 456–457:79–93. https://doi.org/10.1016/j.jhydrol.2012.06.010 CrossRefGoogle Scholar
Ouédraogo C (1998) Cartographie géologique de la region Sud-Ouest du Burkina Faso au 1/200000—synthèse géologique. AQUATER/BUMIGEBGoogle Scholar
Pandey SK, Singh AK, Hasnain SI (2001) Hydrochemical characteristics of meltwater draining from Pindari glacier, Kumon Himalaya. J Geol Soc India 57:519–527Google Scholar
Patterson RA (1997) "Domestic Wastewater and the Sodium Factor", site characterization and design of on-site septic systems. In: ASTM STP 1324, Bedinger MS, Johnson AI, Fleming JS (eds) American Society for Testing and Materials, 1997, pp 23–35Google Scholar
Piper AM (1944) A graphical procedure in the chemical interpretation of groundwater analysis. Trans Am Geo Union 25:914–928. https://doi.org/10.1029/TR025i006p00914 CrossRefGoogle Scholar
Prasanna MV, Chidambaram S, Shahul Hameed A, Srinivasamoorthy K (2011) Hydrogeochemical analysis and evaluation of groundwater quality in the Gadilam river basin, Tamil Nadu, India. J Earth Syst Sci 120:85–98. https://doi.org/10.1007/s12040-011-0004-6 CrossRefGoogle Scholar
Rail CD (2000) Groundwater contamination: sources and hydrology. CRC PressGoogle Scholar
Sarin MM, Krishnaswamy S, Dilli K, Somayajulu BLK, Moore WS (1989) Major ion chemistry of the Ganga-Brahmaputra river system: weathering processes and fluxes to the Bay of Bengal. Geochim Cosmochim Acta 53:997–1009. https://doi.org/10.1016/0016-7037(89)90205-6 CrossRefGoogle Scholar
Schoeller H (1962) Les Eaux Souterraines. Mason et Cie, Paris, p 642Google Scholar
Schoeller H (1965) Qualitative evaluation of groundwater resources. In: Methods and techniques of groundwater investigation and development. Water Resources Series No. 33, UNESCO, pp 44–52Google Scholar
Schoeller H (1967) Qualitative evaluation of ground water resources. In: Schoeller H (ed) Methods and techniques of groundwater investigation and development, Water Resource Series No. 33, UNESCO, Paris, pp 44–52Google Scholar
Schroeder HA (1960) Relations between hardness of water and death rates from certain chronic and degenerative diseases in the United States. J Chronic Diseases 12:586–591. https://doi.org/10.1016/0021-9681(60)90002-3 CrossRefGoogle Scholar
Singh AK, Hasnain SI (2002) Aspects of weathering and solute acquisition processes controlling chemistry of sub-alpine proglacial streams of Garhwal Himalaya, India. Hydrol Process 16:835–849. https://doi.org/10.1002/hyp.367 CrossRefGoogle Scholar
Smedley PL, Knudsen J, Maiga D (2007) Arsenic in groundwater from mineralized Proterozoic basement rocks of Burkina Faso. Appl Geochem 22:1074–1092. https://doi.org/10.1016/j.apgeochem.2007.01.001 CrossRefGoogle Scholar
Srinivasa Rao Y, Jugran DK (2003) Delineation of groundwater potential zones and zones of groundwater quality suitable for domestic purposes using remote sensing and GIS. Hydrol Sci J 48(5):821–833. https://doi.org/10.1623/hysj.48.5.821.51452 CrossRefGoogle Scholar
Stober I, Bucher K (1999) Deep groundwater in the crystalline basement of the Black Forest region. Appl Geochem 14:237–254. https://doi.org/10.1016/S0883-2927(98)00045-6 CrossRefGoogle Scholar
Subba RN, John Devadas D, Srinivasa Rao KV (2006) Interpretation of groundwater quality using principal component analysis from Anantapur district, Andhra Pradesh, India. Environ Geosci 13(4):239–259. https://doi.org/10.1306/eg.02090504043 CrossRefGoogle Scholar
Talbaoui M (2009) Etude des périmètres de protection des sources de Nasso et des forages ONEAI et ONEAII. Rapport de la mission de mars 2009, Programme de valorisation des ressources en eau de l'ouest VREO, p 24Google Scholar
Weaver TR, Frape SK, Cherry JA (1995) Recent cross-formational fluid flow and mixing in the shallow Michigan Basin. Geol Soci Am Bull 107:697–707. https://doi.org/10.1130/0016-7606(1995)107<0697:RCFFFA>2.3.CO;2CrossRefGoogle Scholar
Wen X, Wu Y, Zhang Y, Liu F (2005) Hydro-chemical characteristics and salinity of groundwater in the Ejina Basin, Northwestern China. Environ Geol 48:665–675. https://doi.org/10.1007/s00254-005-0001-7 CrossRefGoogle Scholar
WHO (1999) Determination of hardness of water. Method WHO/M/26, RIGoogle Scholar
WHO (2006) Guidelines for drinking water quality, 3rd edn. World Health Organization, 20 Avenue Appia, 1211 Geneva 27, Switzerland, pp 488–449Google Scholar
Wu J, Li P, Qian H, Duan Z, Zhang X (2014) Using correlation and multivariate statistical analysis to identify hydrogeochemical processes affecting the major ion chemistry of waters: case study in Laoheba phosphorite mine in Sichuan, China. Arab J Geosci 7(10):3973–3982. https://doi.org/10.1007/s12517-013-1057-4 CrossRefGoogle Scholar
Wu J, Wang L, Wang S, Tian R, Xue C, Feng W, Li Y (2017) Spatiotemporal variation of groundwater quality in an arid area experiencing long-term paper wastewater irrigation, northwest China. Environ Earth Sci 76(13):460. https://doi.org/10.1007/s12665-017-6787-2 CrossRefGoogle Scholar
Yameogo S, Savadogo AN (2002) Les Ouvrages de Captage de la ville de Ouagadougou Et Leur Vulnerabilite a la Pollution. In: Maiga AH, Pereira LS, Musy A (eds) Sustainable water resources management: health and productivity in hot climates. 5th inter-regional conference on environment and waterGoogle Scholar
Yidana SM, Yidana A (2010) Assessing ground- water quality using water quality index and multivariate statistical analysis—the Voltaian basin, Ghana. J Environ Earth Sci 59:1461–1473. https://doi.org/10.1007/s12665-009-0132-3 CrossRefGoogle Scholar
1.Université de DédougouDédougouBurkina Faso
2.Département des Sciences de la TerreUniversité Ouaga 1 Pr Joseph Ki-ZerboOuagadougou 09Burkina Faso
Sako, A., Yaro, J.M. & Bamba, O. Appl Water Sci (2018) 8: 88. https://doi.org/10.1007/s13201-018-0735-5
Received 29 January 2018
Accepted 22 May 2018
King Abdulaziz City for Science and Technology
Not logged in Not affiliated 54.237.249.90 | CommonCrawl |
Also see the Blog on Math Blogs
Mail to a friend · Print this article · Previous Columns
Tony Phillips' Take on Math in the Media
A monthly survey of math news
This month's topics:
Riemann hypothesis proved ... NOT
Geometry in the rodent hippocampus
Now that Fermat's Last Theorem and the Poincaré Conjecture have been disposed of, the Riemann Hypothesis stands pre-eminent among unresolved questions from the mathematical past. So it was exciting to read "Nigerian professor claims to have solved 156 year old maths problem" in the Daily Telegraph (Mark Molloy, November 17, 2015).
"Dr Opeyemi Enoch, from the Federal University in the ancient city of Oye Ekiti, believes he has solved one of the seven millennium problems in mathematics. The professor says he was able to find a solution to the Riemann Hypothesis first proposed by German mathematician Bernhard Riemann in 1859, which could earn him a $1m prize, in an interview with the BBC. However [his] solution to the problem has not yet been revealed."
Molloy's story goes back to a November 15 posting on the Vanguard news site: "Nigerian Scholar solves 156-Year-old problem in Maths" (by Rotimi Ojomoyela, datelined Ado-Ekiti). "The 156 years old Riemann Hypothesis, the most important problem in Mathematics has been successfully solved by Nigeria Scholar, Dr Opeyemi Enoch. With this breakthrough, Dr Enoch, who teaches at the Federal University, Oye Ekiti, ... has become the fourth egghead to resolve one of the seven Millennium Problems in Mathematics."
The story was almost immediately debunked locally: a post on the Nairaland Forum dated November 16: "Opeyemi Enoch Has NOT Solved The Riemann Hypothesis: Didnt Get Any $1M." The contributor checked Enoch's preprint. "If, indeed, this is Opeyemi Enoch that uploaded this paper, then he is guilty of very sloppy, very blatant plagiarism. To start with, the paper doesn't even have his name on it! It has the name of Werner Raab, who (I checked) published a paper online in 2013 that is almost line for line identical to the one on Academia.edu. I will admit that I didn't bother going through his paper (there are enough wrong proofs of RH on the Internet to drown in), but a quick overlook of the methods used (along with the fact that no one has heard of this paper in two years) suggests that there is a 0% chance that it could possibly be right."
But the zombie marched on. Next in This Is Africa (November 17) with the headline: "Nigerian academic solves 150 year-old math problem," the sub-head: "A Nigerian professor has reportedly solved a maths problem that has confounded mathematicians for over 150 years, scooping himself US$1 million in the process," and a thoughtful link to The Riemann Hypothesis For Dummies. And that "interview with the BBC" Molloy refers to. It's a "Newsday" podcast (dated November 17) with a question mark in the original written title: "Has a maths problem, which has gone more than 150 years without a solution, finally been solved by Nigerian academic Dr. Opeyemi Enoch?" No such discretion on the part of the BBC interviewer, e.g. "What are you going to do with your million dollars?" If you visit the site now, you will read: "(... The text and headline of this post have been amended from an earlier version, to make clear that the prize has not been awarded and his claims have not been verified.)" For a full post-mortem see the Aperiodic site, November 17 (Part 2, November 19), by Katie Steckles and Christian Lawson-Perfect, and George Dvorsky on Gizmondo: "Sorry, the Riemann Hypothesis Has Almost Certainly Not Been Solved."
"Clique topology reveals intrinsic geometric structure in neural correlations," by Chad Giusti, Eva Pastalkovac, Carina Curtob, and Vladimir Itskov, was published in PNAS, November 3, 2015. The work (online early) was picked up in a Press Trust of India wire feed reprinted in The Financial Express on October 21: "A newly-developed mathematical method can detect geometric structure in neural activity in the brain, scientists say." The PTI feed quotes Itskov (Mathematics Department, Penn State): "Previously, in order to understand this structure, scientists needed to relate neural activity to some specific external stimulus. Our method is the first to be able to reveal this structure without our knowing an external stimulus ahead of time."
Itskov and his team recorded spike trains from about 80 neurons in "the rodent hippocampus" during various behavioral conditions, and wrote the correlation of the output of neuron $i$ and neuron $j$ as the $(i,j)$ entry in a matrix $C$. Ranking these correlations by their strength orders the set of pairs of indices: $(i,j)\prec (k,l)$ if $C_{ij} > C_{kl}$, defining what the authors call the order complex of $C$.
"To our surprise, we found that the ordering of matrix entries encodes geometric features, such as dimension."
Three sample $5\times 5$ symmetric matrices, with an ordering of their off-diagonal entries: lowest order means highest correlation. Interpreting correlation as closeness, the second matrix is incompatible with a 1-dimensional distribution of points (this can easily be checked) and the third is incompatible with a 1- or 2-dimensional distribution (the authors thank the Penn State topologists Dmitri Burago and Anton Petrunin for a "simple proof" of this fact, given in the supplementary material). All four images with this item from Proceedings of the National Academy of Sciences, 112, 13455-13460.
For larger matrices, like the ones in these experiments, "the precise dimension may be difficult to discern in the presence of noise." But a closer analysis of the order complex allows the authors to distinguish between correlation matrices that have an underlying geometrical organization and those that come from random interconnections (as for example "the connectivity pattern observed in the fly olfactory system"). Their method uses persistent homology to analyze the order complex by representing it as "a nested sequence of graphs, where each subsequent graph includes an additional edge $(ij)$ corresponding to the next-largest matrix entry $C_{ij}$." In this complex the authors study the topology of the cliques, the all-to-all connected subgraphs, as more and more edges are added in.
"The order complex of [a $12\times 12$ correlation matrix] $A$ is represented as a sequence of binary adjacency matrices, indexed by the density $\rho$ of nonzero entries. (Bottom) Graphs corresponding to the adjacency matrices." The edges in the graphs for each $\rho$ join $i$ and $j$ if $A_{ij}$ appears in the corresponding matrix, i.e. if $A_{ij}\geq$ the cut-off producing that density. "Minimal examples of a 1-cycle (yellow square), a 2-cycle (red octahedron), and a 3-cycle (blue orthoplex) appear at $\rho$ = 0.1, 0.25, and 0.45, respectively."
Here is how persistent homology is used. A clique with $m+1$ vertices is interpreted as an $m$-simplex: a point ($m=0$), a line segment ($m=1$), a triangle, a tetrahedron, etc.; a collection of simplexes of the same dimension forms a cycle if all their boundaries match two by two. Two cycles are equivalent ("homologous") if together they form the complete boundary of a collection of simplexes of one higher dimension. The mth Betti number is the biggest number of non-homologous m-dimensional cycles. (The figure shows examples of 1, 2 and 3-dimensional cycles).
Since the nested sequence of graphs is parametrized by $0\leq \rho \leq 1$ (the proportion of the squares in the matrix that have been filled), the Betti numbers change as a function of $\rho$: as the cutoff gets lowered, new simplexes appear; they can create new cycles or they can fill in old cycles. This is the phenomenon of persistent homology. Our authors study the Betti curves, the graphs of $\beta_m(\rho)$ for $0\leq \rho \leq 1$. The main mathematical point of the article is that the statistics of their Betti curves can be used to separate random correlation matrices from those based on a geometrical structure.
How Betti curves distinguish between correlation matrices representing random connections, and ones representing an underlying geometric organization. Horizontal axis: $\rho$, vertical axis: number of independent cycles; yellow curve is $\beta_1(\rho)$, red $\beta_2(\rho)$, blue $\beta_3(\rho)$.
Top: statistics for a distribution of $100\times 100$ matrices representing random correlations; solid curves are the mean, shading indicates 99.5% confidence intervals.
Bottom: average Betti curves for a distribution of $100\times 100$ matrices organized following Euclidean geometry of dimensions 10, 50, 100, 1000, 10000 (higher curves for larger dimensions).
Note the difference in the vertical scales.
Back to the hippocampus. "Can clique topology be used to detect geometric organization from pairwise correlations in noisy neural data? To answer this question, we examined correlations of hippocampal place cells in rodents during spatial navigation in a 2D open field environment. ... As expected, the Betti curves from place cell data were in close agreement to those of geometric matrices." The authors repeated the experiment with animals in non-2D settings (wheel running, or REM sleep) and found "the Betti curves were again highly nonrandom, and consistent with geometric organization." "These findings suggest that geometric organization ... is a property of the underlying hippocampal network, and not merely a byproduct of spatially structured inputs."
tony at math.sunysb.edu | CommonCrawl |
\begin{definition}[Definition:Ring (Abstract Algebra)]
A '''ring''' $\struct {R, *, \circ}$ is a semiring in which $\struct {R, *}$ forms an abelian group.
That is, in addition to $\struct {R, *}$ being closed, associative and commutative under $*$, it also has an identity, and each element has an inverse.
\end{definition} | ProofWiki |
Hyperfactorial
In mathematics, and more specifically number theory, the hyperfactorial of a positive integer $n$ is the product of the numbers of the form $x^{x}$ from $1^{1}$ to $n^{n}$.
Definition
The hyperfactorial of a positive integer $n$ is the product of the numbers $1^{1},2^{2},\dots ,n^{n}$. That is,[1][2]
$H(n)=1^{1}\cdot 2^{2}\cdot \cdots n^{n}=\prod _{i=1}^{n}i^{i}=n^{n}H(n-1).$
Following the usual convention for the empty product, the hyperfactorial of 0 is 1. The sequence of hyperfactorials, beginning with $H(0)=1$, is:[1]
1, 1, 4, 108, 27648, 86400000, 4031078400000, 3319766398771200000, ... (sequence A002109 in the OEIS)
Interpolation and approximation
The hyperfactorials were studied beginning in the 19th century by Hermann Kinkelin[3][4] and James Whitbread Lee Glaisher.[5][4] As Kinkelin showed, just as the factorials can be continuously interpolated by the gamma function, the hyperfactorials can be continuously interpolated by the K-function.[3]
Glaisher provided an asymptotic formula for the hyperfactorials, analogous to Stirling's formula for the factorials:
$H(n)=An^{(6n^{2}+6n+1)/12}e^{-n^{2}/4}\left(1+{\frac {1}{720n^{2}}}-{\frac {1433}{7257600n^{4}}}+\cdots \right)\!,$
where $A\approx 1.28243$ is the Glaisher–Kinkelin constant.[2][5]
Other properties
According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when $p$ is an odd prime number
$H(p-1)\equiv (-1)^{(p-1)/2}(p-1)!!{\pmod {p}},$
where $!!$ !!} is the notation for the double factorial.[4]
The hyperfactorials give the sequence of discriminants of Hermite polynomials in their probabilistic formulation.[1]
References
1. Sloane, N. J. A. (ed.), "Sequence A002109 (Hyperfactorials: Product_{k = 1..n} k^k)", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation
2. Alabdulmohsin, Ibrahim M. (2018), Summability Calculus: A Comprehensive Theory of Fractional Finite Sums, Cham: Springer, pp. 5–6, doi:10.1007/978-3-319-74648-7, ISBN 978-3-319-74647-0, MR 3752675, S2CID 119580816
3. Kinkelin, H. (1860), "Ueber eine mit der Gammafunction verwandte Transcendente und deren Anwendung auf die Integralrechung" [On a transcendental variation of the gamma function and its application to the integral calculus], Journal für die reine und angewandte Mathematik (in German), 1860 (57): 122–138, doi:10.1515/crll.1860.57.122, S2CID 120627417
4. Aebi, Christian; Cairns, Grant (2015), "Generalizations of Wilson's theorem for double-, hyper-, sub- and superfactorials", The American Mathematical Monthly, 122 (5): 433–443, doi:10.4169/amer.math.monthly.122.5.433, JSTOR 10.4169/amer.math.monthly.122.5.433, MR 3352802, S2CID 207521192
5. Glaisher, J. W. L. (1877), "On the product 11.22.33... nn", Messenger of Mathematics, 7: 43–47
External links
• Weisstein, Eric W., "Hyperfactorial", MathWorld
| Wikipedia |
MathOverflow Meta
MathOverflow is a question and answer site for professional mathematicians. It only takes a minute to sign up.
Can the Burgess-Hazen analysis of Predicative Arithmetic be extended to Transfinite Types?
Around page 300 of his book "Mathematical Thought and its Objects", Charles Parsons discusses the work of Edward Nelson, who believes that mathematical induction is impredicative, because it can be applied to formulas with quantifiers ranging over natural numbers, even though we conceive of natural numbers as objects belonging to all inductive formulas, including the formula we happen to be applying induction to. Nelson argues that if we reconstruct arithmetic along predicative lines, then we can only accept weak forms of induction that are interpretable in Robinson's Q, like induction on formulas with bounded quantifiers, and on this basis he accepts the totality of addition and multiplication, but not exponentiation.
Parsons agrees with Nelson that there's something impredicative about induction, but he believes that the totality of exponentiation is still predicative. This is based on a paper by Burgess and Hazen, "Predicative Logic and Formal Arithmetic": projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.ndjfl/1039293018
This paper is concerned with predicative second-order logic, which is like regular second-order logic, except we have a ramified theory of types, which breaks the comprehension schema into levels. The comprehension schema for level 0 sets only allows formulas that have no quantification over sets. The schema for level 1 sets allows quantification only over level 0 sets. For any natural number n, the schema for level n+1 allows quantification over sets of level n and below. Burgess and Hansen prove that predicative second-order logic plus the axiom of infinity implies Robinson's Q + induction on formulas with bounded quantifiers + the totality of exponentiation. This is the basis on which Parsons concludes that exponentiation is total from a predicative point of view.
But as Parson points out, there's no particular reason to stop at finite levels. We can define a comprehension schema for level ω sets, for instance, allowing quantifies to range over sets of finite level. And so on, going to bigger and bigger transfinite ordinals. This is analogous to the Feferman-Schutte analysis of predicative second-order arithmetic (except that Feferman and Schutte rely on a different notion of predicativity, known as "predicativity given the natural numbers", which accepts the natural numbers as a completed totality, in contrast to Nelson and Paraons who think of it only as a potential infinity). We allow a comprehension schema for level $\alpha$ sets as long as $\alpha$ is a transfinite ordinal that is "predicatively acceptable" in a well-defined sense using lower-level comprehension schemes. For starters, we can have comprehension for levels up to $\omega^3$, since as discussed above we can establish the totality of exponentiation using finite levels, and exponentiatial function arithmetic has proof-theoretic ordinal $\omega^3$. This process would presumably converge on some ordinal, akin to the Feferman-Schutte ordinal. And it would presumably allow us to establish a larger subsystem of first-order arithmetic than if we just stuck to finite levels as Burgess and Hazen did.
Parsons, who wrote his book in 2008, said that it was still an open problem as to what exactly that larger subsystem was, although he guesses that it won't be bigger than PRA. Has any progress been made on this since 2008, or was Paraons even mistaken about it being unsolved? Has it at least been shown that, say, the totality of superexponentiation is provable in this larger subsystem?
EDIT: @UlrikBuccholtz's answer points to a paper by Leivant which states that "predicative stratification in the polymorphic lambda calculus using levels $<\omega^\ell$ leads to definability of functions in Grzegorczyk's $\mathscr E_{\ell+4}$". I'm not that familiar with the lambda calculus, so can someone confirm that this implies that $EFA$ with predicative second-order logic with comprehension schemes for levels up to $<\omega^3$ proves that all the functions in Grzegorczyk's $\mathscr E_{7}$ are total? If that were true then the proof-theoretic ordinal of this system would be $\omega^7$, and then by similar methods, I think we can go to $\omega^{11}$, $\omega^{15}$, etc, all the way up to $\omega^\omega$, the proof-theoretic ordinal of $PRA$.
EDIT 2: As I discuss in this question, the Feferman-Schutte approach to extending the ramified hierarchy to transfinite levels seems to rely on some form of the omega rule, either the infinitary omega rule or the formalized omega rule. I don't know what the philosophical justification for invoking the omega rule is, but whatever it is, does it depend on the fact that Feferman and Schutte are analyzing "predicativity given the natural numbers", which takes the set of natural numbers as a completed totality, thereby justifying the omega rule somehow. If that's the case, then presumably we wouldn't be justified in using the oeega rule here, since the stricter notion of predicativity (as opposed to predicativity given thr natural numbers) that Parsons and Nelson espouse treats the natural numbers as only a potential infinity, leading to a skepticism of induction itself, let alone the omega rule.
So can anyone confirm that the omega rule is essential to how Feferman and Schutte extend the ramified hierarchy, and if so whether there's any other way to extend it in the context of h Burgess-Hazen analysis?
lo.logic proof-theory foundations ultrafinitism ordinal-analysis
122 silver badges33 bronze badges
Keshav SrinivasanKeshav Srinivasan
I'm not aware of anyone doing the setup exactly as you describe, although it is very likely that it has been done, because it is very similar to Kreisel's proposed method of analyzing finitism in Ordinal logics and the characterization of informal concepts of proof (of course, by many accounts he overestimated the reach of finitism and predicativity given the natural numbers).
However, I would suggest you take a look at Feferman and Strahm (2010), Unfolding of finitist arithmetic, where it is shown that the unfolding (in the sense of Feferman's unfolding program) of finitism is proof-theoretically equivalent to PRA (Primitive Recursive Arithmetic) and hence has proof-theoretic ordinal $\omega^\omega$.
The unfolding is relevant here because it gives a kind of predicative closure given certain base principles. For instance, Feferman and Strahm (2000), The unfolding of non-finitist arithmetic, show that the unfolding of a basic system NFA (of Non-Finitist Arithmetic) is proof-theoretically equivalent to predicative analysis and has proof-theoretic ordinal $\Gamma_0$.
Update: You may also be interested in the work of Leivant, in particular his paper with Danner, Stratified polymorphism and primitive recursion, where it is shown that predicative stratification in the polymorphic lambda calculus using levels $<\omega^\ell$ leads to definability of functions in Grzegorczyk's $\mathscr E_{\ell+4}$. But they don't study an autonomous system.
Ingo Blechschmidt
Ulrik BuchholtzUlrik Buchholtz
$\begingroup$ Are you aware that Feferman, Schutte, and Weyl are concerned with a different notion of predicativity than the one that Nelson and Parsons are dealing with? Feferman et al. are talking about "predicative given the natural numbers", i.e. we treat the set of natural numbers as a completed totality, but then we proceed predicatively after that. Nelson and Parsons are treating the natural numbers as a potential infinity, so they're just concerned with "predicativity", not "predicativity given the natural numbers". $\endgroup$ – Keshav Srinivasan Dec 1 '13 at 22:55
$\begingroup$ First, yes I'm aware that the Feferman-Schütte analysis of predicative concerns predicativity given the natural numbers. The unfolding of NFA is one way to approach that, and Feferman proposed that it should also be able to capture other notions of predicative closure, for instance of basic finitism (and in my dissertation, I study the unfolding of ID$_1$ which can model the predicative closure of one positive arithmetical inductive definition). $\endgroup$ – Ulrik Buchholtz Dec 2 '13 at 3:56
$\begingroup$ I'm also aware that there are various approaches to finitism; e.g., under Kreisel's analysis it comes out to be equivalent with PA! But the system FA of Feferman-Strahm is fairly conservative: the logic is restricted to positive existential quantification over N. Maybe you would prefer a quantifier free presentation. In any case, with your proposal you run into the well-known problem with analyses of (any kind of) finitism that you want to go beyond the finite levels (!). Using unfolding avoids that quandary. $\endgroup$ – Ulrik Buchholtz Dec 2 '13 at 4:02
$\begingroup$ @KeshavSrinivasan: It seems to me that skepticism in induction based on the view that the naturals are not a complete totality is actually ill-founded. That skepticism instead implies that we should not simply accept LEM for unbounded quantification. There are then two possible underlying logics that we may switch to, namely intuitionistic logic or 3-valued logic. In both cases, we can still justify having the rule ( ( A ⊢ B ; B ⊢ A ) ⊢ A∨¬A ) for any Σ1-sentence A and Π1-sentence B, and importantly we can justify the induction rule ( Q(0) ; ( k∈N ⊢ ( Q(k) ⊢ Q(k+1) ) ) ⊢ ∀k∈N ( Q(k) ) ). $\endgroup$ – user21820 May 20 '19 at 4:48
$\begingroup$ In particular, there is no philosophical issue with induction, but rather related principles that depend on LEM for unbounded quantification, such as the well-ordering principle. @UlrikBuchholtz: I'm interested to hear your opinion on my view as well. $\endgroup$ – user21820 May 20 '19 at 4:50
Thanks for contributing an answer to MathOverflow!
Not the answer you're looking for? Browse other questions tagged lo.logic proof-theory foundations ultrafinitism ordinal-analysis or ask your own question.
Has the Ramified Theory of Types been applied to NBG?
Why is adopting Russell's Axiom of Reducibility as strong as eliminating the Ramified Hierarchy?
What is the role of the (formalized) omega rule in Ramified Analysis?
Nelson natural number objects in a topos (say)
Z_2 versus second-order PA
Illustrating Edward Nelson's Worldview with Nonstandard Models of Arithmetic
Does the Feferman-Schutte analysis give a precise characterization of Predicative Second-Order Arithmetic?
Did Gödel prove that the Ramified Theory of Types collapses at $\omega_1$?
What is the proof-theoretic ordinal of Hyperarithmetical Comprehension? | CommonCrawl |
Sat, 27 Apr 2019 18:52:21 GMT
12.E: Vectors in Space (Exercises)
[ "article:topic", "license:ccbyncsa", "showtoc:no", "authorname:openstaxstrang" ]
Book: Calculus (OpenStax)
12: Vectors in Space
Contributed by Gilbert Strang & Edwin "Jed" Herman
Professor (Mathematics) at Massachusetts Institute of Technology (Strang) & University of Wisconsin-Stevens Point (Herman)
Publisher: OpenStax CNX
12.1: Vectors in the Plane
12.2: Vectors in Three Dimensions
12.3: The Dot Product
12.4: The Cross Product
12.5: Equations of Lines and Planes in Space
12.6: Quadric Surfaces
12.7: Cylindrical and Spherical Coordinates
Chapter Review Exercise
For the following exercises, consider points \(P(−1,3), Q(1,5),\) and \(R(−3,7)\). Determine the requested vectors and express each of them a. in component form and b. by using the standard unit vectors.
1) \(\vec{PQ}\)
Solution: \(a. \vec{PQ}=⟨2,2⟩; b. \vec{PQ}=2i+2j\)
2) \(\vec{PR}\)
3) \(\vec{QP}\)
Solution: \(a. \vec{QP}=⟨−2,−2⟩; b. \vec{QP}=−2i−2j\)
4) \(\vec{RP}\)
5) \(\vec{PQ}+\vec{PR}\)
Solution: \(a. \vec{PQ}+\vec{PR}=⟨0,6⟩; b. \vec{PQ}+\vec{PR}=6j\)
6) \(\vec{PQ}−\vec{PR}\)
7) \(2\vec{PQ}−2\vec{PR}\)
Solution: \(a. 2\vec{PQ}→−2\vec{PR}=⟨8,−4⟩; b. 2\vec{PQ}−2\vec{PR}=8i−4j\)
8) \(2\vec{PQ}+\frac{1}{2}\vec{PR}\)
9) The unit vector in the direction of \(\vec{PQ}\)
Solution: \(a. ⟨\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}⟩; b. \frac{1}{\sqrt{2}}i+\frac{1}{\sqrt{2}}j\)
10) The unit vector in the direction of \(\vec{PR}\)
11) A vector \(v\) has initial point \((−1,−3)\) and terminal point \((2,1)\). Find the unit vector in the direction of \(v\). Express the answer in component form.
Solution: \(⟨\frac{3}{5},\frac{4}{5}⟩\)
12) A vector \(v\) has initial point \((−2,5)\) and terminal point \((3,−1)\). Find the unit vector in the direction of \(v\). Express the answer in component form.
13) The vector \(v\) has initial point \(P(1,0)\) and terminal point \(Q\) that is on the y-axis and above the initial point. Find the coordinates of terminal point \(Q\) such that the magnitude of the vector \(v\) is \(\sqrt{5}\).
Solution: \(Q(0,2)\)
14) The vector \(v\) has initial point \(P(1,1)\) and terminal point \(Q\) that is on the x-axis and left of the initial point. Find the coordinates of terminal point \(Q\) such that the magnitude of the vector \(v\) is \(\sqrt{10}\).
For the following exercises, use the given vectors \(a\) and \(b\).
a. Determine the vector sum \(a+b\) and express it in both the component form and by using the standard unit vectors.
b. Find the vector difference \(a−b\) and express it in both the component form and by using the standard unit vectors.
c. Verify that the vectors \(a, b,\) and \(a+b\), and, respectively, \(a, b\), and \(a−b\) satisfy the triangle inequality.
d. Determine the vectors \(2a, −b,\) and \(2a−b.\) Express the vectors in both the component form and by using standard unit vectors.
15) \(a=2i+j, b=i+3j\)
Solution: \(a. a+b=3i+4j, a+b=⟨3,4⟩;\) b. \(a−b=i−2j, a−b=⟨1,−2⟩;\) c. Answers will vary; d. \(2a=4i+2j, 2a=⟨4,2⟩, −b=−i−3j, −b=⟨−1,−3⟩, 2a−b=3i−j, 2a−b=⟨3,−1⟩\)
16) \(a=2i, b=−2i+2j\)
17) Let \(a\) be a standard-position vector with terminal point \((−2,−4)\). Let \(b\) be a vector with initial point \((1,2)\) and terminal point \((−1,4)\). Find the magnitude of vector \(−3a+b−4i+j.\)
Solution: \(15\)
18) Let \(a\) be a standard-position vector with terminal point at \((2,5)\). Let \(b\) be a vector with initial point \((−1,3)\) and terminal point \((1,0)\). Find the magnitude of vector \(a−3b+14i−14j.\)
19) Let \(u\) and \(v\) be two nonzero vectors that are nonequivalent. Consider the vectors \(a=4u+5v\) and \(b=u+2v\) defined in terms of \(u\) and \(v\). Find the scalar \(λ\) such that vectors \(a+λb\) and \(u−v\) are equivalent.
Solution: \(λ=−3\)
20) Let \(u\) and \(v\) be two nonzero vectors that are nonequivalent. Consider the vectors \(a=2u−4v\) and \(b=3u−7v\) defined in terms of \(u\) and \(v\). Find the scalars \(α\) and \(β\) such that vectors \(αa+βb\) and \(u−v\) are equivalent.
21) Consider the vector \(a(t)=⟨cost,sint⟩\) with components that depend on a real number \(t\). As the number \(t\) varies, the components of \(a(t)\) change as well, depending on the functions that define them.
a. Write the vectors \(a(0)\) and \(a(π)\) in component form.
b. Show that the magnitude \(∥a(t)∥\) of vector \(a(t)\) remains constant for any real number \(t\).
c. As \(t\) varies, show that the terminal point of vector \(a(t)\) describes a circle centered at the origin of radius \(1\).
Solution: \(a. a(0)=⟨1,0⟩, a(π)=⟨−1,0⟩;\) b. Answers may vary; c. Answers may vary
22) Consider vector \(a(x)=⟨x,\sqrt{1−x^2}⟩\) with components that depend on a real number \(x∈[−1,1]\). As the number \(x\) varies, the components of \(a(x)\) change as well, depending on the functions that define them.
a. Write the vectors \(a(0)\) and \(a(1)\) in component form.
b. Show that the magnitude \(∥a(x)∥\) of vector \(a(x)\) remains constant for any real number \(x\)
c. As \(x\) varies, show that the terminal point of vector \(a(x)\) describes a circle centered at the origin of radius \(1\).
23) Show that vectors \(a(t)=⟨cost,sint⟩\) and \(a(x)=⟨x,\sqrt{1−x^2}⟩\) are equivalent for \(x=r\) and \(t=2kπ\), where \(k\) is an integer.
Solution: Answers may vary
24) Show that vectors \(a(t)=⟨cost,sint⟩\) and \(a(x)=⟨x,\sqrt{1−x^2}⟩\) are opposite for \(x=r\) and \(t=π+2kπ\), where \(k\) is an integer.
For the following exercises, find vector \(v\) with the given magnitude and in the same direction as vector \(u\).
25) \(‖v‖=7,u=⟨3,4⟩\)
Solution: \(v=⟨\frac{21}{5},\frac{28}{5}⟩\)
26) \(‖v‖=3,u=⟨−2,5⟩\)
27) \(‖v‖=7,u=⟨3,−5⟩\)
Solution: \(v=⟨\frac{21\sqrt{34}}{34},−\frac{35\sqrt{34}}{34}⟩\)
28) \(‖v‖=10,u=⟨2,−1⟩\)
For the following exercises, find the component form of vector \(u\), given its magnitude and the angle the vector makes with the positive x-axis. Give exact answers when possible.
29) \(‖u‖=2, θ=30°\)
Solution: \(u=⟨\sqrt{3},1⟩\)
31) \(‖u‖=5, θ=\frac{π}{2}\)
Solution: \(u=⟨0,5⟩\)
32) \(‖u‖=8, θ=π\)
33) \(‖u‖=10, θ=\frac{5π}{6}\)
Solution: \(u=⟨−5\sqrt{3},5⟩\)
For the following exercises, vector \(u\) is given. Find the angle \(θ∈[0,2π)\) that vector \(u\) makes with the positive direction of the x-axis, in a counter-clockwise direction.
35) \(u=5\sqrt{2}i−5\sqrt{2}j\)
Solution: \(θ=\frac{7π}{4}\)
36) \(u=−\sqrt{3}i−j\)
37) Let \(a=⟨a_1,a_2⟩, b=⟨b_1,b_2⟩\), and \(c=⟨c_1,c_2⟩\) be three nonzero vectors. If \(a_1b_2−a_2b_1≠0\), then show there are two scalars, \(α\) and \(β\), such that \(c=αa+βb.\)
38) Consider vectors \(a=⟨2,−4⟩, b=⟨−1,2⟩,\) and \(0\) Determine the scalars \(α\) and \(β\) such that \(c=αa+βb\).
39) Let \(P(x_0,f(x_0))\) be a fixed point on the graph of the differential function \(f\) with a domain that is the set of real numbers.
a. Determine the real number \(z_0\) such that point \(Q(x_0+1,z_0)\) is situated on the line tangent to the graph of \(f\) at point \(P\).
b. Determine the unit vector \(u\) with initial point \(P\) and terminal point \(Q\).
Solution: \(a. z_0=f(x_0)+f′(x_0); b. u=\frac{1}{\sqrt{1+[f′(x_0)]^2}}⟨1,f′(x_0)⟩\)
40) Consider the function \(f(x)=x^4,\) where \(x∈R\).
a. Determine the real number \(z_0\) such that point \(Q(2,z_0)\) s situated on the line tangent to the graph of \(f\) at point \(P(1,1)\).
41) Consider \(f\) and \(g\) two functions defined on the same set of real numbers \(D\). Let \(a=⟨x,f(x)⟩\) and \(b=⟨x,g(x)⟩\) be two vectors that describe the graphs of the functions, where \(x∈D\). Show that if the graphs of the functions \(f\) and \(g\) do not intersect, then the vectors \(a\) and \(b\) are not equivalent.
42) Find \(x∈R\) such that vectors \(a=⟨x,sinx⟩\) and \(b=⟨x,cosx⟩\) are equivalent.
43) Calculate the coordinates of point \(D\) such that \(ABCD\) is a parallelogram, with \(A(1,1), B(2,4)\), and \(C(7,4)\).
Solution: \(D(6,1)\)
44) Consider the points \(A(2,1), B(10,6), C(13,4), and D(16,−2).\) Determine the component form of vector \(\vec{AD}\).
45) The speed of an object is the magnitude of its related velocity vector. A football thrown by a quarterback has an initial speed of \(70\) mph and an angle of elevation of \(30°\). Determine the velocity vector in mph and express it in component form. (Round to two decimal places.)
Solution: \(⟨60.62,35⟩\)
46) A baseball player throws a baseball at an angle of \(30°\) with the horizontal. If the initial speed of the ball is \(100\) mph, find the horizontal and vertical components of the initial velocity vector of the baseball. (Round to two decimal places.)
47) A bullet is fired with an initial velocity of \(1500\) ft/sec at an angle of \(60°\) with the horizontal. Find the horizontal and vertical components of the velocity vector of the bullet. (Round to two decimal places.)
Solution: The horizontal and vertical components are \(750\) ft/sec and \(1299.04\) ft/sec, respectively.
48) [T] A 65-kg sprinter exerts a force of \(798\) N at a \(19°\) angle with respect to the ground on the starting block at the instant a race begins. Find the horizontal component of the force. (Round to two decimal places.)
49) [T] Two forces, a horizontal force of \(45\) lb and another of \(52\) lb, act on the same object. The angle between these forces is \(25°\). Find the magnitude and direction angle from the positive x-axis of the resultant force that acts on the object. (Round to two decimal places.)
Solution: The magnitude of resultant force is \(94.71\) lb; the direction angle is \(13.42°\).
50) [T] Two forces, a vertical force of \(26\) lb and another of \(45\) lb, act on the same object. The angle between these forces is \(55°\). Find the magnitude and direction angle from the positive x-axis of the resultant force that acts on the object. (Round to two decimal places.)
51) [T] Three forces act on object. Two of the forces have the magnitudes \(58\) N and \(27\) N, and make angles \(53°\) and \(152°\), respectively, with the positive x-axis. Find the magnitude and the direction angle from the positive x-axis of the third force such that the resultant force acting on the object is zero. (Round to two decimal places.)
Solution: The magnitude of the third vector is \(60.03\)N; the direction angle is \(259.38°\).
52) Three forces with magnitudes 80 lb, 120 lb, and 60 lb act on an object at angles of \(45°, 60°\) and \(30°\), respectively, with the positive x-axis. Find the magnitude and direction angle from the positive x-axis of the resultant force. (Round to two decimal places.)
53) [T] An airplane is flying in the direction of \(43°\) east of north (also abbreviated as \(N43E\)) at a speed of \(550\) mph. A wind with speed \(25\) mph comes from the southwest at a bearing of \(N15E\). What are the ground speed and new direction of the airplane?
Solution: The new ground speed of the airplane is \(572.19\) mph; the new direction is \(N41.82E.\)
54) [T] A boat is traveling in the water at \(30\) mph in a direction of \(N20E\) (that is, \(20°\) east of north). A strong current is moving at \(15\) mph in a direction of \(N45E\). What are the new speed and direction of the boat?
55) [T] A 50-lb weight is hung by a cable so that the two portions of the cable make angles of \(40°\) and \(53°\), respectively, with the horizontal. Find the magnitudes of the forces of tension \(T_1\) and \(T_2\) in the cables if the resultant force acting on the object is zero. (Round to two decimal places.)
Solution: \(∥T_1∥=30.13lb, ∥T_2∥=38.35lb\)
56) [T] A 62-lb weight hangs from a rope that makes the angles of \(29°\) and \(61°\), respectively, with the horizontal. Find the magnitudes of the forces of tension \(T_1\) and \(T_2\) in the cables if the resultant force acting on the object is zero. (Round to two decimal places.)
57) [T] A 1500-lb boat is parked on a ramp that makes an angle of \(30°\) with the horizontal. The boat's weight vector points downward and is a sum of two vectors: a horizontal vector \(v_1\) that is parallel to the ramp and a vertical vector \(v_2\) that is perpendicular to the inclined surface. The magnitudes of vectors \(v_1\) and \(v_2\) are the horizontal and vertical component, respectively, of the boat's weight vector. Find the magnitudes of \(v_1\) and \(v_2\). (Round to the nearest integer.)
Solution: \(∥v1∥=750 lb, ∥v2∥=1299 lb\)
58) [T] An 85-lb box is at rest on a \(26°\) incline. Determine the magnitude of the force parallel to the incline necessary to keep the box from sliding. (Round to the nearest integer.)
59) A guy-wire supports a pole that is \(75\) ft high. One end of the wire is attached to the top of the pole and the other end is anchored to the ground \(50\) ft from the base of the pole. Determine the horizontal and vertical components of the force of tension in the wire if its magnitude is \(50\) lb. (Round to the nearest integer.)
Solution: The two horizontal and vertical components of the force of tension are \(28\) lb and \(42\) lb, respectively.
60) A telephone pole guy-wire has an angle of elevation of \(35°\) with respect to the ground. The force of tension in the guy-wire is \(120\) lb. Find the horizontal and vertical components of the force of tension. (Round to the nearest integer.)
Consider a rectangular box with one of the vertices at the origin, as shown in the following figure. If point \(\displaystyle A(2,3,5)\) is the opposite vertex to the origin, then find
a. the coordinates of the other six vertices of the box and
b. the length of the diagonal of the box determined by the vertices \(\displaystyle O\) and \(\displaystyle A\).
Solution: \(\displaystyle a. (2,0,5),(2,0,0),(2,3,0),(0,3,0),(0,3,5),(0,0,5); b. \sqrt{38}\)
2) Find the coordinates of point \(\displaystyle P\) and determine its distance to the origin.
For the following exercises, describe and graph the set of points that satisfies the given equation.
3) \(\displaystyle (y−5)(z−6)=0\)
Solution: A union of two planes: \(\displaystyle y=5\) (a plane parallel to the xz-plane) and \(\displaystyle z=6\) (a plane parallel to the xy-plane)
4) \(\displaystyle (z−2)(z−5)=0\)
5) \(\displaystyle (y−1)^2+(z−1)^2=1\)
Solution: A cylinder of radius \(\displaystyle 1\) centered on the line \(\displaystyle y=1,z=1\)
6) \(\displaystyle (x−2)^2+(z−5)^2=4\)
7) Write the equation of the plane passing through point \(\displaystyle (1,1,1)\) that is parallel to the xy-plane.
Solution: \(\displaystyle z=1\)
8) Write the equation of the plane passing through point \(\displaystyle (1,−3,2)\) that is parallel to the xz-plane.
9) Find an equation of the plane passing through points \(\displaystyle 1,−3,−2), (0,3,−2),\) and \(\displaystyle (1,0,−2).\)
Solution: \(\displaystyle z=−2\)
10) Find an equation of the plane passing through points \(\displaystyle (1,9,2), (1,3,6),\) and \(\displaystyle (1,−7,8).\)
For the following exercises, find the equation of the sphere in standard form that satisfies the given conditions.
11) Center \(\displaystyle C(−1,7,4)\) and radius \(\displaystyle 4\)
Solution: \(\displaystyle (x+1)^2+(y−7)^2+(z−4)^2=16\)
13) Diameter \(\displaystyle PQ,\) where \(\displaystyle P(−1,5,7)\) and \(\displaystyle Q(−5,2,9)\)
Solution: \(\displaystyle (x+3)^2+(y−3.5)^2+(z−8)^2=\frac{29}{4}\)
14) Diameter \(\displaystyle PQ,\) where \(\displaystyle P(−16,−3,9)\) and \(\displaystyle Q(−2,3,5)\)
For the following exercises, find the center and radius of the sphere with an equation in general form that is given.
15) \(\displaystyle P(1,2,3) x^2+y^2+z^2−4z+3=0\)
Solution: Center \(\displaystyle C(0,0,2)\) and radius \(\displaystyle 1\)
16) \(\displaystyle x^2+y^2+z^2−6x+8y−10z+25=0\)
For the following exercises, express vector \(\displaystyle \vec{PQ}\) with the initial point at \(\displaystyle P\) and the terminal point at \(\displaystyle Q\)
a. in component form and
b. by using standard unit vectors.
17) \(\displaystyle P(3,0,2)\) and \(\displaystyle Q(−1,−1,4)\)
Solution: \(\displaystyle a. \vec{PQ}=⟨−4,−1,2⟩; b. \vec{PQ}=−4i−j+2k\)
18) \(\displaystyle P(0,10,5)\) and \(\displaystyle Q(1,1,−3)\)
19) \(\displaystyle P(−2,5,−8)\) and \(\displaystyle M(1,−7,4)\), where \(\displaystyle M\) is the midpoint of the line segment \(\displaystyle PQ\)
Solution: \(\displaystyle a. \vec{PQ}=⟨6,−24,24⟩; . \vec{PQ}=6i−24j+24k\)
20) \(\displaystyle Q(0,7,−6)\) and \(\displaystyle M(−1,3,2)\), where \(\displaystyle M\) is the midpoint of the line segment \(\displaystyle PQ\)
21) Find terminal point \(\displaystyle Q\) of vector \(\displaystyle \vec{PQ}=⟨7,−1,3⟩\) with the initial point at \(\displaystyle P(−2,3,5).\)
Solution: \(\displaystyle Q(5,2,8)\)
22) Find initial point \(\displaystyle P\) of vector \(\displaystyle \vec{PQ}=⟨−9,1,2⟩\) with the terminal point at \(\displaystyle Q(10,0,−1).\)
For the following exercises, use the given vectors \(\displaystyle a\) and \(\displaystyle b\) to find and express the vectors \(\displaystyle a+b, 4a\), and \(\displaystyle −5a+3b\) in component form.
23) \(\displaystyle a=⟨−1,−2,4⟩, b=⟨−5,6,−7⟩\)
Solution: \(\displaystyle a+b=⟨−6,4,−3⟩, 4a=⟨−4,−8,16⟩, −5a+3b=⟨−10,28,−41⟩\)
24) \(\displaystyle a=⟨3,−2,4⟩, b=⟨−5,6,−9⟩\)
25) \(\displaystyle a=−k, b=−i\)
Solution: \(\displaystyle a+b=⟨−1,0,−1⟩, 4a=⟨0,0,−4⟩, −5a+3b=⟨−3,0,5⟩\)
26) \(\displaystyle a=i+j+k, b=2i−3j+2k\)
For the following exercises, vectors \(\displaystyle u\) and \(\displaystyle v\) are given. Find the magnitudes of vectors \(\displaystyle u−v\) and \(\displaystyle −2u\).
27) \(\displaystyle u=2i+3j+4k, v=−i+5j−k\)
Solution: \(\displaystyle ‖u−v‖=\sqrt{38}, ‖−2u‖=2\sqrt{29}\)
28) \(\displaystyle u=i+j, v=j−k\)
29) \(\displaystyle u=⟨2cost,−2sint,3⟩, v=⟨0,0,3⟩,\) where \(\displaystyle t\) is a real number.
Solution: \(\displaystyle ‖u−v‖=2, ‖−2u‖=2\sqrt{13}\)
30) \(\displaystyle u=⟨0,1,sinht⟩, v=⟨1,1,0⟩,\) where \(\displaystyle t\) is a real number.
For the following exercises, find the unit vector in the direction of the given vector \(\displaystyle a\) and express it using standard unit vectors.
31) \(\displaystyle a=3i−4j\)
Solution: \(\displaystyle a=\frac{3}{5}i−\frac{4}{5}j\)
32) \(\displaystyle a=⟨4,−3,6⟩\)
33) \(\displaystyle a=\vec{PQ}\), where \(\displaystyle P(−2,3,1)\) and \(\displaystyle Q(0,−4,4)\)
Solution: \(\displaystyle ⟨\frac{2}{\sqrt{62}},−\frac{7}{\sqrt{62}},\frac{3}{\sqrt{62}}⟩\)
34) \(\displaystyle a=\vec{OP},\) where \(\displaystyle P(−1,−1,1)\)
35) \(\displaystyle a=u−v+w,\) where \(\displaystyle u=i−j−k, v=2i−j+k,\) and \(\displaystyle w=−i+j+3k\)
Solution: \(\displaystyle ⟨−\frac{2}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}}⟩\)
36) \(\displaystyle a=2u+v−w,\) where \(\displaystyle u=i−k, v=2j\), and \(\displaystyle w=i−j\)
37) Determine whether \(\displaystyle \vec{AB}\) and \(\displaystyle \vec{PQ}\) are equivalent vectors, where \(\displaystyle A(1,1,1),B(3,3,3),P(1,4,5),\) and \(\displaystyle Q(3,6,7).\)
Solution: Equivalent vectors
38) Determine whether the vectors \(\displaystyle \vec{AB}\) and \(\displaystyle \vec{PQ}\) are equivalent, where \(\displaystyle A(1,4,1), B(−2,2,0), P(2,5,7),\) and \(\displaystyle Q(−3,2,1)\).
For the following exercises, find vector \(\displaystyle u\) with a magnitude that is given and satisfies the given conditions.
39) \(\displaystyle v=⟨7,−1,3⟩, ‖u‖=10,u\) and \(\displaystyle v\) have the same direction
Solution: \(\displaystyle u=⟨\frac{70}{\sqrt{59}},−\frac{10}{\sqrt{59}},\frac{30}{\sqrt{59}}⟩\)
40) \(\displaystyle v=⟨2,4,1⟩, ‖u‖=15,u\) and \(\displaystyle v\) have the same direction
41) \(\displaystyle v=⟨2sint,2cost,1⟩, ‖u‖=2,u\) and \(\displaystyle v\) have opposite directions for any \(\displaystyle t\), where \(\displaystyle t\) is a real number
Solution: \(\displaystyle u=⟨−\frac{4}{\sqrt{5}}sint,−\frac{4}{\sqrt{5}}cost,−\frac{2}{\sqrt{5}}⟩\)
42) \(\displaystyle v=⟨3sinht,0,3⟩, ‖u‖=5,u\) and \(\displaystyle v\) have opposite directions for any \(\displaystyle t\), where \(\displaystyle t\) is a real number
43) Determine a vector of magnitude \(\displaystyle 5\) in the direction of vector \(\displaystyle \vec{AB}\), where \(\displaystyle A(2,1,5)\) and \(\displaystyle B(3,4,−7).\)
Solution: \(\displaystyle ⟨\frac{5}{\sqrt{154}},\frac{15}{\sqrt{154}},−\frac{60}{\sqrt{154}}⟩\)
44) Find a vector of magnitude \(\displaystyle 2\) that points in the opposite direction than vector \(\displaystyle \vec{AB}\), where \(\displaystyle A(−1,−1,1)\) and \(\displaystyle B(0,1,1).\) Express the answer in component form.
45) Consider the points \(\displaystyle A(2,α,0),B(0,1,β),\) and \(\displaystyle C(1,1,β)\), where \(\displaystyle α\) and \(\displaystyle β\) are negative real numbers. Find \(\displaystyle α\) and \(\displaystyle β\) such that \(\displaystyle ∥\vec{OA}−\vec{OB}+\vec{OC}∥=∥\vec{OB}∥=4.\)
Solution: \(\displaystyle α=−\sqrt{7}, β=−\sqrt{15}\)
46) Consider points \(\displaystyle A(α,0,0),B(0,β,0),\) and \(\displaystyle C(α,β,β),\) where \(\displaystyle α\) and \(\displaystyle β\) are positive real numbers. Find \(\displaystyle α\) and \(\displaystyle β\) such that \(\displaystyle ∥\bar{OA}+\bar{OB}∥=\sqrt{2}\) and \(\displaystyle ∥\bar{OC}∥=\sqrt{3}\).
47) Let \(\displaystyle P(x,y,z)\) be a point situated at an equal distance from points \(\displaystyle A(1,−1,0)\) and \(\displaystyle B(−1,2,1)\). Show that point \(\displaystyle P\) lies on the plane of equation \(\displaystyle −2x+3y+z=2.\)
48) Let \(\displaystyle P(x,y,z)\) be a point situated at an equal distance from the origin and point \(\displaystyle A(4,1,2)\). Show that the coordinates of point P satisfy the equation \(\displaystyle 8x+2y+4z=21.\)
49) The points \(\displaystyle A,B,\) and \(\displaystyle C\) are collinear (in this order) if the relation \(\displaystyle ∥\vec{AB}∥+∥\vec{BC}∥=∥\vec{AC}∥\) is satisfied. Show that \(\displaystyle A(5,3,−1),B(−5,−3,1),\) and \(\displaystyle C(−15,−9,3)\) are collinear points.
50) Show that points \(\displaystyle A(1,0,1), B(0,1,1),\) and \(\displaystyle C(1,1,1)\) are not collinear.
51) [T] A force \(\displaystyle F\) of \(\displaystyle 50N\) acts on a particle in the direction of the vector \(\displaystyle \vec{OP}\), where \(\displaystyle P(3,4,0).\)
a. Express the force as a vector in component form.
b. Find the angle between force \(\displaystyle F\) and the positive direction of the x-axis. Express the answer in degrees rounded to the nearest integer.
Solution: \(\displaystyle a. F=⟨30,40,0⟩; b. 53°\)
52) [T] A force \(\displaystyle F\) of \(\displaystyle 40N\) acts on a box in the direction of the vector \(\displaystyle \vec{OP}\), where \(\displaystyle P(1,0,2).\)
a. Express the force as a vector by using standard unit vectors.
b. Find the angle between force \(\displaystyle F\) and the positive direction of the x-axis.
53) If \(\displaystyle F\) is a force that moves an object from point \(\displaystyle P_1(x_1,y_1,z_1)\) to another point \(\displaystyle P_2(x_2,y_2,z_2)\), then the displacement vector is defined as \(\displaystyle D=(x_2−x_1)i+(y_2−y_1)j+(z_2−z_1)k\). A metal container is lifted \(\displaystyle 10\) m vertically by a constant force \(\displaystyle F\). Express the displacement vector \(\displaystyle D\) by using standard unit vectors.
Solution: \(\displaystyle D=10k\)
54) A box is pulled \(\displaystyle 4\) yd horizontally in the x-direction by a constant force \(\displaystyle F\). Find the displacement vector in component form.
55) The sum of the forces acting on an object is called the resultant or net force. An object is said to be in static equilibrium if the resultant force of the forces that act on it is zero. Let \(\displaystyle F_1=⟨10,6,3⟩, F_2=⟨0,4,9⟩\), and \(\displaystyle F_3=⟨10,−3,−9⟩\) be three forces acting on a box. Find the force \(\displaystyle F_4\) acting on the box such that the box is in static equilibrium. Express the answer in component form.
Solution: \(\displaystyle F_4=⟨−20,−7,−3⟩\)
56) [T] Let \(\displaystyle F_k=⟨1,k,k^2⟩, k=1,...,n\) be n forces acting on a particle, with \(\displaystyle n≥2.\)
a. Find the net force \(\displaystyle F=\sum_{k=1}^nF_k.\) Express the answer using standard unit vectors.
b. Use a computer algebra system (CAS) to find n such that \(\displaystyle ∥F∥<100.\)
57) The force of gravity \(\displaystyle F\) acting on an object is given by \(\displaystyle F=mg\), where m is the mass of the object (expressed in kilograms) and \(\displaystyle g\) is acceleration resulting from gravity, with \(\displaystyle ∥g∥=9.8 N/kg.\) A 2-kg disco ball hangs by a chain from the ceiling of a room.
a. Find the force of gravity \(\displaystyle F\) acting on the disco ball and find its magnitude.
b. Find the force of tension \(\displaystyle T\) in the chain and its magnitude.
Express the answers using standard unit vectors.
Figure 18: (credit: modification of work by Kenneth Lu, Flickr)
Solution: \(\displaystyle a. F=−19.6k, ∥F∥=19.6 N; b. T=19.6k, ‖T‖=19.6 N\)
58) A 5-kg pendant chandelier is designed such that the alabaster bowl is held by four chains of equal length, as shown in the following figure.
a. Find the magnitude of the force of gravity acting on the chandelier.
b. Find the magnitudes of the forces of tension for each of the four chains (assume chains are essentially vertical).
59) [T] A 30-kg block of cement is suspended by three cables of equal length that are anchored at points \(\displaystyle P(−2,0,0), Q(1,\sqrt{3},0),\) and \(\displaystyle R(1,−\sqrt{3},0)\). The load is located at \(\displaystyle S(0,0,−2\sqrt{3})\), as shown in the following figure. Let \(\displaystyle F_1, F_2\), and \(\displaystyle F_3\) be the forces of tension resulting from the load in cables \(\displaystyle RS,QS,\) and \(\displaystyle PS,\) respectively.
a. Find the gravitational force \(\displaystyle F\) acting on the block of cement that counterbalances the sum \(\displaystyle F_1+F_2+F_3\) of the forces of tension in the cables.
b. Find forces \(\displaystyle F_1, F_2,\) and \(\displaystyle F_3\). Express the answer in component form.
Solution: a. \(\displaystyle F=−294k\) N; b. \(\displaystyle F_1=⟨−\frac{49\sqrt{3}}{3},49,−98⟩, F_2=⟨−\frac{49\sqrt{3}}{3},−49,−98⟩\), and \(\displaystyle F_3=⟨\frac{98}{\sqrt{3}}{3},0,−98⟩\) (each component is expressed in newtons)
60) Two soccer players are practicing for an upcoming game. One of them runs 10 m from point A to point B. She then turns left at \(\displaystyle 90°\) and runs 10 m until she reaches point C. Then she kicks the ball with a speed of 10 m/sec at an upward angle of \(\displaystyle 45°\) to her teammate, who is located at point A. Write the velocity of the ball in component form.
61) Let \(\displaystyle r(t)=⟨x(t),y(t),z(t)⟩\) be the position vector of a particle at the time \(\displaystyle t∈[0,T]\), where \(\displaystyle x,y,\) and \(\displaystyle z\) are smooth functions on \(\displaystyle [0,T]\). The instantaneous velocity of the particle at time \(\displaystyle t\) is defined by vector \(\displaystyle v(t)=⟨x'(t),y'(t),z'(t)⟩\), with components that are the derivatives with respect to \(\displaystyle t\), of the functions \(\displaystyle x, y\), and \(\displaystyle z\), respectively. The magnitude \(\displaystyle ∥v(t)∥\) of the instantaneous velocity vector is called the speed of the particle at time \(\displaystyle t\). Vector \(\displaystyle a(t)=⟨x''(t),y''(t),z''(t)⟩\), with components that are the second derivatives with respect to \(\displaystyle t\), of the functions \(\displaystyle x,y,\) and \(\displaystyle z\), respectively, gives the acceleration of the particle at time \(\displaystyle t\). Consider \(\displaystyle r(t)=⟨cost,sint,2t⟩\) the position vector of a particle at time \(\displaystyle t∈[0,30],\) where the components of \(\displaystyle r\) are expressed in centimeters and time is expressed in seconds.
a. Find the instantaneous velocity, speed, and acceleration of the particle after the first second. Round your answer to two decimal places.
b. Use a CAS to visualize the path of the particle—that is, the set of all points of coordinates \(\displaystyle (cost,sint,2t),\) where \(\displaystyle t∈[0,30].\)
\(\displaystyle a. v(1)=⟨−0.84,0.54,2⟩\) (each component is expressed in centimeters per second); \(\displaystyle ∥v(1)∥=2.24\) (expressed in centimeters per second); \(\displaystyle a(1)=⟨−0.54,−0.84,0⟩\) (each component expressed in centimeters per second squared);
62) [T] Let \(\displaystyle r(t)=⟨t,2t^2,4t^2⟩\) be the position vector of a particle at time \(\displaystyle t\) (in seconds), where \(\displaystyle t∈[0,10]\) (here the components of \(\displaystyle r\) are expressed in centimeters).
a. Find the instantaneous velocity, speed, and acceleration of the particle after the first two seconds. Round your answer to two decimal places.
b. Use a CAS to visualize the path of the particle defined by the points \(\displaystyle (t,2t^2,4t^2),\) where \(\displaystyle t∈[0,60].\)
For the following exercises, the vectors \(\displaystyle u\) and \(\displaystyle v\) are given. Calculate the dot product \(\displaystyle u⋅v\).
1) \(\displaystyle u=⟨3,0⟩, v=⟨2,2⟩\)
Solution: 6
2) \(\displaystyle u=⟨3,−4⟩, v=⟨4,3⟩\)
3) \(\displaystyle u=⟨2,2,−1⟩, v=⟨−1,2,2⟩\)
4) \(\displaystyle u=⟨4,5,−6⟩, v=⟨0,−2,−3⟩\)
For the following exercises, the vectors \(\displaystyle a, b\), and \(\displaystyle c\) are given. Determine the vectors \(\displaystyle (a⋅b)c\) and \(\displaystyle (a⋅c)b.\) Express the vectors in component form.
5) \(\displaystyle a=⟨2,0,−3⟩, b=⟨−4,−7,1⟩, c=⟨1,1,−1⟩\)
Solution: \(\displaystyle (a⋅b)c=⟨−11,−11,11⟩; (a⋅c)b=⟨−20,−35,5⟩\)
6) \(\displaystyle a=⟨0,1,2⟩, b=⟨−1,0,1⟩, c=⟨1,0,−1⟩\)
7) \(\displaystyle a=i+j, b=i−k, c=i−2k\)
Solution: \(\displaystyle (a⋅b)c=⟨1,0,−2⟩; (a⋅c)b=⟨1,0,−1⟩\)
8) \(\displaystyle a=i−j+k, b=j+3k, c=−i+2j−4k\)
For the following exercises, the two-dimensional vectors \(\displaystyle a\) and \(\displaystyle b\) are given.
a. Find the measure of the angle \(\displaystyle θ\) between a and b. Express the answer in radians rounded to two decimal places, if it is not possible to express it exactly.
b. Is \(\displaystyle θ\) an acute angle?
9) [T] \(\displaystyle a=⟨3,−1⟩, b=⟨−4,0⟩\)
Solution: \(\displaystyle a. θ=2.82\)rad; \(\displaystyle b. θ\) is not acute.
10) [T] \(\displaystyle a=⟨2,1⟩, b=⟨−1,3⟩\)
11) \(\displaystyle u=3i, v=4i+4j\)
Solution: \(\displaystyle a. θ=\frac{π}{4}\)rad; \(\displaystyle b. θ\) is acute.
12) \(\displaystyle u=5i, v=−6i+6j\)
For the following exercises, find the measure of the angle between the three-dimensional vectors \(\displaystyle a\) and \(\displaystyle b\). Express the answer in radians rounded to two decimal places, if it is not possible to express it exactly.
13) \(\displaystyle a=⟨3,−1,2⟩, b=⟨1,−1,−2⟩\)
Solution: \(\displaystyle θ=\frac{π}{2}\)
14) \(\displaystyle a=⟨0,−1,−3⟩, b=⟨2,3,−1⟩\)
15) \(\displaystyle a=i+j, b=j−k\)
16) \(\displaystyle a=i−2j+k, b=i+j−2k\)
17) [T] \(\displaystyle a=3i−j−2k, b=v+w,\) where \(\displaystyle v=−2i−3j+2k\) and \(\displaystyle w=i+2k\)
Solution: \(\displaystyle θ=2\)rad
18) [T] \(\displaystyle a=3i−j+2k, b=v−w,\) where \(\displaystyle v=2i+j+4k\) and \(\displaystyle w=6i+j+2k\)
For the following exercises determine whether the given vectors are orthogonal.
19) \(\displaystyle a=⟨x,y⟩, b=⟨−y,x⟩,\) where x and y are nonzero real numbers
Solution: Orthogonal
20) \(\displaystyle a=⟨x,x⟩, b=⟨−y,y⟩,\) where x and y are nonzero real numbers
21) \(\displaystyle a=3i−j−2k, b=−2i−3j+k\)
Solution: Not orthogonal
22) \(\displaystyle a=i−j, b=7i+2j−k\)
23) Find all two-dimensional vectors a orthogonal to vector \(\displaystyle b=⟨3,4⟩.\) Express the answer in component form.
Solution: \(\displaystyle a=⟨−\frac{4α}{3},α⟩,\) where \(\displaystyle α≠0\) is a real number
24) Find all two-dimensional vectors \(\displaystyle a\) orthogonal to vector \(\displaystyle b=⟨5,−6⟩.\) Express the answer by using standard unit vectors.
25) Determine all three-dimensional vectors \(\displaystyle u\) orthogonal to vector \(\displaystyle v=⟨1,1,0⟩.\) Express the answer by using standard unit vectors.
Solution: \(\displaystyle u=−αi+αj+βk,\) where \(\displaystyle α\) and \(\displaystyle β\) are real numbers such that \(\displaystyle α^2+β^2≠0\)
26) Determine all three-dimensional vectors \(\displaystyle u\) orthogonal to vector \(\displaystyle v=i−j−k.\) Express the answer in component form.
27) Determine the real number \(\displaystyle α\) such that vectors \(\displaystyle a=2i+3j\) and \(\displaystyle b=9i+αj\) are orthogonal.
Solution: \(\displaystyle α=−6\)
28) Determine the real number \(\displaystyle α\) such that vectors \(\displaystyle a=−3i+2j\) and \(\displaystyle b=2i+αj\) are orthogonal.
29) [T] Consider the points \(\displaystyle P(4,5)\) and \(\displaystyle Q(5,−7)\).
a. Determine vectors \(\displaystyle \vec{OP}\) and \(\displaystyle \vec{OQ}\). Express the answer by using standard unit vectors.
b. Determine the measure of angle O in triangle OPQ. Express the answer in degrees rounded to two decimal places.
Solution: \(\displaystyle a. \vec{OP}→=4i+5j, \vec{OQ}=5i−7j; b. 105.8°\)
30) [T] Consider points \(\displaystyle A(1,1), B(2,−7),\) and \(\displaystyle C(6,3)\).
a. Determine vectors \(\displaystyle \vec{BA}\) and \(\displaystyle \vec{BC}\). Express the answer in component form.
b. Determine the measure of angle B in triangle ABC. Express the answer in degrees rounded to two decimal places.
31) Determine the measure of angle A in triangle ABC, where \(\displaystyle A(1,1,8), B(4,−3,−4),\) and \(\displaystyle C(−3,1,5).\) Express your answer in degrees rounded to two decimal places.
Solution: \(\displaystyle 68.33°\)
32) Consider points \(\displaystyle P(3,7,−2)\) and \(\displaystyle Q(1,1,−3).\) Determine the angle between vectors \(\displaystyle \vec{OP}\) and \(\displaystyle \vec{OQ}\). Express the answer in degrees rounded to two decimal places.
For the following exercises, determine which (if any) pairs of the following vectors are orthogonal.
33) \(\displaystyle u=⟨3,7,−2⟩, v=⟨5,−3,−3⟩, w=⟨0,1,−1⟩\)
Solution: \(\displaystyle u\) and \(\displaystyle v\) are orthogonal; \(\displaystyle v\) and \(\displaystyle w\) are orthogonal.
34) \(\displaystyle u=i−k, v=5j−5k, w=10j\)
35) Use vectors to show that a parallelogram with equal diagonals is a square.
36) Use vectors to show that the diagonals of a rhombus are perpendicular.
37) Show that \(\displaystyle u⋅(v+w)=u⋅v+u⋅w\) is true for any vectors \(\displaystyle u, v\), and \(\displaystyle w\).
38) Verify the identity \(\displaystyle u⋅(v+w)=u⋅v+u⋅w\) for vectors \(\displaystyle u=⟨1,0,4⟩, v=⟨−2,3,5⟩,\) and \(\displaystyle w=⟨4,−2,6⟩.\)
For the following problems, the vector \(\displaystyle u\) is given.
a. Find the direction cosines for the vector u.
b. Find the direction angles for the vector u expressed in degrees. (Round the answer to the nearest integer.)
39) \(\displaystyle u=⟨2,2,1⟩\)
Solution: \(\displaystyle a. cosα=\frac{2}{3},cosβ=\frac{2}{3},\) and \(\displaystyle cosγ=\frac{1}{3}; b. α=48°, β=48°,\) and \(\displaystyle γ=71°\)
40) \(\displaystyle u=i−2j+2k\)
41) \(\displaystyle u=⟨−1,5,2⟩\)
Solution: \(\displaystyle a. cosα=−\frac{1}{\sqrt{30}},cosβ=\frac{5}{\sqrt{30}},\) and \(\displaystyle cosγ=\frac{2}{\sqrt{30}}; b. α=101°, β=24°,\) and \(\displaystyle γ=69°\)
43) Consider \(\displaystyle u=⟨a,b,c⟩\) a nonzero three-dimensional vector. Let \(\displaystyle cosα, cosβ,\) and \(\displaystyle cosγ\) be the directions of the cosines of \(\displaystyle u\). Show that \(\displaystyle cos^2α+cos^2β+cos^2γ=1.\)
44) Determine the direction cosines of vector \(\displaystyle u=i+2j+2k\) and show they satisfy \(\displaystyle cos^2α+cos^2β+cos^2γ=1.\)
of vector v into the orthogonal components w and q, where w is the projection of v onto u and q is a vector orthogonal to the direction of u.
For the following exercises, the vectors \(\displaystyle u\) and \(\displaystyle v\) are given.
a. Find the vector projection \(\displaystyle w=proj_uv\) of vector \(\displaystyle v\) onto vector \(\displaystyle u\). Express your answer in component form.
b. Find the scalar projection \(\displaystyle comp_uv\) of vector \(\displaystyle v\) onto vector \(\displaystyle u\).
45) \(\displaystyle u=5i+2j, v=2i+3j\)
Solution: \(\displaystyle a. w=⟨\frac{80}{29},\frac{32}{29}⟩; b. comp_uv=\frac{16}{\sqrt{29}}\)
46) \(\displaystyle u=⟨−4,7⟩, v=⟨3,5⟩\)
47) \(\displaystyle u=3i+2k, v=2j+4k\)
Solution: \(\displaystyle a. w=⟨\frac{24}{13},0,\frac{16}{13}⟩; b. comp_uv=\frac{8}{\sqrt{13}}\)
48) \(\displaystyle u=⟨4,4,0⟩, v=⟨0,4,1⟩\)
49) Consider the vectors \(\displaystyle u=4i−3j\) and \(\displaystyle v=3i+2j.\)
a. Find the component form of vector \(\displaystyle w=proj_uv\) that represents the projection of \(\displaystyle v\) onto \(\displaystyle u\).
b. Write the decomposition \(\displaystyle v=w+q\) of vector \(\displaystyle v\) into the orthogonal components \(\displaystyle w\) and \(\displaystyle q\), where \(\displaystyle w\) is the projection of \(\displaystyle v\) onto \(\displaystyle u\) and \(\displaystyle q\) is a vector orthogonal to the direction of \(\displaystyle u\).
Solution: \(\displaystyle a. w=⟨\frac{24}{25},−\frac{18}{25}⟩; b. q=⟨\frac{51}{25},\frac{68}{25}⟩, v=w+q=⟨\frac{24}{25},−\frac{18}{25}⟩+⟨\frac{51}{25},\frac{68}{25}⟩\)
50) Consider vectors \(\displaystyle u=2i+4j\) and \(\displaystyle v=4j+2k.\)
a. Find the component form of vector \(\displaystyle w=proj_uv\) 0that represents the projection of \(\displaystyle v\) onto \(\displaystyle u\).
51) A methane molecule has a carbon atom situated at the origin and four hydrogen atoms located at points \(\displaystyle P(1,1,−1),Q(1,−1,1),R(−1,1,1),\) and \(\displaystyle S(−1,−1,−1)\) (see figure).
a. Find the distance between the hydrogen atoms located at P and R.
b. Find the angle between vectors \(\displaystyle \vec{OS}\) and \(\displaystyle \vec{OR}\) that connect the carbon atom with the hydrogen atoms located at S and R, which is also called the bond angle. Express the answer in degrees rounded to two decimal places.
Solution: \(\displaystyle a. 2\sqrt{2}; b. 109.47°\)
52) [T] Find the vectors that join the center of a clock to the hours 1:00, 2:00, and 3:00. Assume the clock is circular with a radius of 1 unit.
53) Find the work done by force \(\displaystyle F=⟨5,6,−2⟩\) (measured in Newtons) that moves a particle from point \(\displaystyle P(3,−1,0)\) to point \(\displaystyle Q(2,3,1)\) along a straight line (the distance is measured in meters).
Solution: \(\displaystyle 17N⋅m\)
54) [T] A sled is pulled by exerting a force of 100 N on a rope that makes an angle of \(\displaystyle 25°\) with the horizontal. Find the work done in pulling the sled 40 m. (Round the answer to one decimal place.)
55) [T] A father is pulling his son on a sled at an angle of \(\displaystyle 20°\)with the horizontal with a force of 25 lb (see the following image). He pulls the sled in a straight path of 50 ft. How much work was done by the man pulling the sled? (Round the answer to the nearest integer.)
Solution: 1175 ft⋅lb
56) [T] A car is towed using a force of 1600 N. The rope used to pull the car makes an angle of 25° with the horizontal. Find the work done in towing the car 2 km. Express the answer in joules \(\displaystyle (1J=1N⋅m)\) rounded to the nearest integer.
57) [T] A boat sails north aided by a wind blowing in a direction of \(\displaystyle N30°E\) with a magnitude of 500 lb. How much work is performed by the wind as the boat moves 100 ft? (Round the answer to two decimal places.)
\(25000\sqrt{3}\) ft-lbs \(\approx 43,301.27\) ft-lbs
Vector representing the wind: \(\vecs w = 500\cos 60^{\circ} \mathbf{\hat i} + 500\sin 60^{\circ} \mathbf{\hat j}\)
Vector representing the displacement to the north: \(\vecs d = 100 \mathbf{\hat j}\)
Work done by the wind: \(W = \vecs w \cdot \vecs d = 25000\sqrt{3}\) ft-lbs \(\approx 43,301.27\) ft-lbs
58) Vector \(\displaystyle p=⟨150,225,375⟩\) represents the price of certain models of bicycles sold by a bicycle shop. Vector \(\displaystyle n=⟨10,7,9⟩\) represents the number of bicycles sold of each model, respectively. Compute the dot product \(\displaystyle p⋅n\) and state its meaning.
59) [T] Two forces \(\displaystyle F_1\) and \(\displaystyle F_2\) are represented by vectors with initial points that are at the origin. The first force has a magnitude of 20 lb and the terminal point of the vector is point \(\displaystyle P(1,1,0)\). The second force has a magnitude of 40 lb and the terminal point of its vector is point \(\displaystyle Q(0,1,1)\). Let F be the resultant force of forces \(\displaystyle F_1\) and \(\displaystyle F_2\).
a. Find the magnitude of \(\displaystyle F\). (Round the answer to one decimal place.)
b. Find the direction angles of \(\displaystyle F\). (Express the answer in degrees rounded to one decimal place.)
Solution: \(\displaystyle a. ∥F_1+F_2∥=52.9\) lb; b. The direction angles are \(\displaystyle α=74.5°,β=36.7°,\) and \(\displaystyle γ=57.7°.\)
60) [T] Consider \(\displaystyle r(t)=⟨cost,sint,2t⟩\) the position vector of a particle at time \(\displaystyle t∈[0,30]\), where the components of \(\displaystyle r\) are expressed in centimeters and time in seconds. Let \(\displaystyle \vec{OP}\) be the position vector of the particle after 1 sec.
a. Show that all vectors \(\displaystyle \vec{PQ}\), where \(\displaystyle Q(x,y,z)\) is an arbitrary point, orthogonal to the instantaneous velocity vector v(1) of the particle after 1 sec, can be expressed as \(\displaystyle \vec{PQ}=⟨x−cos1,y−sin1,z−2⟩\), where \(\displaystyle xsin1−ycos1−2z+4=0.\) The set of point Q describes a plane called the normal plane to the path of the particle at point P.
b. Use a CAS to visualize the instantaneous velocity vector and the normal plane at point P along with the path of the particle.
a. Find the cross product \(\displaystyle u×v\) of the vectors \(\displaystyle u\) and \(\displaystyle v\). Express the answer in component form.
b. Sketch the vectors \(\displaystyle u,v,\) and \(\displaystyle u×v.\)
1) \(\displaystyle u=⟨2,0,0⟩, v=⟨2,2,0⟩\)
\(\displaystyle a. u×v=⟨0,0,4⟩;\)
2) \(\displaystyle u=⟨3,2,−1⟩, v=⟨1,1,0⟩\)
3) \(\displaystyle u=2i+3j, v=j+2k\)
a. \(\displaystyle a. u×v=⟨6,−4,2⟩;\)
4) \(\displaystyle u=2j+3k, v=3i+k\)
5) Simplify \(\displaystyle (i×i−2i×j−4i×k+3j×k)×i.\)
Solution: \(\displaystyle −2j−4k\)
6) Simplify \(\displaystyle j×(k×j+2j×i−3j×j+5i×k).\)
In the following exercises, vectors \(\displaystyle u\) and \(\displaystyle v\) are given. Find unit vector \(\displaystyle w\) in the direction of the cross product vector \(\displaystyle u×v.\) Express your answer using standard unit vectors.
7) \(\displaystyle u=⟨3,−1,2⟩, v=⟨−2,0,1⟩\)
Solution: \(\displaystyle w=−\frac{1}{3\sqrt{6}}i−\frac{7}{3\sqrt{6}}j−\frac{2}{3\sqrt{6}}k\)
9) \(\displaystyle u=\vec{AB}, v=\vec{AC},\) where \(\displaystyle A(1,0,1), B(1,−1,3)\), and \(\displaystyle C(0,0,5)\)
Solution: \(\displaystyle w=−\frac{4}{\sqrt{21}}i−\frac{2}{\sqrt{21}}j−\frac{1}{\sqrt{21}}k\)
10) \(\displaystyle u=\vec{OP}, v=\vec{PQ},\) where \(\displaystyle P(−1,1,0)\) and \(\displaystyle Q(0,2,1)\)
11) Determine the real number \(\displaystyle α\) such that \(\displaystyle u×v\) and \(\displaystyle i\) are orthogonal, where \(\displaystyle u=3i+j−5k\) and \(\displaystyle v=4i−2j+αk.\)
Solution: \(\displaystyle α=10\)
12) Show that \(\displaystyle u×v\) and \(\displaystyle 2i−14j+2k\) cannot be orthogonal for any α real number, where \(\displaystyle u=i+7j−k\) and \(\displaystyle v=αi+5j+k\).
13) Show that \(\displaystyle u×v\) is orthogonal to \(\displaystyle u+v\) and \(\displaystyle u−v\), where \(\displaystyle u\) and \(\displaystyle v\) are nonzero vectors.
14) Show that \(\displaystyle v×u\) is orthogonal to \(\displaystyle (u⋅v)(u+v)+u\), where \(\displaystyle u\) and \(\displaystyle v\) are nonzero vectors.
15) Calculate the determinant \(\displaystyle \begin{bmatrix}i&j&K\\1&&−1&7\\2&0&3\end{bmatrix}\).
Solution: \(\displaystyle −3i+11j+2k\)
16) Calculate the determinant \(\displaystyle \begin{bmatrix}i&j&K\\0&3&−4\\1&6&−1\end{bmatrix}\).
For the following exercises, the vectors \(\displaystyle u\) and \(\displaystyle v\) are given. Use determinant notation to find vector \(\displaystyle w\) orthogonal to vectors \(\displaystyle u\) and \(\displaystyle v\).
17) \(\displaystyle u=⟨−1,0,e^t⟩, v=⟨1,e^{−t},0⟩,\) where \(\displaystyle t\) is a real number
Solution: \(\displaystyle w=⟨−1,e^t,−e^{−t}⟩\)
18) \(\displaystyle u=⟨1,0,x⟩, v=⟨\frac{2}{x},1,0⟩,\) where \(\displaystyle x\) is a nonzero real number
19) Find vector \(\displaystyle (a−2b)×c,\) where \(\displaystyle a=\begin{bmatrix}i&j&k\\2&−1&5\\0&1&8\end{bmatrix}, b=\begin{bmatrix}i&j&K\\0&1&1\\2&−1&−2\end{bmatrix},\) and \(\displaystyle c=i+j+k.\)
Solution: \(\displaystyle −26i+17j+9k\)
20) Find vector \(\displaystyle c×(a+3b),\) where \(\displaystyle a=\begin{bmatrix}i&j&K\\5&0&9\\0&1&0\end{bmatrix}, b=\begin{bmatrix}i&j&k\\0&−1&1\\7&1&−1\end{bmatrix},\) and \(\displaystyle c=i−k.\)
21) [T] Use the cross product \(\displaystyle u×v\) to find the acute angle between vectors \(\displaystyle u\) and \(\displaystyle v\), where \(\displaystyle u=i+2j\) and \(\displaystyle v=i+k.\) Express the answer in degrees rounded to the nearest integer.
Solutuion: \(\displaystyle 72°\)
22) [T] Use the cross product \(\displaystyle u×v\) to find the obtuse angle between vectors \(\displaystyle u\) and \(\displaystyle v\), where \(\displaystyle u=−i+3j+k\) and \(\displaystyle v=i−2j.\) Express the answer in degrees rounded to the nearest integer.
23) Use the sine and cosine of the angle between two nonzero vectors \(\displaystyle u\) and \(\displaystyle v\) to prove Lagrange's identity: \(\displaystyle ‖u×v‖^2=‖u‖^2‖v‖^2−(u⋅v)^2\).
24) Verify Lagrange's identity \(\displaystyle ‖u×v‖^2=‖u‖^2‖v‖^2−(u⋅v)^2\) for vectors \(\displaystyle u=−i+j−2k\) and \(\displaystyle v=2i−j.\)
25) Nonzero vectors \(\displaystyle u\) and \(\displaystyle v\) are called collinear if there exists a nonzero scalar \(\displaystyle α\) such that \(\displaystyle v=αu\). Show that \(\displaystyle u\) and \(\displaystyle v\) are collinear if and only if \(\displaystyle u×v=0.\)
26) Nonzero vectors \(\displaystyle u\) and \(\displaystyle v\) are called collinear if there exists a nonzero scalar \(\displaystyle α\) such that \(\displaystyle v=αu\). Show that vectors \(\displaystyle \vec{AB}\) and \(\displaystyle \vec{AC}\) are collinear, where \(\displaystyle A(4,1,0), B(6,5,−2),\) and \(\displaystyle C(5,3,−1).\)
27) Find the area of the parallelogram with adjacent sides \(\displaystyle u=⟨3,2,0⟩\) and \(\displaystyle v=⟨0,2,1⟩\).
Solution: \(\displaystyle 7\)
28) Find the area of the parallelogram with adjacent sides \(\displaystyle u=i+j\) and \(\displaystyle v=i+k.\)
29) Consider points \(\displaystyle A(3,−1,2),B(2,1,5),\) and \(\displaystyle C(1,−2,−2).\)
a. Find the area of parallelogram ABCD with adjacent sides \(\displaystyle \vec{AB}\) and \(\displaystyle \vec{AC}\).
b. Find the area of triangle ABC.
c. Find the distance from point A to line BC.
Solution: \(\displaystyle a. 5\sqrt{6}; b. \frac{5\sqrt{6}}{2}; c. \frac{5\sqrt{6}}{\sqrt{59}}\)
30) Consider points \(\displaystyle A(2,−3,4),B(0,1,2),\) and \(\displaystyle C(−1,2,0).\)
c. Find the distance from point B to line AC.
In the following exercises, vectors \(\displaystyle u,v\), and \(\displaystyle w\) are given.
a. Find the triple scalar product \(\displaystyle u⋅(v×w).\)
b. Find the volume of the parallelepiped with the adjacent edges \(\displaystyle u,v\), and \(\displaystyle w\).
31) \(\displaystyle u=i+j, v=j+k,\) and \(\displaystyle w=i+k\)
Solution: \(\displaystyle a. 2; b. 2\)
32) \(\displaystyle u=⟨−3,5,−1⟩, v=⟨0,2,−2⟩,\) and \(\displaystyle w=⟨3,1,1⟩\)
33) Calculate the triple scalar products \(\displaystyle v⋅(u×w)\) and \(\displaystyle w⋅(u×v),\) where \(\displaystyle u=⟨1,1,1⟩, v=⟨7,6,9⟩,\) and \(\displaystyle w=⟨4,2,7⟩.\)
Solution: \(\displaystyle v⋅(u×w)=−1, w⋅(u×v)=1\)
34) Calculate the triple scalar products \(\displaystyle w⋅(v×u)\) and \(\displaystyle u⋅(w×v),\) where \(\displaystyle u=⟨4,2,−1⟩, v=⟨2,5,−3⟩,\) and \(\displaystyle w=⟨9,5,−10⟩.\)
35) Find vectors \(\displaystyle a,b\), and \(\displaystyle c\) with a triple scalar product given by the determinant \(\displaystyle \begin{bmatrix}1&2&3\\0&2&5\\8&9&2\end{bmatrix}\). Determine their triple scalar product.
Solution: \(\displaystyle a=⟨1,2,3⟩, b=⟨0,2,5⟩, c=⟨8,9,2⟩; a⋅(b×c)=−9\)
36) The triple scalar product of vectors \(\displaystyle a,b,\) and \(\displaystyle c\) is given by the determinant \(\displaystyle \begin{bmatrix}0&−2&1\\0&1&4\\1&−3&7\end{bmatrix}\). Find vector \(\displaystyle a−b+c.\)
37) Consider the parallelepiped with edges \(\displaystyle OA,OB,\) and \(\displaystyle OC\), where \(\displaystyle A(2,1,0),B(1,2,0),\) and \(\displaystyle C(0,1,α).\)
a. Find the real number \(\displaystyle α>0\) such that the volume of the parallelepiped is \(\displaystyle 3\) units3.
b. For \(\displaystyle α=1,\) find the height \(\displaystyle h\) from vertex \(\displaystyle C\) of the parallelepiped. Sketch the parallelepiped.
Solution: \(\displaystyle a. α=1; b. h=1,\)
38) Consider points \(\displaystyle A(α,0,0),B(0,β,0),\) and \(\displaystyle C(0,0,γ)\), with \(\displaystyle α, β\), and \(\displaystyle γ\) positive real numbers.
a. Determine the volume of the parallelepiped with adjacent sides \(\displaystyle \vec{OA}, \vec{OB},\) and \(\displaystyle \vec{OC}\).
b. Find the volume of the tetrahedron with vertices \(\displaystyle O,A,B,\) and \(\displaystyle C\). (Hint: The volume of the tetrahedron is \(\displaystyle 1/6\) of the volume of the parallelepiped.)
c. Find the distance from the origin to the plane determined by \(\displaystyle A,B,\) and \(\displaystyle C\). Sketch the parallelepiped and tetrahedron.
39) Let \(\displaystyle u,v,\) and \(\displaystyle w\) be three-dimensional vectors and c be a real number. Prove the following properties of the cross product.
a. \(\displaystyle u×u=0\)
b. \(\displaystyle u×(v+w)=(u×v)+(u×w)\)
c. \(\displaystyle c(u×v)=(cu)×v=u×(cv)\)
d. \(\displaystyle u⋅(u×v)=0\)
40) Show that vectors \(\displaystyle u=⟨1,0,−8⟩, v=⟨0,1,6⟩,\) and \(\displaystyle w=⟨−1,9,3⟩\) satisfy the following properties of the cross product.
41) Nonzero vectors \(\displaystyle u,v,\) and \(\displaystyle w\) are said to be linearly dependent if one of the vectors is a linear combination of the other two. For instance, there exist two nonzero real numbers \(\displaystyle α\) and \(\displaystyle β\) such that \(\displaystyle w=αu+βv\). Otherwise, the vectors are called linearly independent. Show that \(\displaystyle u,v,\) and \(\displaystyle w\) are coplanar if and only if they are linear dependent.
42) Consider vectors \(\displaystyle u=⟨1,4,−7⟩, v=⟨2,−1,4⟩, w=⟨0,−9,18⟩,\) and \(\displaystyle p=⟨0,−9,17⟩.\)
a. Show that \(\displaystyle u,v,\) and \(\displaystyle w\) are coplanar by using their triple scalar product
b. Show that \(\displaystyle u,v,\) and \(\displaystyle w\) are coplanar, using the definition that there exist two nonzero real numbers \(\displaystyle α\) and \(\displaystyle β\) such that \(\displaystyle w=αu+βv.\)
c. Show that \(\displaystyle u,v,\) and \(\displaystyle p\) are linearly independent—that is, none of the vectors is a linear combination of the other two.
43) Consider points \(\displaystyle A(0,0,2), B(1,0,2), C(1,1,2),\) and \(\displaystyle D(0,1,2).\) Are vectors \(\displaystyle \vec{AB}, \vec{AC},\) and \(\displaystyle \vec{AD}\) linearly dependent (that is, one of the vectors is a linear combination of the other two)?
Solution: Yes, \(\displaystyle \vec{AD}=α\vec{AB}+β\vec{AC},\) where \(\displaystyle α=−1\) and \(\displaystyle β=1.\)
44) Show that vectors \(\displaystyle i+j, i−j,\) and \(\displaystyle i+j+k\) are linearly independent—that is, there exist two nonzero real numbers \(\displaystyle α\) and \(\displaystyle β\) such that \(\displaystyle i+j+k=α(i+j)+β(i−j).\)
45) Let \(\displaystyle u=⟨u_1,u_2⟩\) and \(\displaystyle v=⟨v_1,v_2⟩\) be two-dimensional vectors. The cross product of vectors \(\displaystyle u\) and \(\displaystyle v\) is not defined. However, if the vectors are regarded as the three-dimensional vectors \(\displaystyle \tilde{u}=⟨u_1,u_2,0⟩\) and \(\displaystyle \tilde{v}=⟨v_1,v_2,0⟩\), respectively, then, in this case, we can define the cross product of \(\displaystyle \tilde{u}\) and \(\displaystyle \tilde{v}\). In particular, in determinant notation, the cross product of \(\displaystyle \tilde{u}\) and \(\displaystyle \tilde{v}\) is given by
\(\displaystyle \tilde{u}×\tilde{v}=\begin{bmatrix}i&j&k\\u_1&u_2&0\\v_1&v_2&0\end{bmatrix}\).
Use this result to compute \(\displaystyle (icosθ+jsinθ)×(isinθ−jcosθ),\) where \(\displaystyle θ\) is a real number.
Solution: \(\displaystyle −k\)
46) Consider points \(\displaystyle P(2,1), Q(4,2),\) and \(\displaystyle R(1,2).\)
a. Find the area of triangle \(\displaystyle P,Q,\) and \(\displaystyle R.\)
b. Determine the distance from point \(\displaystyle R\) to the line passing through \(\displaystyle P\) and \(\displaystyle Q\).
47) Determine a vector of magnitude \(\displaystyle 10\) perpendicular to the plane passing through the x-axis and point \(\displaystyle P(1,2,4).\)
Solution: \(\displaystyle ⟨0,±4\sqrt{5},2\sqrt{5}⟩\)
48) Determine a unit vector perpendicular to the plane passing through the z-axis and point \(\displaystyle A(3,1,−2).\)
49) Consider \(\displaystyle u\) and \(\displaystyle v\) two three-dimensional vectors. If the magnitude of the cross product vector \(\displaystyle u×v\) is \(\displaystyle k\) times larger than the magnitude of vector \(\displaystyle u\), show that the magnitude of \(\displaystyle v\) is greater than or equal to \(\displaystyle k\), where \(\displaystyle k\) is a natural number.
50) [T] Assume that the magnitudes of two nonzero vectors \(\displaystyle u\) and \(\displaystyle v\) are known. The function \(\displaystyle f(θ)=‖u‖‖v‖sinθ\) defines the magnitude of the cross product vector \(\displaystyle u×v,\) where \(\displaystyle θ∈[0,π]\) is the angle between \(\displaystyle u\) and \(\displaystyle v\).
a. Graph the function \(\displaystyle f\).
b. Find the absolute minimum and maximum of function \(\displaystyle f\). Interpret the results.
c. If \(\displaystyle ‖u‖=5\) and \(\displaystyle ‖v‖=2\), find the angle between \(\displaystyle u\) and \(\displaystyle v\) if the magnitude of their cross product vector is equal to \(\displaystyle 9\).
51) Find all vectors \(\displaystyle w=⟨w_1,w_2,w_3⟩\) that satisfy the equation \(\displaystyle ⟨1,1,1⟩×w=⟨−1,−1,2⟩.\)
Solution: \(\displaystyle w=⟨w_3−1,w_3+1,w_3⟩,\) where \(\displaystyle w_3\) is any real number
52) Solve the equation \(\displaystyle w×⟨1,0,−1⟩=⟨3,0,3⟩,\) where \(\displaystyle w=⟨w_1,w_2,w_3⟩\) is a nonzero vector with a magnitude of \(\displaystyle 3\).
53) [T] A mechanic uses a 12-in. wrench to turn a bolt. The wrench makes a \(\displaystyle 30°\) angle with the horizontal. If the mechanic applies a vertical force of \(\displaystyle 10\) lb on the wrench handle, what is the magnitude of the torque at point \(\displaystyle P\) (see the following figure)? Express the answer in foot-pounds rounded to two decimal places.
Solution: 8.66 ft-lb
54) [T] A boy applies the brakes on a bicycle by applying a downward force of 20 lb on the pedal when the 6-in. crank makes a \(\displaystyle 40°\) angle with the horizontal (see the following figure). Find the torque at point \(\displaystyle P\). Express your answer in foot-pounds rounded to two decimal places.
55) [T] Find the magnitude of the force that needs to be applied to the end of a 20-cm wrench located on the positive direction of the y-axis if the force is applied in the direction \(\displaystyle ⟨0,1,−2⟩\) and it produces a \(\displaystyle 100\) N·m torque to the bolt located at the origin.
Solution: 250 N
56) [T] What is the magnitude of the force required to be applied to the end of a 1-ft wrench at an angle of \(\displaystyle 35°\) to produce a torque of \(\displaystyle 20\) N·m?
57) [T] The force vector \(\displaystyle F\) acting on a proton with an electric charge of \(\displaystyle 1.6×10^{−19}C\) (in coulombs) moving in a magnetic field \(\displaystyle B\) where the velocity vector \(\displaystyle v\) is given by \(\displaystyle F=1.6×10^{−19}(v×B)\) (here, \(\displaystyle v\) is expressed in meters per second, \(\displaystyle B\) is in tesla [T], and \(\displaystyle F\) is in newtons [N]). Find the force that acts on a proton that moves in the xy-plane at velocity \(\displaystyle v=10^5i+10^5j\) (in meters per second) in a magnetic field given by \(\displaystyle B=0.3j\).
Solution: \(\displaystyle F=4.8×10^{−15}kN\)
58) [T] The force vector \(\displaystyle F\) acting on a proton with an electric charge of \(\displaystyle 1.6×10^{−19}C\) moving in a magnetic field \(\displaystyle B\) where the velocity vector \(\displaystyle v\) is given by \(\displaystyle F=1.6×10^{−19}(v×B)\) (here, \(\displaystyle v\) is expressed in meters per second, \(\displaystyle B\) in \(\displaystyle T\), and \(\displaystyle F\) in \(\displaystyle N\)). If the magnitude of force \(\displaystyle F\) acting on a proton is \(\displaystyle 5.9×10^{−17} N\) and the proton is moving at the speed of 300 m/sec in magnetic field \(\displaystyle B\) of magnitude 2.4 T, find the angle between velocity vector \(\displaystyle v\) of the proton and magnetic field \(\displaystyle B\). Express the answer in degrees rounded to the nearest integer.
60) [T] Consider \(\displaystyle r(t)=⟨cost,sint,2t⟩\) the position vector of a particle at time \(\displaystyle t∈[0,30]\), where the components of \(\displaystyle r\) are expressed in centimeters and time in seconds. Let \(\displaystyle \vec{OP}\) be the position vector of the particle after \(\displaystyle 1\) sec.
a. Determine unit vector \(\displaystyle B(t)\) (called the binormal unit vector) that has the direction of cross product vector \(\displaystyle v(t)×a(t),\) where \(\displaystyle v(t)\) and \(\displaystyle a(t)\) are the instantaneous velocity vector and, respectively, the acceleration vector of the particle after \(\displaystyle t\) seconds.
b. Use a CAS to visualize vectors \(\displaystyle v(1), a(1),\) and \(\displaystyle B(1)\) as vectors starting at point \(\displaystyle P\) along with the path of the particle.
\(\displaystyle a. B(t)=⟨\frac{2sint}{\sqrt{5}},−\frac{2cost}{\sqrt{5}},\frac{1}{\sqrt{5}}⟩;\)
61) A solar panel is mounted on the roof of a house. The panel may be regarded as positioned at the points of coordinates (in meters) \(\displaystyle A(8,0,0), B(8,18,0), C(0,18,8),\) and \(\displaystyle D(0,0,8)\) (see the following figure).
a. Find vector \(\displaystyle n=\vec{AB}×\vec{AD}\) perpendicular to the surface of the solar panels. Express the answer using standard unit vectors.
b. Assume unit vector \(\displaystyle s=\frac{1}{\sqrt{3}}i+\frac{1}{\sqrt{3}}j+\frac{1}{\sqrt{3}}k\) points toward the Sun at a particular time of the day and the flow of solar energy is \(\displaystyle F=900s\) (in watts per square meter [\(\displaystyle W/m^2\)]). Find the predicted amount of electrical power the panel can produce, which is given by the dot product of vectors \(\displaystyle F\) and \(\displaystyle n\) (expressed in watts).
c. Determine the angle of elevation of the Sun above the solar panel. Express the answer in degrees rounded to the nearest whole number. (Hint: The angle between vectors \(\displaystyle n\) and \(\displaystyle s\) and the angle of elevation are complementary.)
In the following exercises, points \(\displaystyle P\) and \(\displaystyle Q\) are given. Let \(\displaystyle L\) be the line passing through points \(\displaystyle P\) and \(\displaystyle Q\).
a. Find the vector equation of line \(\displaystyle L\).
b. Find parametric equations of line \(\displaystyle L\).
c. Find symmetric equations of line \(\displaystyle L\).
d. Find parametric equations of the line segment determined by \(\displaystyle P\) and \(\displaystyle Q\).
1) \(\displaystyle P(−3,5,9), Q(4,−7,2)\)
Solution: \(\displaystyle a. r=⟨−3,5,9⟩+t⟨7,−12,−7⟩, t∈R; b. x=−3+7t,y=5−12t,z=9−7t, t∈R; c. \frac{x+3}{7}=\frac{y−5}{−12}=\frac{z−9}{−7}; d. x=−3+7t,y=5−12t,z=9−7t, t∈[0,1]\)
2) \(\displaystyle P(4,0,5),Q(2,3,1)\)
3) \(\displaystyle P(−1,0,5), Q(4,0,3)\)
Solution: \(\displaystyle a. r=⟨−1,0,5⟩+t⟨5,0,−2⟩, t∈R; b. x=−1+5t,y=0,z=5−2t, t∈R; c. \frac{x+1}{5}=\frac{z−5}{−2},y=0; d. x=−1+5t,y=0,z=5−2t, t∈[0,1]\)
4) \(\displaystyle P(7,−2,6), Q(−3,0,6)\)
For the following exercises, point \(\displaystyle P\) and vector \(\displaystyle v\) are given. Let \(\displaystyle L\) be the line passing through point \(\displaystyle P\) with direction \(\displaystyle v\).
a. Find parametric equations of line \(\displaystyle L\).
b. Find symmetric equations of line \(\displaystyle L\).
c. Find the intersection of the line with the xy-plane.
5) \(\displaystyle P(1,−2,3), v=⟨1,2,3⟩\)
Solution: \(\displaystyle a. x=1+t,y=−2+2t,z=3+3t, t∈R; b. \frac{x−1}{1}=\frac{y+2}{2}=\frac{z−3}{3}; c. (0,−4,0)\)
6) \(\displaystyle P(3,1,5), v=⟨1,1,1⟩\)
7) \(\displaystyle P(3,1,5), v=\vec{QR},\) where \(\displaystyle Q(2,2,3)\) and \(\displaystyle R(3,2,3)\)
Solution: \(\displaystyle a. x=3+t,y=1,z=5, t∈R; b. y=1,z=5;\) c. The line does not intersect the xy-plane.
For the following exercises, line \(\displaystyle L\) is given.
a. Find point \(\displaystyle P\) that belongs to the line and direction vector \(\displaystyle v\) of the line. Express \(\displaystyle v\) in component form.
b. Find the distance from the origin to line \(\displaystyle L\).
9) \(\displaystyle x=1+t,y=3+t,z=5+4t, t∈R\)
Solution: \(\displaystyle a. P(1,3,5), v=⟨1,1,4⟩; b. \sqrt{3}\)
10) \(\displaystyle −x=y+1,z=2\)
Find the distance between point \(\displaystyle A(−3,1,1)\) and the line of symmetric equations
11) \(\displaystyle x=−y=−z.\)
Solution: \(\displaystyle \frac{2\sqrt{2}}{\sqrt{3}}\)
Find the distance between point \(\displaystyle A(4,2,5)\) and the line of parametric equations
12) \(\displaystyle x=−1−t,y=−t,z=2, t∈R.\)
For the following exercises, lines \(\displaystyle L_1\) and \(\displaystyle L_2\) are given.
a. Verify whether lines \(\displaystyle L_1\) and \(\displaystyle L_2\) are parallel.
b. If the lines \(\displaystyle L_1\) and \(\displaystyle L_2\) are parallel, then find the distance between them.
13) \(\displaystyle L_1:x=1+t,y=t,z=2+t, t∈R, L_2:x−3=y−1=z−3\)
Solution: a. Parallel; b. \(\displaystyle \frac{\sqrt{2}}{\sqrt{3}}\)
14) \(\displaystyle L_1:x=2,y=1,z=t, L_2:x=1,y=1,z=2−3t, t∈R\)
15) Show that the line passing through points \(\displaystyle P(3,1,0)\) and \(\displaystyle Q(1,4,−3)\) is perpendicular to the line with equation \(\displaystyle x=3t,y=3+8t,z=−7+6t, t∈R.\)
16) Are the lines of equations \(\displaystyle x=−2+2t,y=−6,z=2+6t\) and \(\displaystyle x=−1+t,y=1+t,z=t, t∈R,\) perpendicular to each other?
17) Find the point of intersection of the lines of equations \(\displaystyle x=−2y=3z\) and \(\displaystyle x=−5−t,y=−1+t,z=t−11, t∈R.\)
Solution: \(\displaystyle (−12,6,−4)\)
18) Find the intersection point of the x-axis with the line of parametric equations \(\displaystyle x=10+t,y=2−2t,z=−3+3t,
t∈R.\)
For the following exercises, lines \(\displaystyle L_1\) and \(\displaystyle L_2\) are given. Determine whether the lines are equal, parallel but not equal, skew, or intersecting.
19) \(\displaystyle L_1:x=y−1=−z\) and \(\displaystyle L_2:x−2=−y=\frac{z}{2}\)
Solution: The lines are skew.
20) \(\displaystyle L_1:x=2t,y=0,z=3, t∈R\) and \(\displaystyle L_2:x=0,y=8+s,z=7+s, s∈R\)
21) \(\displaystyle L_1:x=−1+2t,y=1+3t,z=7t, t∈R\) and \(\displaystyle L_2:x−1=\frac{2}{3}(y−4)=\frac{2}{7}z−2\)
Solution: The lines are equal.
22) \(\displaystyle L_1:3x=y+1=2z\) and \(\displaystyle L_2:x=6+2t,y=17+6t,z=9+3t, t∈R\)
23) Consider line \(\displaystyle L\) of symmetric equations \(\displaystyle x−2=−y=\frac{z}{2}\) and point \(\displaystyle A(1,1,1).\)
a. Find parametric equations for a line parallel to \(\displaystyle L\) that passes through point \(\displaystyle A\).
b. Find symmetric equations of a line skew to \(\displaystyle L\) and that passes through point \(\displaystyle A\).
c. Find symmetric equations of a line that intersects \(\displaystyle L\) and passes through point \(\displaystyle A\).
Solution: \(\displaystyle a. x=1+t,y=1−t,z=1+2t, t∈R;\) b. For instance, the line passing through \(\displaystyle A\) with direction vector \(\displaystyle j:x=1,z=1;\) c. For instance, the line passing through \(\displaystyle A\) and point \(\displaystyle (2,0,0)\) that belongs to \(\displaystyle L\) is a line that intersects; \(\displaystyle L:\frac{x−1}{−1}=y−1=z−1\)
24) Consider line \(\displaystyle L\) of parametric equations \(\displaystyle x=t,y=2t,z=3, t∈R.\)
a. Find parametric equations for a line parallel to \(\displaystyle L\) that passes through the origin.
b. Find parametric equations of a line skew to \(\displaystyle L\) that passes through the origin.
c. Find symmetric equations of a line that intersects \(\displaystyle L\) and passes through the origin.
For the following exercises, point \(\displaystyle P\) and vector \(\displaystyle n\) are given.
a. Find the scalar equation of the plane that passes through \(\displaystyle P\) and has normal vector \(\displaystyle n\).
b. Find the general form of the equation of the plane that passes through \(\displaystyle P\) and has normal vector \(\displaystyle n\).
25) \(\displaystyle P(0,0,0), n=3i−2j+4k\)
Solution: \(\displaystyle a. 3x−2y+4z=0; b. 3x−2y+4z=0\)
26) \(\displaystyle P(3,2,2), n=2i+3j−k\)
27) \(\displaystyle P(1,2,3), n=⟨1,2,3⟩\)
Solution: \(\displaystyle a. (x−1)+2(y−2)+3(z−3)=0; b. x+2y+3z−14=0\)
28) \(\displaystyle P(0,0,0), n=⟨−3,2,−1⟩\)
For the following exercises, the equation of a plane is given.
a. Find normal vector \(\displaystyle n\) to the plane. Express \(\displaystyle n\) using standard unit vectors.
b. Find the intersections of the plane with the axes of coordinates.
c. Sketch the plane.
29) [T] \(\displaystyle 4x+5y+10z−20=0\)
\(\displaystyle a. n=4i+5j+10k; b. (5,0,0), (0,4,0),\) and \(\displaystyle (0,0,2);\)
30) \(\displaystyle 3x+4y−12=0\)
31) \(\displaystyle 3x−2y+4z=0\)
\(\displaystyle a. n=3i−2j+4k; b. (0,0,0);\)
32) \(\displaystyle x+z=0\)
33) Given point \(\displaystyle P(1,2,3)\) and vector \(\displaystyle n=i+j\), find point \(\displaystyle Q\) on the x-axis such that \(\displaystyle \vec{PQ}\) and \(\displaystyle n\) are orthogonal.
Solution: \(\displaystyle (3,0,0)\)
34) Show there is no plane perpendicular to \(\displaystyle n=i+j\) that passes through points \(\displaystyle P(1,2,3)\) and \(\displaystyle Q(2,3,4)\).
35) Find parametric equations of the line passing through point \(\displaystyle P(−2,1,3)\) that is perpendicular to the plane of equation \(\displaystyle 2x−3y+z=7.\)
Solution: \(\displaystyle x=−2+2t,y=1−3t,z=3+t, t∈R\)
36) Find symmetric equations of the line passing through point \(\displaystyle P(2,5,4)\) that is perpendicular to the plane of equation \(\displaystyle 2x+3y−5z=0.\)
37) Show that line \(\displaystyle \frac{x−1}{2}=\frac{y+1}{3}=\frac{z−2}{4}\) is parallel to plane \(\displaystyle x−2y+z=6\).
38) Find the real number \(\displaystyle α\) such that the line of parametric equations \(\displaystyle x=t,y=2−t,z=3+t, t∈R\) is parallel to the plane of equation \(\displaystyle αx+5y+z−10=0.\)
For the following exercises, the equations of two planes are given.
a. Determine whether the planes are parallel, orthogonal, or neither.
b. If the planes are neither parallel nor orthogonal, then find the measure of the angle between the planes. Express the answer in degrees rounded to the nearest integer.
39) [T] \(\displaystyle x+y+z=0, 2x−y+z−7=0\)
Solution: a. The planes are neither parallel nor orthogonal; b. \(\displaystyle 62°\)
40) \(\displaystyle 5x−3y+z=4, x+4y+7z=1\)
41) \(\displaystyle x−5y−z=1, 5x−25y−5z=−3\)
Solution: a. The planes are parallel.
42) [T] \(\displaystyle x−3y+6z=4, 5x+y−z=4\)
43) Show that the lines of equations \(\displaystyle x=t,y=1+t,z=2+t, t∈R,\) and \(\displaystyle \frac{x}{2}=\frac{y−1}{3}=z−3\) are skew, and find the distance between them.
Solution: \(\displaystyle \frac{1}{\sqrt{6}}\)
44) Show that the lines of equations \(\displaystyle x=−1+t,y=−2+t,z=3t, t∈R,\) and \(\displaystyle x=5+s,y=−8+2s,z=7s, s∈R\) are skew, and find the distance between them.
45) Consider point \(\displaystyle C(−3,2,4)\) and the plane of equation \(\displaystyle 2x+4y−3z=8\).
a. Find the radius of the sphere with center \(\displaystyle C\) tangent to the given plane.
b. Find point P of tangency.
Solution: \(\displaystyle a. \frac{18}{\sqrt{29}}; b. P(−\frac{51}{29},\frac{130}{29},\frac{62}{29})\)
46) Consider the plane of equation \(\displaystyle x−y−z−8=0.\)
a. Find the equation of the sphere with center \(\displaystyle C\) at the origin that is tangent to the given plane.
b. Find parametric equations of the line passing through the origin and the point of tangency.
47) Two children are playing with a ball. The girl throws the ball to the boy. The ball travels in the air, curves \(\displaystyle 3\) ft to the right, and falls \(\displaystyle 5\) ft away from the girl (see the following figure). If the plane that contains the trajectory of the ball is perpendicular to the ground, find its equation.
Solution: \(\displaystyle 4x−3y=0\)
48) [T] John allocates \(\displaystyle d\) dollars to consume monthly three goods of prices \(\displaystyle a,b\), and \(\displaystyle c\). In this context, the budget equation is defined as \(\displaystyle ax+by+cz=d,\) where \(\displaystyle x≥0,y≥0\), and \(\displaystyle z≥0\) represent the number of items bought from each of the goods. The budget set is given by \(\displaystyle {(x,y,z)|ax+by+cz≤d,x≥0,y≥0,z≥0},\) and the budget plane is the part of the plane of equation \(\displaystyle ax+by+cz=d\) for which \(\displaystyle x≥0,y≥0\), and \(\displaystyle z≥0\). Consider \(\displaystyle a=$8, b=$5, c=$10,\) and \(\displaystyle d=$500.\)
a. Use a CAS to graph the budget set and budget plane.
b. For \(\displaystyle z=25,\) find the new budget equation and graph the budget set in the same system of coordinates.
49) [T] Consider \(\displaystyle r(t)=⟨sint,cost,2t⟩\) the position vector of a particle at time \(\displaystyle t∈[0,3]\), where the components of \(\displaystyle r\) are expressed in centimeters and time is measured in seconds. Let \(\displaystyle \vec{OP}\) be the position vector of the particle after \(\displaystyle 1\) sec.
a. Determine the velocity vector \(\displaystyle v(1)\) of the particle after \(\displaystyle 1\) sec.
b. Find the scalar equation of the plane that is perpendicular to \(\displaystyle v(1)\) and passes through point \(\displaystyle P\). This plane is called the normal plane to the path of the particle at point \(\displaystyle P\).
c. Use a CAS to visualize the path of the particle along with the velocity vector and normal plane at point \(\displaystyle P\).
Solution: \(\displaystyle a. v(1)=⟨cos1,−sin1,2⟩; b. (cos1)(x−sin1)−(sin1)(y−cos1)+2(z−2)=0;\)
50) [T] A solar panel is mounted on the roof of a house. The panel may be regarded as positioned at the points of coordinates (in meters) \(\displaystyle A(8,0,0), B(8,18,0), C(0,18,8),\) and \(\displaystyle D(0,0,8)\) (see the following figure).
a. Find the general form of the equation of the plane that contains the solar panel by using points \(\displaystyle A,B,\) and \(\displaystyle C\), and show that its normal vector is equivalent to \(\displaystyle \vec{AB}×\vec{AD}.\)
b. Find parametric equations of line \(\displaystyle L_1\) that passes through the center of the solar panel and has direction vector \(\displaystyle s=\frac{1}{\sqrt{3}}i+\frac{1}{\sqrt{3}}j+\frac{1}{\sqrt{3}}k,\) which points toward the position of the Sun at a particular time of day.
c. Find symmetric equations of line \(\displaystyle L_2\) that passes through the center of the solar panel and is perpendicular to it.
d. Determine the angle of elevation of the Sun above the solar panel by using the angle between lines \(\displaystyle L_1\) and \(\displaystyle L_2\).
For the following exercises, sketch and describe the cylindrical surface of the given equation.
1) [T] \(\displaystyle x^2+z^2=1\)
Solution: The surface is a cylinder with the rulings parallel to the y-axis.
2) [T] \(\displaystyle x^2+y^2=9\)
3) [T] \(\displaystyle z=cos(\frac{π}{2}+x)\)
Solution: The surface is a cylinder with rulings parallel to the y-axis.
4) [T] \(\displaystyle z=e^x\)
5) [T] \(\displaystyle z=9−y^2\)
Solution: The surface is a cylinder with rulings parallel to the x-axis.
6) [T] \(\displaystyle z=ln(x)\)
For the following exercises, the graph of a quadric surface is given.
a. Specify the name of the quadric surface.
b. Determine the axis of symmetry of the quadric surface.
Solution: a. Cylinder; b. The x-axis
Solution: a. Hyperboloid of two sheets; b. The x-axis
For the following exercises, match the given quadric surface with its corresponding equation in standard form.
a. \(\displaystyle \frac{x^2}{4}+\frac{y^2}{9}−\frac{z^2}{12}=1\)
b. \(\displaystyle \frac{x^2}{4}−\frac{y^2}{9}−\frac{z^2}{12}=1\)
c. \(\displaystyle \frac{x^2}{4}+\frac{y^2}{9}+\frac{z^2}{12}=1\)
d. \(\displaystyle z^2=4x^2+3y^2\)
e. \(\displaystyle z=4x^2−y^2\)
f. \(\displaystyle 4x^2+y^2−z^2=0\)
11) Hyperboloid of two sheets
Solution: b.
12) Ellipsoid
13) Elliptic paraboloid
Solution: d.
14) Hyperbolic paraboloid
15) Hyperboloid of one sheet
Solution: a.
16) Elliptic cone
For the following exercises, rewrite the given equation of the quadric surface in standard form. Identify the surface.
17) \(\displaystyle −x^2+36y^2+36z^2=9\)
Solution: \(\displaystyle −\frac{x^2}{9}+\frac{y^2}{\frac{1}{4}}+\frac{z^2}{\frac{1}{4}}=1,\) hyperboloid of one sheet with the x-axis as its axis of symmetry
18) \(\displaystyle −4x^2+25y^2+z^2=100\)
19) \(\displaystyle −3x^2+5y^2−z^2=10\)
Solution: \(\displaystyle −\frac{x^2}{\frac{10}{3}}+\frac{y^2}{2}−\frac{z^2}{10}=1,\) hyperboloid of two sheets with the y-axis as its axis of symmetry
20) \(\displaystyle 3x^2−y^2−6z^2=18\)
21) \(\displaystyle 5y=x^2−z^2\)
Solution: \(\displaystyle y=−\frac{z^2}{5}+\frac{x^2}{5},\) hyperbolic paraboloid with the y-axis as its axis of symmetry
22) \(\displaystyle 8x^2−5y^2−10z=0\)
23) \(\displaystyle x^2+5y^2+3z^2−15=0\)
Solurion: \(\displaystyle \frac{x^2}{15}+\frac{y^2}{3}+\frac{z^2}{5}=1,\) ellipsoid
24) \(\displaystyle 63x^2+7y^2+9z^2−63=0\)
25) \(\displaystyle x^2+5y^2−8z^2=0\)
Solution: \(\displaystyle \frac{x^2}{40}+\frac{y^2}{8}−\frac{z^2}{5}=0,\) elliptic cone with the z-axis as its axis of symmetry
26) \(\displaystyle 5x^2−4y^2+20z^2=0\)
27) \(\displaystyle 6x=3y^2+2z^2\)
Solution: \(\displaystyle x=\frac{y^2}{2}+\frac{z^2}{3},\) elliptic paraboloid with the x-axis as its axis of symmetry
28) \(\displaystyle 49y=x^2+7z^2\)
For the following exercises, find the trace of the given quadric surface in the specified plane of coordinates and sketch it.
29) [T] \(\displaystyle x^2+z^2+4y=0,z=0\)
Solution: Parabola \(\displaystyle y=−\frac{x^2}{4},\)
30) [T] \(\displaystyle x^2+z^2+4y=0,x=0\)
31) [T] \(\displaystyle −4x^2+25y^2+z^2=100,x=0\)
Solution: Ellipse \(\displaystyle \frac{y^2}{4}+\frac{z^2}{100}=1,\)
32) [T] \(\displaystyle −4x^2+25y^2+z^2=100,y=0\)
33) [T] \(\displaystyle x^2+\frac{y^2}{4}+\frac{z^2}{100}=1,x=0\)
34) [T] \(\displaystyle x^2−y−z^2=1,y=0\)
35) Use the graph of the given quadric surface to answer the questions.
b. Which of the equations—\(\displaystyle 16x^2+9y^2+36z^2=3600,9x^2+36y^2+16z^2=3600,\) or \(\displaystyle 36x^2+9y^2+16z^2=3600\) —corresponds to the graph?
c. Use b. to write the equation of the quadric surface in standard form.
Solution: a. Ellipsoid; b. The third equation; c. \(\displaystyle \frac{x^2}{100}+\frac{y^2}{400}+\frac{z^2}{225}=1\)
b. Which of the equations—\(\displaystyle 36z=9x^2+y^2,9x^2+4y^2=36z\), or \(\displaystyle −36z=−81x^2+4y^2\) —corresponds to the graph above?
For the following exercises, the equation of a quadric surface is given.
a. Use the method of completing the square to write the equation in standard form.
b. Identify the surface.
37) \(\displaystyle x^2+2z^2+6x−8z+1=0\)
Solution: \(\displaystyle a. \frac{(x+3)^2}{16}+\frac{(z−2)^2}{8}=1;\) b. Cylinder centered at \(\displaystyle (−3,2)\) with rulings parallel to the y-axis
38) \(\displaystyle 4x^2−y^2+z^2−8x+2y+2z+3=0\)
39) \(\displaystyle x^2+4y^2−4z^2−6x−16y−16z+5=0\)
Solution: \(\displaystyle a. \frac{(x−3)^2}{4}+(y−2)^2−(z+2)^2=1;\) b. Hyperboloid of one sheet centered at \(\displaystyle (3,2,−2),\) with the z-axis as its axis of symmetry
40) \(\displaystyle x^2+z^2−4y+4=0\)
41) \(\displaystyle x^2+\frac{y^2}{4}−\frac{z^2}{3}+6x+9=0\)
Solution: \(\displaystyle a. (x+3)^2+\frac{y^2}{4}−\frac{z^2}{3}=0;\) b. Elliptic cone centered at \(\displaystyle (−3,0,0),\) with the z-axis as its axis of symmetry
42) \(\displaystyle x^2−y^2+z^2−12z+2x+37=0\)
43) Write the standard form of the equation of the ellipsoid centered at the origin that passes through points \(\displaystyle A(2,0,0),B(0,0,1),\) and \(\displaystyle C(12,\sqrt{11},\frac{1}{2}).\)
Solution: \(\displaystyle \frac{x^2}{4}+\frac{y^2}{16}+z^2=1\)
44) Write the standard form of the equation of the ellipsoid centered at point \(\displaystyle P(1,1,0)\) that passes through points \(\displaystyle A(6,1,0),B(4,2,0)\) and \(\displaystyle C(1,2,1)\).
45) Determine the intersection points of elliptic cone \(\displaystyle x^2−y^2−z^2=0\) with the line of symmetric equations \(\displaystyle \frac{x−1}{2}=\frac{y+1}{3}=z.\)
Solution: \(\displaystyle (1,−1,0)\) and \(\displaystyle (\frac{13}{3},4,\frac{5}{3})\)
46) Determine the intersection points of parabolic hyperboloid \(\displaystyle z=3x^2−2y^2\) with the line of parametric equations \(\displaystyle x=3t,y=2t,z=19t\), where \(\displaystyle t∈R.\)
47) Find the equation of the quadric surface with points \(\displaystyle P(x,y,z)\) that are equidistant from point \(\displaystyle Q(0,−1,0)\) and plane of equation \(\displaystyle y=1.\) Identify the surface.
Solution: \(\displaystyle x^2+z^2+4y=0,\) elliptic paraboloid
48) Find the equation of the quadric surface with points \(\displaystyle P(x,y,z)\) that are equidistant from point \(\displaystyle Q(0,2,0)\) and plane of equation \(\displaystyle y=−2.\) Identify the surface.
49) If the surface of a parabolic reflector is described by equation \(\displaystyle 400z=x^2+y^2,\) find the focal point of the reflector.
Solution: \(\displaystyle (0,0,100)\)
50) Consider the parabolic reflector described by equation \(\displaystyle z=20x^2+20y^2.\) Find its focal point.
51) Show that quadric surface \(\displaystyle x^2+y^2+z^2+2xy+2xz+2yz+x+y+z=0\) reduces to two parallel planes.
52) Show that quadric surface \(\displaystyle x^2+y^2+z^2−2xy−2xz+2yz−1=0\) reduces to two parallel planes passing.
53) [T] The intersection between cylinder \(\displaystyle (x−1)^2+y^2=1\) and sphere \(\displaystyle x^2+y^2+z^2=4\) is called a Viviani curve.
a. Solve the system consisting of the equations of the surfaces to find the equation of the intersection curve. (Hint: Find \(\displaystyle x\) and \(\displaystyle y\) in terms of \(\displaystyle z\).)
b. Use a computer algebra system (CAS) to visualize the intersection curve on sphere \(\displaystyle x^2+y^2+z^2=4\).
Solution: \(\displaystyle a. x=2−\frac{z^2}{2},y=±\frac{z}{2}\sqrt{4−z^2},\) where \(\displaystyle z∈[−2,2];\)
54) Hyperboloid of one sheet \(\displaystyle 25x^2+25y^2−z^2=25\) and elliptic cone \(\displaystyle −25x^2+75y^2+z^2=0\) are represented in the following figure along with their intersection curves. Identify the intersection curves and find their equations (Hint: Find y from the system consisting of the equations of the surfaces.)
55) [T] Use a CAS to create the intersection between cylinder \(\displaystyle 9x^2+4y^2=18\) and ellipsoid \(\displaystyle 36x^2+16y^2+9z^2=144\), and find the equations of the intersection curves.
Solution: two ellipses of equations \(\displaystyle \frac{x^2}{2}+\frac{y^2}{\frac{9}{2}}=1\) in planes \(\displaystyle z=±2\sqrt{2}\)
56) [T] A spheroid is an ellipsoid with two equal semiaxes. For instance, the equation of a spheroid with the z-axis as its axis of symmetry is given by \(\displaystyle \frac{x^2}{a^2}+\frac{y^2}{a^2}+\frac{z^2}{c^2}=1\), where \(\displaystyle a\) and \(\displaystyle c\) are positive real numbers. The spheroid is called oblate if \(\displaystyle c<a\), and prolate for \(\displaystyle c>a\).
a. The eye cornea is approximated as a prolate spheroid with an axis that is the eye, where \(\displaystyle a=8.7mm\) and \(\displaystyle c=9.6mm\).Write the equation of the spheroid that models the cornea and sketch the surface.
b. Give two examples of objects with prolate spheroid shapes.
57) [T] In cartography, Earth is approximated by an oblate spheroid rather than a sphere. The radii at the equator and poles are approximately \(\displaystyle 3963\)mi and \(\displaystyle 3950\)mi, respectively.
a. Write the equation in standard form of the ellipsoid that represents the shape of Earth. Assume the center of Earth is at the origin and that the trace formed by plane \(\displaystyle z=0\) corresponds to the equator.
b. Sketch the graph.
c. Find the equation of the intersection curve of the surface with plane \(\displaystyle z=1000\) that is parallel to the xy-plane. The intersection curve is called a parallel.
d. Find the equation of the intersection curve of the surface with plane \(\displaystyle x+y=0\) that passes through the z-axis. The intersection curve is called a meridian.
\(\displaystyle a. \frac{x^2}{3963^2}+\frac{y^2}{3963^2}+\frac{z^2}{3950^2}=1;\)
c. The intersection curve is the ellipse of equation \(\displaystyle \frac{x^2}{3963^2}+\frac{y^2}{3963^2}=\frac{(2950)(4950)}{3950^2}\), and the intersection is an ellipse.; d. The intersection curve is the ellipse of equation \(\displaystyle \frac{2y^2}{3963^2}+\frac{z^2}{3950^2}=1.\)
58) [T] A set of buzzing stunt magnets (or "rattlesnake eggs") includes two sparkling, polished, superstrong spheroid-shaped magnets well-known for children's entertainment. Each magnet is \(\displaystyle 1.625\) in. long and \(\displaystyle 0.5\) in. wide at the middle. While tossing them into the air, they create a buzzing sound as they attract each other.
a. Write the equation of the prolate spheroid centered at the origin that describes the shape of one of the magnets.
b. Write the equations of the prolate spheroids that model the shape of the buzzing stunt magnets. Use a CAS to create the graphs.
59) [T] A heart-shaped surface is given by equation \(\displaystyle (x^2+\frac{9}{4}y^2+z^2−1)^3−x^2z^3−\frac{9}{80}y^2z^3=0.\)
a. Use a CAS to graph the surface that models this shape.
b. Determine and sketch the trace of the heart-shaped surface on the xz-plane.
b. The intersection curve is \(\displaystyle (x^2+z^2−1)^3−x^2z^3=0.\)
60) [T] The ring torus symmetric about the z-axis is a special type of surface in topology and its equation is given by \(\displaystyle (x^2+y^2+z^2+R^2−r^2)^2=4R^2(x^2+y^2)\), where \(\displaystyle R>r>0\). The numbers \(\displaystyle R\) and \(\displaystyle r\) are called are the major and minor radii, respectively, of the surface. The following figure shows a ring torus for which \(\displaystyle R=2\) and \(\displaystyle r=1\).
a. Write the equation of the ring torus with \(\displaystyle R=2\) and \(\displaystyle r=1\), and use a CAS to graph the surface. Compare the graph with the figure given.
b. Determine the equation and sketch the trace of the ring torus from a. on the xy-plane.
c. Give two examples of objects with ring torus shapes.
Use the following figure as an aid in identifying the relationship between the rectangular, cylindrical, and spherical coordinate systems.
For the following exercises, the cylindrical coordinates \(\displaystyle (r,θ,z)\) of a point are given. Find the rectangular coordinates \(\displaystyle (x,y,z)\) of the point.
1) \(\displaystyle (4,\frac{π}{6},3)\)
Solution: \(\displaystyle (2\sqrt{3},2,3)\)
3) \(\displaystyle (4,\frac{7π}{6},3)\)
Solution: \(\displaystyle −2\sqrt{3},−2,3)\)
4) \(\displaystyle (2,π,−4)\)
For the following exercises, the rectangular coordinates \(\displaystyle (x,y,z)\) of a point are given. Find the cylindrical coordinates \(\displaystyle (r,θ,z)\)of the point.
5) \(\displaystyle (1,\sqrt{3},2)\)
Solution: \(\displaystyle (2,\frac{π}{3},2)\)
6) \(\displaystyle (1,1,5)\)
7) \(\displaystyle (3,−3,7)\)
Solution: \(\displaystyle (3\sqrt{2},−\frac{π}{4},7)\)
8) \(\displaystyle (−2\sqrt{2},2\sqrt{2},4)\)
For the following exercises, the equation of a surface in cylindrical coordinates is given. Find the equation of the surface in rectangular coordinates. Identify and graph the surface.
9) [T] \(\displaystyle r=4\)
Solution: A cylinder of equation \(\displaystyle x^2+y^2=16,\) with its center at the origin and rulings parallel to the z-axis,
10) [T] \(\displaystyle z=r^2cos^2θ\)
11) [T] \(\displaystyle r^2cos(2θ)+z^2+1=0\)
Solution: Hyperboloid of two sheets of equation \(\displaystyle −x^2+y^2−z^2=1,\) with the y-axis as the axis of symmetry,
12) [T] \(\displaystyle r=3sinθ\)
13) [T] \(\displaystyle r=2cosθ\)
Solution: Cylinder of equation \(\displaystyle x^2−2x+y^2=0,\) with a center at \(\displaystyle (1,0,0)\) and radius \(\displaystyle 1\), with rulings parallel to the z-axis,
14) [T] \(\displaystyle r^2+z^2=5\)
15) [T] \(\displaystyle r=2secθ\)
Solution: Plane of equation \(\displaystyle x=2,\)
16) [T] \(\displaystyle r=3cscθ\)
For the following exercises, the equation of a surface in rectangular coordinates is given. Find the equation of the surface in cylindrical coordinates.
17) \(\displaystyle z=3\)
18) \(\displaystyle x=6\)
19) \(\displaystyle x^2+y^2+z^2=9\)
Solution: \(\displaystyle r^2+z^2=9\)
20) \(\displaystyle y=2x^2\)
21) \(\displaystyle x^2+y^2−16x=0\)
Solution: \(\displaystyle r=16cosθ,r=0\)
22) \(\displaystyle x^2+y^2−3\sqrt{x^2+y^2}+2=0\)
For the following exercises, the spherical coordinates \(\displaystyle (ρ,θ,φ)\) of a point are given. Find the rectangular coordinates \(\displaystyle (x,y,z)\) of the point.
23) \(\displaystyle (3,0,π)\)
Solution: \(\displaystyle (0,0,−3)\)
24) \(\displaystyle (1,\frac{π}{6},\frac{π}{6})\)
25) \(\displaystyle (12,−\frac{π}{4},\frac{π}{4})\)
Solution: \(\displaystyle (6,−6,\sqrt{2})\)
For the following exercises, the rectangular coordinates \(\displaystyle (x,y,z)\) of a point are given. Find the spherical coordinates \(\displaystyle (ρ,θ,φ)\) of the point. Express the measure of the angles in degrees rounded to the nearest integer.
27) \(\displaystyle (4,0,0)\)
Solution: \(\displaystyle (4,0,90°)\)
28) \(\displaystyle (−1,2,1)\)
Solution: \(\displaystyle (3,90°,90°)\)
30) \(\displaystyle (−2,2\sqrt{3},4)\)
For the following exercises, the equation of a surface in spherical coordinates is given. Find the equation of the surface in rectangular coordinates. Identify and graph the surface.
31) [T] \(\displaystyle ρ=3\)
Solution: Sphere of equation \(\displaystyle x^2+y^2+z^2=9\) centered at the origin with radius \(\displaystyle 3\),
32) [T] \(\displaystyle φ=\frac{π}{3}\)
33) [T] \(\displaystyle ρ=2cosφ\)
Solution: Sphere of equation \(\displaystyle x^2+y^2+(z−1)^2=1\) centered at \(\displaystyle (0,0,1)\) with radius \(\displaystyle 1\),
34) [T] \(\displaystyle ρ=4cscφ\)
Solution: The xy-plane of equation \(\displaystyle z=0,\)
36) [T] \(\displaystyle ρ=6cscφsecθ\)
For the following exercises, the equation of a surface in rectangular coordinates is given. Find the equation of the surface in spherical coordinates. Identify the surface.
37) \(\displaystyle x^2+y^2−3z^2=0, z≠0\)
Solution: \(\displaystyle φ=\frac{π}{3}\) or \(\displaystyle φ=\frac{2π}{3};\) Elliptic cone
38) \(\displaystyle x^2+y^2+z^2−4z=0\)
Solution: \(\displaystyle ρcosφ=6;\) Plane at \(\displaystyle z=6\)
40) \(\displaystyle x^2+y^2=9\)
For the following exercises, the cylindrical coordinates of a point are given. Find its associated spherical coordinates, with the measure of the angle φ
in radians rounded to four decimal places.
41) [T] \(\displaystyle (1,\frac{π}{4},3)\)
Solution: \(\displaystyle (\sqrt{10},\frac{π}{4},0.3218)\)
42) [T] \(\displaystyle (5,π,12)\)
43) \(\displaystyle (3,\frac{π}{2},3)\)
Solution: \(\displaystyle (3\sqrt{2},\frac{π}{2},\frac{π}{4})\)
44) \(\displaystyle (3,−\frac{π}{6},3)\)
For the following exercises, the spherical coordinates of a point are given. Find its associated cylindrical coordinates.
45) \(\displaystyle (2,−\frac{π}{4},\frac{π}{2})\)
Solution: \(\displaystyle (2,−\frac{π}{4},0)\)
For the following exercises, find the most suitable system of coordinates to describe the solids.
49) The solid situated in the first octant with a vertex at the origin and enclosed by a cube of edge length \(\displaystyle a\), where \(\displaystyle a>0\)
Solution: Cartesian system, \(\displaystyle {(x,y,z)|0≤x≤a,0≤y≤a,0≤z≤a}\)
50) A spherical shell determined by the region between two concentric spheres centered at the origin, of radii of \(\displaystyle a\) and \(\displaystyle b\), respectively, where \(\displaystyle b>a>0\)
51) A solid inside sphere \(\displaystyle x^2+y^2+z^2=9\) and outside cylinder \(\displaystyle (x−\frac{3}{2})^2+y^2=\frac{9}{4}\)
Solution: Cylindrical system, \(\displaystyle {(r,θ,z)∣r^2+z^2≤9,r≥3cosθ,0≤θ≤2π}\)
52) A cylindrical shell of height \(\displaystyle 10\) determined by the region between two cylinders with the same center, parallel rulings, and radii of \(\displaystyle 2\) and \(\displaystyle 5\), respectively
53) [T] Use a CAS to graph in cylindrical coordinates the region between elliptic paraboloid \(\displaystyle z=x^2+y^2\) and cone \(\displaystyle x^2+y^2−z^2=0.\)
Solution: The region is described by the set of points \(\displaystyle {(r,θ,z)∣∣0≤r≤1,0≤θ≤2π,r^2≤z≤r}.\)
54) [T] Use a CAS to graph in spherical coordinates the "ice cream-cone region" situated above the xy-plane between sphere \(\displaystyle x^2+y^2+z^2=4\) and elliptical cone \(\displaystyle x^2+y^2−z^2=0.\)
55) Washington, DC, is located at \(\displaystyle 39°\) N and \(\displaystyle 77°\) W (see the following figure). Assume the radius of Earth is \(\displaystyle 4000\) mi. Express the location of Washington, DC, in spherical coordinates.
Solution: \(\displaystyle (4000,−77°,51°)\)
56) San Francisco is located at \(\displaystyle 37.78°N\) and \(\displaystyle 122.42°W.\) Assume the radius of Earth is \(\displaystyle 4000\)mi. Express the location of San Francisco in spherical coordinates.
57) Find the latitude and longitude of Rio de Janeiro if its spherical coordinates are \(\displaystyle (4000,−43.17°,102.91°).\)
Solution: \(\displaystyle 43.17°W, 22.91°S\)
58) Find the latitude and longitude of Berlin if its spherical coordinates are \(\displaystyle (4000,13.38°,37.48°).\)
59) [T] Consider the torus of equation \(\displaystyle (x^2+y^2+z^2+R^2−r^2)^2=4R^2(x^2+y^2),\) where \(\displaystyle R≥r>0.\)
a. Write the equation of the torus in spherical coordinates.
b. If \(\displaystyle R=r,\) the surface is called a horn torus. Show that the equation of a horn torus in spherical coordinates is \(\displaystyle ρ=2Rsinφ.\)
c. Use a CAS to graph the horn torus with \(\displaystyle R=r=2\) in spherical coordinates.
Solution: \(\displaystyle a. ρ=0, ρ+R2−r2−2Rsinφ=0;\)
60) [T] The "bumpy sphere" with an equation in spherical coordinates is \(\displaystyle ρ=a+bcos(mθ)sin(nφ)\), with \(\displaystyle θ∈[0,2π]\) and \(\displaystyle φ∈[0,π]\), where \(\displaystyle a\) and \(\displaystyle b\) are positive numbers and \(\displaystyle m\) and \(\displaystyle n\) are positive integers, may be used in applied mathematics to model tumor growth.
a. Show that the "bumpy sphere" is contained inside a sphere of equation \(\displaystyle ρ=a+b.\) Find the values of \(\displaystyle θ\) and \(\displaystyle φ\) at which the two surfaces intersect.
b. Use a CAS to graph the surface for \(\displaystyle a=14, b=2, m=4,\) and \(\displaystyle n=6\) along with sphere \(\displaystyle ρ=a+b.\)
c. Find the equation of the intersection curve of the surface at b. with the cone \(\displaystyle φ=\frac{π}{12}\). Graph the intersection curve in the plane of intersection.
For the following exercises, determine whether the statement is true or false. Justify the answer with a proof or a counterexample.
1) For vectors \(\displaystyle a\) and \(\displaystyle b\) and any given scalar \(\displaystyle c, c(a⋅b)=(ca)⋅b.\)
Solution: True
2) For vectors \(\displaystyle a\) and \(\displaystyle b\) and any given scalar \(\displaystyle c, c(a×b)=(ca)×b\).
3) The symmetric equation for the line of intersection between two planes \(\displaystyle x+y+z=2\) and \(\displaystyle x+2y−4z=5\) is given by \(\displaystyle −\frac{x−1}{6}=\frac{y−1}{5}=z.\)
Solution: False
4) If \(\displaystyle a⋅b=0,\) then \(\displaystyle a\) is perpendicular to \(\displaystyle b\).
For the following exercises, use the given vectors to find the quantities.
5) \(\displaystyle a=9i−2j,b=−3i+j\)
a. \(\displaystyle 3a+b\)
b. \(\displaystyle |a|\)
c. \(\displaystyle a×|b×|a\)
d. \(\displaystyle b×|a\)
Solution: a. \(\displaystyle ⟨24,−5⟩;\) b. \(\displaystyle \sqrt{85}\); c. Can't dot a vector with a scalar; d. \(\displaystyle −29\)
6) \(\displaystyle a=2i+j−9k,b=−i+2k,c=4i−2j+k\)
a. \(\displaystyle 2a−b\)
b. \(\displaystyle |b×c|\)
c. \(\displaystyle b×|b×c|\)
d. \(\displaystyle c×|b×a|\)
e. \(\displaystyle proj_ab\)
7) Find the values of \(\displaystyle a\) such that vectors \(\displaystyle ⟨2,4,a⟩\) and \(\displaystyle ⟨0,−1,a⟩\) are orthogonal.
Solution: \(\displaystyle a=±2\)
For the following exercises, find the unit vectors.
8) Find the unit vector that has the same direction as vector \(\displaystyle v\) that begins at \(\displaystyle (0,−3)\) and ends at \(\displaystyle (4,10).\)
9) Find the unit vector that has the same direction as vector \(\displaystyle v\) that begins at \(\displaystyle (1,4,10)\) and ends at \(\displaystyle (3,0,4).\)
Solution: \(\displaystyle ⟨\frac{1}{\sqrt{14}},−\frac{2}{\sqrt{14}},−\frac{3}{\sqrt{14}}⟩\)
For the following exercises, find the area or volume of the given shapes.
10) The parallelogram spanned by vectors \(\displaystyle a=⟨1,13⟩\) and \(\displaystyle b=⟨3,21⟩\)
11) The parallelepiped formed by \(\displaystyle a=⟨1,4,1⟩\) and \(\displaystyle b=⟨3,6,2⟩,\) and \(\displaystyle c=⟨−2,1,−5⟩\)
Solution: \(\displaystyle 27\)
For the following exercises, find the vector and parametric equations of the line with the given properties.
12) The line that passes through point \(\displaystyle (2,−3,7)\) that is parallel to vector \(\displaystyle ⟨1,3,−2⟩\)
13) The line that passes through points \(\displaystyle (1,3,5)\) and \(\displaystyle (−2,6,−3)\)
Solution: \(\displaystyle x=1−3t,y=3+3t,z=5−8t,r(t)=(1−3t)i+3(1+t)j+(5−8t)k\)
For the following exercises, find the equation of the plane with the given properties.
14) The plane that passes through point \(\displaystyle (4,7,−1)\) and has normal vector \(\displaystyle n=⟨3,4,2⟩\)
15) The plane that passes through points \(\displaystyle (0,1,5),(2,−1,6),\) and \(\displaystyle (3,2,5).\)
Solution: \(\displaystyle −x+3y+8z=43\)
For the following exercises, find the traces for the surfaces in planes \(\displaystyle x=k,y=k\), and \(\displaystyle z=k.\) Then, describe and draw the surfaces.
16) \(\displaystyle 9x^2+4y^2−16y+36z^2=20\)
17) \(\displaystyle x^2=y^2+z^2\)
Solution: \(\displaystyle x=k\) trace: \(\displaystyle k^2=y^2+z^2\) is a circle, \(\displaystyle y=k\) trace: \(\displaystyle x^2−z^2=k^2\) is a hyperbola (or a pair of lines if \(\displaystyle k=0), z=k\) trace: \(\displaystyle x^2−y^2=k^2\) is a hyperbola (or a pair of lines if \(\displaystyle k=0\)). The surface is a cone.
For the following exercises, write the given equation in cylindrical coordinates and spherical coordinates.
18) \(\displaystyle x^2+y^2+z^2=144\)
19) \(\displaystyle z=x^2+y^2−1\)
Solution: Cylindrical: \(\displaystyle z=r^2−1,\) spherical: \(\displaystyle cosφ=ρsin^2φ−\frac{1}{ρ}\)
For the following exercises, convert the given equations from cylindrical or spherical coordinates to rectangular coordinates. Identify the given surface.
20) \(\displaystyle ρ^2(sin^2(φ)−cos^2(φ))=1\)
21) \(\displaystyle r^2−2rcos(θ)+z^2=1\)
Solution: \(\displaystyle x^2−2x+y^2+z^2=1\), sphere
For the following exercises, consider a small boat crossing a river.
22) If the boat velocity is \(\displaystyle 5\)km/h due north in still water and the water has a current of \(\displaystyle 2\) km/h due west (see the following figure), what is the velocity of the boat relative to shore? What is the angle \(\displaystyle θ\) that the boat is actually traveling?
23) When the boat reaches the shore, two ropes are thrown to people to help pull the boat ashore. One rope is at an angle of \(\displaystyle 25°\) and the other is at \(\displaystyle 35°\). If the boat must be pulled straight and at a force of \(\displaystyle 500N\), find the magnitude of force for each rope (see the following figure).
Solution: 331 N, and 244 N
24) An airplane is flying in the direction of 52° east of north with a speed of 450 mph. A strong wind has a bearing 33° east of north with a speed of 50 mph. What is the resultant ground speed and bearing of the airplane?
25) Calculate the work done by moving a particle from position \(\displaystyle (1,2,0)\) to \(\displaystyle (8,4,5)\) along a straight line with a force \(\displaystyle F=2i+3j−k.\)
Solution: \(\displaystyle 15J\)
The following problems consider your unsuccessful attempt to take the tire off your car using a wrench to loosen the bolts. Assume the wrench is \(\displaystyle 0.3\)m long and you are able to apply a 200-N force.
26) Because your tire is flat, you are only able to apply your force at a \(\displaystyle 60°\) angle. What is the torque at the center of the bolt? Assume this force is not enough to loosen the bolt.
27) Someone lends you a tire jack and you are now able to apply a 200-N force at an \(\displaystyle 80°\) angle. Is your resulting torque going to be more or less? What is the new resulting torque at the center of the bolt? Assume this force is not enough to loosen the bolt.
Solution: More, \(\displaystyle 59.09 J\)
13: Vector-Valued Functions
Gilbert Strang & Edwin "Jed" Herman | CommonCrawl |
SYMMETRIC GROUP OF ORDER
In the theory of Coxeter groups, the symmetric group is the Coxeter group of type A n and occurs as the Weyl group of the general linear group. In combinatorics, the symmetric groups, their elements (permutations), and their representations provide a rich source of problems involving Young tableaux, plactic monoids, and the Bruhat order. Symmetry was taught to humans by nature itself. A lot of flowers and most of the animals are symmetric in nature. Inspired by this, humans learned to build their architecture with symmetric aspects that made buildings balanced and proportionate in their foundation, like the pyramids of Egypt! We can observe symmetry around us in many forms. Origin Symmetry is when every part has a matching part: the same distance from the central point. but in the opposite direction. Check to see if the equation is the same when we replace both x with −x and y with −y.
The notion that group theory captures the idea of "symmetry" derives from the notion of the symmetric group, and the very important theorem due to Cayley. Symmetric - Pharmaceutical & Biotech Online Training Courses Your Partner for Pharma and Biotech Training View All Training Courses Upcoming Training Courses All Pharma & Biotech Medical Devices ANNEX 1 Process Industry View All Training Courses Clients that have benefited from our courses Testimonials. Every group of order n is isomorphic to a subgroup of. Sn. Proof. Suppose G a group of order n. Let G operate on itself by left multiplication. Then by our. the definition of a semidirect product and prove that the symmetric group is a semi-direct product of the alternating group and a subgroup of order 2. In the theory of Coxeter groups, the symmetric group is the Coxeter group of type A n and occurs as the Weyl group of the general linear group. In combinatorics, the symmetric groups, their elements (permutations), and their representations provide a rich source of problems involving Young tableaux, plactic monoids, and the Bruhat order. Websymmetric - having similarity in size, shape, and relative position of corresponding parts symmetrical parallel - being everywhere equidistant and not intersecting; "parallel lines never converge"; "concentric circles are parallel"; "dancers in two parallel rows". The symmetric group is important in many different areas of mathematics, including combinatorics, Galois theory, and the definition of the determinant of a matrix. It is also a key object in group theory itself; in fact, every finite group is a subgroup of S n S_n S n for some n, n, n, so understanding the subgroups of S n S_n S n is equivalent to understanding every finite . The symmetric homology of group rings is related to stable homotopy theory. Two chain complexes are constructed that compute symmetric homology, as well as two. May 27, · The symmetric group (S 3, 0) has order 6. (Z, +) is a group of infinite order. Types of Groups Depending upon the order of groups, we can classify the groups as follows: . WebSymmetric-key algorithms [a] are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. [1] The keys, in practice, represent a shared secret between two or more parties that can . Websymmetrical. (sɪˈmɛtrɪkəl) adj. 1. possessing or displaying symmetry. Compare asymmetric. 2. (Mathematics) maths. a. (of two points) capable of being joined by a line that is bisected by a given point or bisected perpendicularly by a given line or plane: the points (x, y) and (–x, –y) are symmetrical about the origin. WebOrigin Symmetry is when every part has a matching part: the same distance from the central point. but in the opposite direction. Check to see if the equation is the same when we replace both x with −x and y with −y. Origin Symmetry is when every part has a matching part: the same distance from the central point. but in the opposite direction. Check to see if the equation is the same when we replace both x with −x and y with −y. WebSymmetry is defined as a proportionate and balanced similarity that is found in two halves of an object, that is, one-half is the mirror image of the other half. For example, different shapes like square, rectangle, circle are symmetric along their respective lines of symmetry. What is a Symmetrical Shape?
The maximum order of an element of finite symmetric group by William Miller, American Mathematical Monthly, page Share Cite Follow edited Dec 30, at user answered Dec 30, at Bobby 7, 2 30 59 Add a comment You must log in to answer this question. Not the answer you're looking for? Browse other questions tagged. Thus symmetric groups can be considered universal with respect to subgroups, just as free groups can be considered universal with respect to quotient groups. 21 Symmetric and alternating groups Recall. The symmetric group on nletters is the group S n= Perm(f1;;ng) Theorem (Cayley). If Gis a group of order nthen Gis isomorphic to a subgroup of S n. Proof. Let Sbe the set of all elements of G. Consider the action of Gon S G S!S; ab:= ab This action de nes a homomorphism %: G!Perm(S). Check: this homomor-. Symmetry is defined as a proportionate and balanced similarity that is found in two halves of an object, that is, one-half is the mirror image of the other half. For example, different shapes like square, rectangle, circle are symmetric along their respective lines of symmetry. What is a Symmetrical Shape? 1. A general fact for groups: the order of the product of commuting elements. σ = c 1 ⋅ c 2 ⋅ c m. is the lowest common multiple of the orders of the c i. Consider now the group to be S n and c i disjoint cycles therefore commuting. Pairwise commuting factors is essential. WebA vertical line that divides an object into two identical halves is called a vertical line of symmetry. That means that the vertical line goes from top to bottom (or vice versa) in an object and divides it into its mirror halves. For example, the star below shows a vertical line of symmetry. The Horizontal Line of Symmetry. symmetrical adjective sym· met· ri· cal sə-ˈme-tri-kəl variants or symmetric -trik: having, involving, or exhibiting symmetry: as a: affecting corresponding parts simultaneously and . The permutations of a set X form a group, SX, under composition. This is especially clear if one thinks of the permutation as a bijection on X, where the. The symmetric group S n S_n Sn is the group of permutations on n n n objects. Usually the objects are labeled { 1, 2, , n }, \{1,2,\ldots,n\}. returns a permutation group generated by (1,2,3). As expected this is a group of order 3. Notice that we do not get back a group of the actual cosets, but. The symmetric group on four letters, S4, contains the following permutations: permutations type. (12), (13), (14), (23), (24), (34) order isomorphic to.
ocala florida highway patrol|mixing viagra and ghb
Symmetry was taught to humans by nature itself. A lot of flowers and most of the animals are symmetric in nature. Inspired by this, humans learned to build their architecture with symmetric aspects that made buildings balanced and proportionate in their foundation, like the pyramids of Egypt! We can observe symmetry around us in many forms. Theorem (Cayley). If G is a group of order n then G is isomorphic to a subgroup of Sn. Proof. Let S be the set of. 1. A general fact for groups: the order of the product of commuting elements. σ = c 1 ⋅ c 2 ⋅ c m. is the lowest common multiple of the orders of the c i. Consider now the group to be S n and c i disjoint cycles therefore commuting. Pairwise commuting factors is essential. Moreover, ⟨ (1 2 3) ⟩ has order 3 and is thus distinct from the other cyclic subgroups, which have order 2. Finally, the order 2 cyclic subgroups are. Notes on the symmetric group 1 Computations in the symmetric group Recall that, given a set X, the set S X of all bijections from Xto itself (or, more brie y, permutations of X) is group under function composition. In particular, for each n2N, the symmetric group S n is the group of per-mutations of the set f1;;ng, with the group operation equal to function composition. Thus S. Let G be the group of automorphisms, and X the set of 2-cycles. We note that an automorphism must send order-2 elements to order-2 elements, and that the In general, πσ = σπ,. i.e., multiplication of permutations is not commutative. Page 3. Cycles. A permutation π of a set X is called a cycle . Oct 10, · Order of Symmetric Group - ProofWiki Order of Symmetric Group Contents 1 Theorem 2 Proof 3 Examples 3rd Symmetric Group 4 Sources Theorem Let S be a finite . WebA geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of . Mathematics Stack Exchange.
Oct 10, · Order of Symmetric Group - ProofWiki Order of Symmetric Group Contents 1 Theorem 2 Proof 3 Examples 3rd Symmetric Group 4 Sources Theorem Let S be a finite . Subgroups generated by a rotation. Rotation by 2π/n generates a subgroup isomorphic to Cn the cyclic group of order n. Note that Cn is the symmetry group of. A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of the object, but doesn't change the overall shape. A(S) is the set of mappings of S onto itself. If S is a finite set with say n elements then A(S) is called a symmetric group of order n denoted S_n. The. The symmetric group on four letters, S4, contains the following permutations: permutations type. (12), (13), (14), (23), (24), (34) order isomorphic to. Feb 27, · The order of a group is not the same thing as the order of an element. There is a connection, but don't worry about that yet. The order of a group, written $|G|$, is the number of elements it has. The order of an element in a group - say $g \in G$, is the smallest $n$ such $g^n = e$ where $e$ is the identity element of the group $G$. Each permutation of S4 can be written as composition of disjoint cycles. So the (5 points) Let G be a group, and let a be an element of order Every permutation in Sn S n has a cycle decomposition that is unique up to ordering of the cycles and up to a cyclic permutation of the elements within each. | CommonCrawl |
Computer Science > Cryptography and Security
[Submitted on 24 May 2019 (v1), last revised 15 Feb 2021 (this version, v4)]
Title:Quantum Period Finding is Compression Robust
Authors:Alexander May, Lars Schlieper
Abstract: We study quantum period finding algorithms such as Simon and Shor (and its variants Ekerå-Håstad and Mosca-Ekert). For a periodic function $f$ these algorithms produce -- via some quantum embedding of $f$ -- a quantum superposition $\sum_x |x\rangle|f(x)\rangle$, which requires a certain amount of output qubits that represent $|f(x)\rangle$. We show that one can lower this amount to a single output qubit by hashing $f$ down to a single bit in an oracle setting.
Namely, we replace the embedding of $f$ in quantum period finding circuits by oracle access to several embeddings of hashed versions of $f$. We show that on expectation this modification only doubles the required amount of quantum measurements, while significantly reducing the total number of qubits. For example, for Simon's algorithm that finds periods in $f: \mathbb{F}_2^n \rightarrow \mathbb{F}_2^n$ our hashing technique reduces the required output qubits from $n$ down to $1$, and therefore the total amount of qubits from $2n$ to $n+1$. We also show that Simon's algorithm admits real world applications with only $n+1$ qubits by giving a concrete realization of a hashed version of the cryptographic Even-Mansour construction. Moreover, for a variant of Simon's algorithm on Even-Mansour that requires only classical queries to Even-Mansour we save a factor of (roughly) $4$ in the qubits.
Our oracle-based hashed version of the Ekerå-Håstad algorithm for factoring $n$-bit RSA reduces the required qubits from $(\frac 3 2 + o(1))n$ down to $(\frac 1 2 + o(1))n$. We also show a real-world (non-oracle) application in the discrete logarithm setting by giving a concrete realization of a hashed version of Mosca-Ekert for the Decisional Diffie Hellman problem in $\mathbb{F}_{p^m}$, thereby reducing the number of qubits by even a linear factor from $m \log p$ downto $\log p$.
Subjects: Cryptography and Security (cs.CR); Quantum Physics (quant-ph)
Cite as: arXiv:1905.10074 [cs.CR]
(or arXiv:1905.10074v4 [cs.CR] for this version)
From: Lars Schlieper [view email]
[v1] Fri, 24 May 2019 07:35:04 UTC (450 KB)
[v2] Mon, 4 Nov 2019 19:35:58 UTC (60 KB)
[v3] Mon, 9 Nov 2020 12:47:52 UTC (208 KB)
[v4] Mon, 15 Feb 2021 15:21:11 UTC (209 KB)
Lars Schlieper | CommonCrawl |
Total : PDF: 125 XML: 41 | Total views: 166
Information and Communication Technology (ICT) and International Terrorism: Boko Haram and Al Shabaab in Perspective
Dr. Uwak,
Dr. Uwak
University of Uyo, Nigeria, Faculty of Social Sciences Department of Public Administration and Local Government
U. Eyo, Egemuka,
U. Eyo, Egemuka
Department of Political Science and Public Administration, Faculty of Social Sciences University of Uyo, Uyo
C. Collins,
C. Collins
Article Date Published : 1 September 2018 | Page No.: PS-2018-248-263 | Google Scholar
DOI https://doi.org/10.18535/ijsrm/v6i8.ps01
This study examined the relationship between Information and Communication Technology (ICT) and international terrorism with regards to the activities of terrorist groups. This was premised on the fact that ICT has created a network with a truly global reach for advancement of terrorist activities. The internet technology makes it easy for an individual to communicate with relative ease and anonymity, quickly and effectively across borders to an almost limitless audience. This factor made ICT and their innovative tendencies of the social media like Whatsapp, Facebook, Instagram, Youtube, Twitter etc to serve as veritable tool and motivation for terrorist groups and their activities. Historical and descriptive methods were employed in this study hence data were drawn from relevant primary and secondary sources which included published and unpublished materials. The findings of the study revealed that increased availability of ICT and other innovative tendencies has made it easier for terrorist organizations to communicate, recruit, radicalize as well as mobilize individuals to plan and coordinate terrorist attacks. Based on the above findings, it was recommended among others that understanding terrorist recruitment through the social media was vital to counter-terrorism. Hence, understanding how and why an individual is radicalize and recruited into a terrorist organization is therefore an important part of addressing the fight against terrorism. Accordingly, there should be investment in scientific research and industrial development that are more resistant for terrorist use.
Keywords: Terrorism, ICT, Global, Boko Haram and Al Shabaab
1. Background to the Study
The new technological developments that have occurred during the last three decades have shifted the conception of national security. Our transition into the information and communication age has been with a series of threats to the national security of many states. Today, nations face the danger of a physical damage but also having their information infrastructures destroyed, altered, or incapacitated by the new genre of offensive technologies. For instance, the aftermath of the September 11, 2001 carnage in the United States, it has been widely reported that the terrorists used high-tech tools to plan and consummate their reprehensible attacks. The irony of it all is that the high-tech tools they used are the very same tools we use to enhance our lives. Yes, they used common tools such as mobile phones, e-mail, the Internet etc. The advances in technology means the ability to share ideas, videos, and other digital content to more people than ever before. The technology has also grown more user-friendly and cheaper. This has leveled the playing field, both for those who communicate information for good as well as those who have more sinister reasons to get their messages across (Metz, 2012).
Many terrorist organizations such as ISIS today use the Internet to recruit, radicalize, and mobilize individuals from all walks of life. The Internet also offers these terrorist groups a level of anonymity. In years past, jihadists would have to travel to different cities and towns trying to find the few like-minded individuals who shared their 2 extremist views. Or they would be forced underground and would risk being found out (Metz, 2012). Today that lengthy process of locating other extremists and sharing ideas has been greatly reduced. The Internet, through Facebook, Instagram, YouTube, WhatsApp, Chat rooms, Websites, and other social media platforms, enables terrorist organizations to recruit from across the globe (Von Behr, 2013).
As observed by Weimann (2014), today, 90 percent of terrorist groups' communication over the social media is accomplished through the internet. Younger people favour social media, which is free, as well as interactive. Social media enables anyone to share messages, contribute to the discussion, and even ask questions of terrorist group leaders. Weimann goes on to illustrate just how quickly technology and social media use is evolving. Facebook, which began in 2004, had 1.31 billion users a decade later. For instance, the YouTube, which began in 2005, as of 2016 counted 100 hours of video uploaded each minute. And Twitter, which started in 2006, had 555 million Twitter users tweeting about 58 million tweets a day last year.
2. Statement of Problem
Today, advances in technology mean the ability to share ideas, videos, and other digital content available to more people than ever before. The implication is that technology has also grown more user-friendly and cheaper. This has therefore leveled the playing field, both for those who communicate information for good as well as those who have more sinister reasons to get their messages across. To this end, violent extremists today use the Internet to "recruit, radicalize, and mobilize individuals, including Americans. The Internet has become the main tool used by terrorist organizations to communicate with like-minded peers, followers, and potential members.
It is worthy to note that the Internet also offers terrorist groups a level of anonymity. In years past, jihadists would have to travel to different mosques trying to find the few like-minded individuals who shared their extremist views. Or they would be forced underground and would risk being found out. Today that lengthy process of locating other extremists and sharing ideas has been greatly reduced. The Internet, through chat rooms, websites, and now social media, enables terrorist groups to recruit from across the globe.
One terrorist group that has proven its skills in using social media, to expand its reach, increase its publicity and gain followers is the Islamic State of Syria and Iran (ISIS). In June 2014, the Islamic State declared that it had formed a caliphate. According to the literature, at that point, the group had upwards of 15,000 militants in its membership. Following mass desertions from the Iraqi army, the Islamic State, or ISIS, took control of a large swath of land in Syria and Iraq. Since then, foreign fighters from 80 countries have joined ISIS. Many come from Muslim-majority countries, but others have traveled to Syria from the United States, Australia, and Western European countries. Some 150 Americans are believed to have traveled or attempted to travel to Syria over the past three years to take part in jihadist activities. The organization offers young people instant gratification, including adventure, power, community, and sex.
For instance, a March 2015 Brookings Institution study found that last fall, ISIS supporters were using at least 46,000 Twitter accounts. Three quarters of the supporters listed Arabic as their first language, but nearly one in five chose English. These accounts also had a higher than average number of followers (1,000 each) and a higher than average number of tweets.
There is a significant amount of literature about terrorist groups using social media, the history, growth, and threat of ISIS and how it uses Twitter and YouTube. However, the strength of the literature weakens when discussing ways to limit ISIS's use of social media to recruit followers and what the government's role could and should be in doing so. Some literature discusses the possibility of the United States waging "covert information operations" similar to undercover military maneuvers.
It is against this background that this study intends to examine the relationship between ICT and the promotion of international terrorism activities and to make recommendations on how these anomalies can be mitigated or nipped in the bud.
3. Research Questions
The following research questions guided the formulation of the objectives of the study:
What is the impact of new media on international security and world peace?
What is the impact of the innovative tendencies of social media on the recruitment of young people for terrorism?
What measures should be adopted to amelioration the dangers posed by ISIS, in the efforts towards maintaining international security and world peace?
4. Objectives of the Study
To ascertain the impact of new media on international security and world peace.
To examine the impact of the innovative tendencies of social media on the recruitment of young people for terrorism.
To suggest measures that should be adopted by the international community to ensure that the dangers posed by ISIS through globalization are grossly minimized.
5. Literature Review/ Conceptual Literature
5.1 The Concept of Security
According to Rudolph (2003) security has been the cornerstone in the study of international relations, essentially its raison d'être. However, like many other concepts in Political Science, it has been and continues to be notorious sly difficult to define/conceptualize. Security is a social construction; thus the term security has no meaning in itself; rather it is given a specific meaning by people within the emergence of an inter-subjective consensus. As a result, over the course of time the term comes to have a particular meaning, although it may change over time (Sheehan, 2005). In spite of the efforts by scholars to conceptualize the notion of security in a coherent manner, no one generally acceptable definition of security has yet been produced. In addition to the term being highly contested, some scholars have argued that the term is underdeveloped, so much so that it is inadequate for use. One of the reasons for this situation is attributed to the fact that the term is simply too complex to garner attention and has thus been neglected in favour of other concepts (Transnational Terrorism, Security and the Rule of Law, 2007). A further problem that Sheehan (2005) identifies is that the meaning of security has often been treated as being obvious and nonsensical.
The realist tradition has exercised an enormous amount of influence in the field of security studies, which in a sense has provided a baseline for other traditions (Elman, 2008). Realists harbour a narrow conception of security where security is defined in terms of states, militaries and the use of threat and force. The constructivist tradition argues that security is a social construction, in other words it means different things in different contexts. Two opposing actors will view security differently. It can also be seen as a site of negotiation and contestation, where actors will compete to portray the identity and values of a specific group in such way that it provides a foundation for political action. Constructivists view identity and norms as central to the study of security, as the two together provide "the limits for feasible and legitimate political action. Finally, agents and structures are mutually constituted, and because the world is one of our own making, even structural change is always possible even if difficult" (McDonald, 2008).
Even though security is an essentially contested and highly politicized concept, it is something that is desired by everyone (Williams, 2008; Eckersley, 2009). Security is said to imply an absence of threat (Robinson, 2008). Williams (2008) adds that security is also associated with the alleviation of threats to particular values, especially if those threats, if left unchecked, threaten the survival of a particular referent object. Security also gives individuals/groups the ability to pursue their cherished political and social ambitions. It is stated that a threat can be seen as "a combination of the capability and intent to do harm or enact violence" (Anderson, 2012). He adds that both of these components are required to constitute a threat. Thus, security implies that an individual/group is safe from harm or violent actions.
Anderson (2012) further states that the scope of security is broadened when looking at security at an international level. The threats now have international, transnational and multinational implications. Thus, threats can constitute the harm of individuals across the globe, even if the threat is only "directly" present in one area. According to Anderson (2012), it is also important to determine the agent and target (the referent object) of the particular threat. The agent of terrorism, as a security issue, can be defined as the terrorist or terrorist organization and the target (referent object) as individuals/groups/property (whichever the group believes will accomplish their specific goal).
According to Transnational Terrorism, Security and the Rule of Law (2007), global/international security is said to represent a programme of 'collective security' for the global populace. Thus, international security also relates to ensuring that when one state enhances its security, it does not threaten to reduce the security of a potential adversary; this ensures the maintenance of the overall stability of the international system (Sheehan, 2005).
5.2 The Concept of Globalisation
Furthermore, the exact definition of what globalization constitutes is also contested as scholars have very different conceptions of the term. Cha (2000) argues that one can best understand the phenomenon of globalization as a spatial one. Thus, globalization is not an event, but rather a steady and continuous expansion of processes of interaction and forms of organization, as well as forms of cooperation outside of the traditional boundaries defined by sovereignty (Cha, 2000). Some scholars liken globalization to interdependence, others to liberalization, whilst others even liken it to universalization, Westernization and imperialism (Nassar, 200). Many scholars argue that globalization can be identified as the leader in the spread of Western culture and practices around the globe. Additionally, Nassar (2005) as the "modern" practices of the West are spreading, they are replacing the older and more traditional ways of doing things. Furthermore, it is argued that the process of modernization (Westernization) that is associated with globalization can be seen as equivalent to the Americanization of the world. Steger (2013) agrees with Nassar in stating that globalization encompasses the Westernization and therefore the Americanization of the world.
There are a number of different facets to globalization; Heine and Thakur (2011) state that the primary aspect of globalization is concerned with the expansion of economic activities across the boundaries of nation states. This expansion has led to an increasing level of interdependence amongst nations and its citizens, through the "widespread diffusion of technology," as well as an increasing volume of cross-border flows of goods, services, investment and finance. Other aspects of the globalization process include the movement of information, ideas and people as well as cultural exchanges across international boundaries (Heine and Thakur, 2011).
According to Ervin and Smith (2008) globalization can be seen as the "shrinking" of the globe whilst there is an increasing amount of interaction between the different actors that are at play in the world. Another scholar adds that globalization can be defined as "an extension and intensification in the exchange of goods, persons, and ideas" (Zimmermann, 2011). Globalization is also said to refer to the diffusion of technology and culture (Li and Schaub, 2004). Nassar (2005) adds that globalization integrates markets, values, environmental concerns and politics across the globe. Furthermore, Cha (2000) argues that globalization can be seen as a spatial reorganization of industry, production, and finance amongst others, which causes local decisions to have a global impact. According to Kay (2004), the phenomenon of globalization can be best described as the "creation of a variety of trans-boundary mechanisms for interaction that affect and reflect the acceleration of economic, political and security interdependence." Thus, decisions made in one state affect the lives of citizens across the globe.
Some scholars are of the opinion that globalization improves security; whilst many others contend that it has created instances of declining international security. Many proponents of globalization view it as a facilitator of economic openness, global culture and political transparency. In addition, it channels common human standards and equality across the globe. This leads to an increasing sense of global proximity, which supposedly leads to cooperation, and increases security worldwide. In contrast, globalization is often viewed as a tool that large hegemonic states use to implement their economic "primacy" whilst other states lag behind. Thus, globalization is seen as threat (by those that lag behind and are disadvantaged), which must be fought against. States might seek to defend against the so-called threat that globalization poses, as groups or individuals organize to fight against the perceived dangers of globalization (Kay, 2004).
5.3 The Concept of Terrorism
Finding a definition for terrorism is not always considered easy, as there are a number of different definitions for the term. According to Schmid (2011) a legal definition for terrorism is still elusive even after being proposed by the League of Nations in 1937. No single definition of terrorism has received the international stamp of approval. Thus, it is not surprising that terrorism is a politically loaded and contested concept for which hundreds of often diverging definitions exist (Schmid, 2011b:694). Schmid (2011) and Easson and Schmid (2011:148) provide over 250 different academic, governmental and intergovernmental definitions of terrorism in The Routledge Handbook of Terrorism Research, the definitions range from as early as 1794, where Robespierre defines terror to as recently as 2010. The extensive range of definitions provided allows one to see how definitions of terrorism have evolved and developed over time.
Furthermore, it allows one to see that an agreed upon definition of terrorism has been elusive for some time and is not merely a contemporary problem. One of the major problems the United Nations has experienced in developing an internationally agreed upon conceptualization of terrorism, are the reservations of Arab and Muslim countries (Schmid, 2011). Hoffman (2006) attributes the difficulty in defining terrorism to the fact that the meaning of term has changed so frequently throughout history. The meaning and usage of the term have changed over the course of history in order to accommodate the political discourse of each successive era. Furthermore, the term has become increasingly elusive with the passage of time (Hoffman, 2006).
For Cilliers (2003), terrorism can be described as the illegitimate use or threat of violence against individuals or property to coerce governments or societies for the purposes of political objectives. Rogers (2008) adds that terrorism also makes use of this fear borne out of the threat of violence to gain public attention. According to Cilliers (2003) terrorism is different from other forms of organized criminal behaviour in that its proponents do not act to gain financially or economically. Thus, terrorists act to gain politically or to make a point. In addition, terrorism can be regarded as a premeditated act or actions. It is planned before the terrorist actions are carried out. It does not just occur randomly.
Even though the UN has thus far failed to develop a comprehensive agreed upon definition of terrorism, it has made some progress in combating terrorism. For example, the Security Council resolution 1566 (2004) includes several measures that will strengthen the role of the UN in its efforts to combat terrorism. The UN has emphasized that achieving a consensus definition within the General Assembly will hold enormous value given the General Assembly's unique legitimacy in normative terms. It is thus important that the General Assembly complete negotiations on a comprehensive convention on terrorism, as soon as possible (United Nations, 2015). This definition of terrorism should include a number of elements, namely: a "recognition, in the preamble, that State use of force against civilians is regulated by the Geneva Conventions and other instruments, and, if of sufficient scale, constitutes a war crime by the persons concerned or a crime against humanity"; a restatement that acts falling under the previous 12 anti-terrorism conventions are regarded as terrorism, and a declaration that these acts are deemed a crime under international law; furthermore there should be a restatement that terrorism in time of armed conflict is prohibited under Geneva Conventions and Protocols; and reference must also be made to the definitions contained in the 1999 International Convention for the Suppression of the Financing of Terrorism and Security Council resolution 1566 (2004) (United Nations, 2015).
Lastly, terrorism is described as: any action, in addition to actions already specified by the existing conventions on aspects of terrorism, the Geneva Conventions and Security Council resolution 1566 (2004), that is intended to cause death or serious bodily harm to civilians or non-combatants, when the purpose of such an act, by its nature or context, is to intimidate a population, or to compel a Government or an international organization to do or to abstain from doing any act.
In comparison to the UN, the African Union (AU) has developed a definition of terrorism that has been ratified by the majority of member states. According to the then Organization for African Unity (OAU, 1999), as set out in the OAU Convention on the Prevention and Combating of Terrorism, a terrorist act can be defined as:
any act which is a violation of the criminal laws of a State Party and which may endanger the life, physical integrity or freedom of, or cause serious injury or death to, any person, any number or group of persons or causes or may cause damage to public or private property, natural resources, environmental or cultural heritage and is calculated or intended to:
intimidate, put in fear, force, coerce or induce any government, body, institution, the general public or any segment thereof, to do or abstain from doing any act, or to adopt or abandon a particular standpoint, or to act according to certain principles; or
Disrupt any public service, the delivery of any essential service to the public or to create a public emergency; or
Create general insurrection in a State.
any promotion, sponsoring, contribution to, command, aid, incitement, encouragement, attempt, threat, conspiracy, organizing, or procurement of any person, with the intent to commit any act referred to in paragraph (a) (i) to(iii).
6. Case Studies
6.1 Boko Haram
The Islamic sect popularly known as Boko Haram, officially "Jama'atul Alhul Sunnah Lidda'wati wal Jihad" (People Committed to the Propagation of the Prophet's Teachings and Jihad), has unleashed a wave of terror upon the populace of Northern Nigeria in the last few years (Agbiboa, 2013; Bamidele, 2012). This has become a nation-wide and even a global concern, especially with such events as the kidnapping of the Chibok girls. Bamidele (2012 mentions that on a daily basis websites, magazines and news channels run stories as well as pictures of the acts of violence perpetrated by the group. Bamidele (2012) argues that the group emerged in 2002, even though it only became prominent in 2009. On the other hand, Connell (2012:88) argues that the group was founded much earlier, in 1995, under the original name of "Ahlulsunna wal'jama'ah hijra." It is thus not surprising that the majority of scholars agree that the precise date relating to the emergence of Boko Haram is unclear (Maiangwa, Uzodike, Whetho, and Onapajo, 2012; Onuoha, 2013). This lack of clarity is as a result of the fact that very few journalists/scholars have been granted the opportunity to interview the group, which in effect makes it necessary to rely on unverified accounts.
The majority of literature on Boko Haram is also somewhat inconclusive with regard to the real purpose behind its creation and existence (Bamidele, 2012). Some scholars argue that the group's roots lie in the "Maitatsine" doctrine (a brand of fundamentalist Islam introduced to northern Nigeria in 1945). On the other hand, others argue that Boko Haram emerged as a part of the resurgence of Islamic militant movements globally (Bamidele, 2012).
However, Onapajo and Uzodike (2012) dispute the claim that Boko Haram's roots can be found in the Maitatsine group and their uprisings of the 1980s. Hussein Solomon aptly describes the reasons behind the emergence of Boko Haram. He argues that the group emerged in response to local grievances in Nigeria, including: an increasing dissatisfaction with deteriorating living conditions, especially in the north, an unresponsive and corrupt political elite and a Nigerian state that has reinforced religious divisions and has been unable to transcend the many divisions of ethnicity, language and religion (Oyeniyi, 2014; Aghedo and Osumah, 2012; Cook, 2014).
One can arguably link the group's motivations to the meaning of their name. Boko Haram is a Hausa term that is loosely translated into "Western education is forbidden." This translation has however been rejected by the group which prefers "Western culture is forbidden," as it is broader and includes education (Agbiboa, 2013). As the name suggests, the group is opposed to everything they believe to have been infiltrated by Western beliefs and values. Boko Haram believes that the infiltration of Western beliefs and values, including Western style education, poses a threat to the traditional beliefs, values and customs of the Muslim communities of northern Nigeria (Forest, 2012).
According to Oyeniyi (2017), the group's hatred of Western education stems partly from a longstanding negative attitude that Muslims of northern Nigeria have harboured against Western education. Thus, the group has vowed to rid the Nigerian state of the corrupt ruling elite (who have been perverted by the decadence of Western culture) and institute what it believes to be religious purity (Agbiboa, 2013). Connel (2012) adds that the principle objective of the group has been the toppling of the secular Nigerian government and the implementation of a government based on anti-Western Sharia law. In addition, Onapajo and Uzodike (2012) argue that the group wants to establish an entire socio-political system based on the Islamic model. In 2014, the group declared the establishment of an Islamic State in northern Nigeria (BBC, 2014).
The group further expanded their aims following the execution of their leader, Mohammed Yusuf in 2009. Boko Haram now aimed to violently engage with the state security structure as a means of retaliation. The group also stated that they were aiming to convert former President Goodluck Jonathan from Christianity to Islam and evict non-Muslims from northern Nigeria (Oyeniyi, 2014). The death of Yusuf arguably led to the further radicalization of the group (Onuoha, 2013). The Islamic militant group has been seen to target individuals/objects that it has perceived as being corrupted by Westernization. Although most of its targets are not overtly Western, many of their targets have embraced the "decadence" associated with globalization/Westernization and have turned away from true Islam (Walker, 2012). Their aim is thus to purify Islam in the Nigerian state.
Other areas that scholarship has examined relate to the structure, funding and membership of Boko Haram (Connell, 2012; Forest, 2012; Onuoha, 2013, Pate, 2015). Not much is known about the group's structure, but scholars have briefly discussed the changing leadership (Onuoha, 2013:136). They have argued that very little is publicly known about Boko Haram's sources of finance (Forest, 2012; Connell, 2012; Stewart and Wroughton, 2014). Connell (2012) and Onuoha (2012:137) mention that members had to pay a daily levy to their leaders and other funds came from donations. Forest (2012) agrees that much According to the majority of accounts, the group draws its membership from the ranks of disaffected youths, unemployed graduates and former street children (Almajaris) (Onuoha, 2013; Waldek and Jayasekara, 2011; Pate, 2015).
6.2 The Al Shabaab
The first aspect that scholars tend to focus on relates to the origin of the terrorist group. Marchal (2009) delves into the phenomenon of radical Islam within Somalia in order to develop an understanding of the dynamics that led to the creation of Al Shabaab. This sets the stage for the origin of the group. According to Roque (2009) there is no consensus regarding their exact date of origin. It is however known that the group sprouted as the militant remnant of al Itihaad al Islamiya (AIAI), a Somali Islamist Organization, in the early 2000s (Wise, 2011). Two different scholars trace Al Shabaab's origin to 2004 (Marchal, 2009; Mwangi, 2012). The group was formally incorporated at an AIAI conference in 2003, but the name "Al Shabaab" only came into use in 2007 (Shinn, 2011).
Wise (2011) links Al Shabaab's origin to Somalia's tumultuous past, whilst he relates their radicalization and rise to prominence to the Ethiopian invasion in December of 2006. He further states that the period between the Ethiopian invasion on 24 December 2006 and early 2008 can be marked as the true emergence of the group (Wise, 2011). Murphy (2011) agrees with Wise in stating that the Islamist group fed off the resentment that Somalis felt towards the presence of the Ethiopian military. In addition, Roque (2009) and Hansen (2013) support their argument. They argue that the presence of the Ethiopian military forces, in addition to that of the Transitional Federal Government (TFG) forces prepared the ground in which organized radical responses could flourish (Roque, 2009:2; Hansen, 2013). The creation of Al Shabaab was thus a radical response to the presence of the Ethiopian military. Mwangi (2012) also shares the sentiments of these scholars. The occupation by Ethiopian troops created "a complex cocktail of nationalist, Islamist, anti-Ethiopian, anti-American, anti-Western and anti-foreigner sentiments" (Mwangi, 2012).
Al Shabaab originally emerged as an Islamist-nationalist guerrilla movement dedicated to combatting the insurgence of Ethiopian troops as well as the TFG forces (Wise, 2011:6). This was one of its principal goals/rallying points, but they would have to seek a new means of staying "relevant" with the withdrawal of Ethiopian troops in 2009 (Roque, 2009). The exact aims of the group are somewhat murky and unclearly expressed. However, Wise (2011), Mwangi (2012) and Ali (2008) argue that the group aims to establish a Somali Caliphate (Islamic State for Somalis of Somalia, Djibouti, Kenya and Ethiopia). This would entail taking over Somalia and spreading their ideology throughout the Horn of Africa. In addition, they also wish to spread their ideological beliefs onwards to the areas of Central, South and Eastern Africa (Ali, 2008).
Wise (2011) and Ali (2008) add that the group aims to wage jihad (holy war) against the enemies of Islam; this includes the removal of Western influence (something one can link to globalization), not only in Somalia or even the Horn of Africa, but also throughout the whole of Africa. Furthermore, they also wish to eliminate all other forms of Islam that are not in line with their Salafi-Wahhabist strand (Wise, 2011; Ali, 2008). In order to achieve these goals and to win favour amongst the populace, the group has provided the citizenry with essential services and welfare. They have cleared roadblocks, repaired roads, organized markets and re-established order and a justice system through employing Sharia courts (Roque, 2009). By continuously expanding their local community infrastructure and support, Al Shabaab is able to sustain its goal of jihad.
6.3 Empirical Literature
Skillicorn (2015) of Queen's University has conducted an empirical assessment of propaganda, focusing on Al Qaeda, ISIS, and the Taliban. The author conducted a research on three magazines made by each terrorist organization in order to measure the level of propaganda intensity and found that ISIS ranked the highest out of the three. He also proposed a combined model of propaganda, in which imaginative language, deception, and gamification of language were usual, and informative language and complexity were unusual (Skillicorn 2015). Though some of Skillicorn's findings are relevant to this study, the author mainly targeted the intensity of propaganda and attempted to measure it amongst competing organizations. Author and former US Special Operations Command advisor, James P. Farwell (2014) explored ISIS' power on the digital stage. He claimed that the group's appeal was as a fearsome warrior clan on a crusade against the west, a persona that quickly caught the attention of social media and spread like wildfire. Farwell (2014) claims that the only way to defeat ISIS is through systematic discretization and the destruction of this warrior persona they have built up around themselves. While this is also a very interesting point, and completely relevant to my research, Farwell overlooks the complexity of this warrior persona and does not delve deeper into how it was communicated and why it stuck with its audience.
Katagiri (2014) of the Department of International Security Studies, Air War College, has documented the threat of ISIS, including its well-known propaganda machine. Katagiri (2014) claims that this new wave of psychological and informational warfare has boosted ISIS' popularity and will continue to be the thorn in the side of western powers seeking to oust them from both the Middle East and the world at large. Katagiri (2014) concludes, however, that the power still lies with the western nations and not insurgent groups, though the former must be very careful that the balance does not tip out of their favor. Katagiri (2014) provides suggestions for the United States government, offering a deeper look at ISIS propaganda on both a communicative and psychological level, and suggesting ideas on how to counter and combat it.
7. Theoretical Framework
The study adopted the Cybernetics Theory which poses as the science of interactions on which the communication theory is anchored on. As such, the explanations in this context will however be provided with emphasis on the theoretical explanations of the communication theory of Karl Deutsch. The chief proponent of the cybernetics theory was Norbert Wiener (1948), who described the theory as a trans-disciplinary approach for exploring regulatory systems, their structures, constraints, and possibilities. According to Wiener, cybernetics poses as "the scientific study of control and communication in the animal and the machine. Cybernetics is thus applicable when a system being analyzed incorporates a closed signalling loop - originally referred to as a "circular causal" relationship - that is, where action by the system generates some change in its environment and that change is reflected in the system in some manner (feedback) that triggers a system change. Cybernetics is relevant for both, mechanical, physical, biological, cognitive, and social systems. The essential goal of the broad field of cybernetics is to understand and define the functions and processes of systems that have goals and that participate in circular, causal chains that move from action to sensing to comparison with desired goal, and again to action. Its focus is how anything (digital, mechanical or biological) processes information, reacts to information and changes or can be changed to better accomplish the tasks earlier mentioned. Cybernetics includes the study of feedback, black boxes and derived concepts such as communication and control in living organisms, machines and organizations including self-organization. In the 21st century, the term is often used in a rather loose way to imply "control of any system using technology.
In applying the theory to the study, it is clearly evident that international terrorism as evolved over the years is highly premised on a network of interactions and communication. As such, efforts have so far been made by this study in providing explanations on the role played by communication which is highly dependent on the level of flow of information, in aiding the activities of global terrorism with specific emphasis on ISIS. The emphasis of the theory on negative feedback mechanism however defines the revolution behind ICT, especially with reference to the utilization of the social media tools such as Facebook, WhatsApp, Instagram, YouTube, Online gaming by newly emerged terrorism network like ISIS and the attendant consequences it has generated in the society and the world at large. Specifically, the cybernetics theory is relevant to the study and can be effectively used to explain how the characteristics of communication channels as well as, social networks can be valuable for group activism like the ones perpetrated by ISIS, through the unfortunate establishment of ties between people from different background and diverse cultures across the world. It is worthy to note that, the anonymity provided by the internet, the egalitarian nature of Information and Communication Technology today have no doubt proven to be useful in providing information and opening of opportunities at a low cost.
The research design that was adopted for this study is the descriptive and historical methods. In adopting this research method, the efforts were geared towards investigating the impact of globalization on terrorism. This design aided the researcher in gathering data from the relevant secondary sources so as to enable the conduct of a proper research, as well as, make inquiry into the various strategies employed by terrorist organizations to carry out their operations. Other relevant information that this design allowed the researcher to gain access to were those obtainable in other countries, as well as, radical Islamic sects that have the same modus operandi like the ISIS. This enabled the researcher in drawing a valid conclusion for the study.
Based on the structure of this study, the independent variables include information and communication technology and social media, while the various forms of terrorist activities were the dependent variables. The data collected for this study was analyzed qualitatively. Qualitative analysis is essentially normative oriented, which by its nature, is critical in perspective. It is also largely based on theory and logic. This is because it attempts to understand historical development and explain socio-political conditions in totality, and to address social problems not only objectively but also historical (Creswell, 2014).
9. Evaluation of Research Questions
9.1 Research questions 1
Extremist terrorist groups like the Al Qaeda and ISIS have both incorporated a hybrid communication structure. This structure allows both groups to centrally control communication strategies and propaganda themes. By expanding into expansive social media networks, al Qaeda and ISIS distribute propaganda flatly and quickly to global audiences. Although both groups utilize hybrid communication structures, ISIS attempts to control each section of its structure through many control mechanisms. Both groups utilize several social media outlets to further distribute propaganda. By distributing and redistributing large quantities of propaganda throughout popular new media technologies, al Qaeda and ISIS expose large audiences to recruiting propaganda, maintain strong online presences, and thereby attract attention and potentially recruit new members. Both groups survey audiences to focus propaganda. Although al Qaeda collects information from social media users, most propaganda is tailored toward Muslims. ISIS, on the other hand, uses popular or trending topics to mask its propaganda or redirect social media users to other propaganda sites.
Additionally, unlike al Qaeda, ISIS uses social media to predict audience support prior to releasing major propaganda pieces. Since online games attract huge numbers of networked players, al Qaeda and ISIS have both developed video games to access these vast audiences. Al Qaeda's games maintained a focus on Muslims by centering on the defense of Islam. ISIS, on the other hand, mirrored its game after another widely popular online game. This way, ISIS, exploited the popularity of another new media technology or topic and thereby benefited through association or assimilation.
It must be noted here that a stark contrast exists between al Qaeda and ISIS messages. Al Qaeda attempts to recruit others to commit terrorism against non-Muslims anywhere in the world. Alternatively, ISIS is focused on a violent revolution in Muslim majority countries rather than attacking their Western sponsors. Therefore, ISIS is concerned with gaining and maintaining control of its territory, and they would like nothing more than for the West to "leave it alone to establish the Utopia. By projecting a large and successful online image, ISIS also invites others to join them in their success.
From whatever perspective, the activities of both al Qaeda and ISIS are inimical to peaceful existence, hence considered a threat to international security and world peace. This is because, information and communication technology has bridged the information divide that hitherto existed thereby bringing about easy access to all sorts of news which has the potency of influencing character and forming habit, most of which does not promote peaceful living. On this note, it can be affirmed on grounds of the potency and efficacy with which information and communication technology provides a fertile ground for terrorism to thrive with ease.
Blogs and social media are key components of violent extremist groups' active recruiting strategies. Blogs inadvertently assist violent extremist groups in narrow-casting their propaganda, and they allow them to identify trending or popular online topics. Because they center on specific issues, blogs sort audiences by demographic, sex, religion, or a host of other factors. Once audiences are narrowed by blog topic, violent extremist groups can interact with audiences, gather preferences, and narrowcast propaganda toward these focused audiences.
Since blog sites identify trending or popular online topics, violent extremist groups can manipulate propaganda to align with these popular issues. By linking propaganda to popular topics or hiding propaganda behind related titles, violent extremist groups benefit from the large amount of attention and activity that blogs generate. By hiding propaganda behind misleading titles, violent extremist groups can entice bloggers to follow links to propaganda or interact unknowingly with violent extremists. Facebook is the most popular social media venue online.
Within social media, users produce and consume an enormous number of videos, movies, audio clips, and several other types of media. Violent extremist groups also produce vast quantities of propaganda and utilize overt and surreptitious recruiting tactics when discussing or disseminating propaganda throughout social media networks. Some propaganda is obvious. It contains images of violence, logos from violent extremist groups, or is directly attributable to violent extremist groups. Other propaganda is less obvious. It portrays members as humanitarian aid workers or helping the poor.
Regardless of tactic, and like blogs, social media allows violent extremist groups to observe and interact with vast numbers of social media users, publicize propaganda, and project a huge online presence throughout globally distributed social media networks. Because social media is easy to use and involves expansive audiences, violent extremist groups have fully incorporated it into recruiting strategies. Since social media attracts enormous and networked audiences, violent extremist groups push their members to exploit social media to its fullest. In view of the above analysis, it can be affirmed that the innovative tendencies of social media tend to promote the recruitment of young people for terrorism purposes.
10. Discussion
The evaluation of question one revealed that the hybrid communication structure of extremist terrorist groups as facilitated by information and communication technology has the tendency to promote threat to international security and world peace. This is because of the user-friendly platform which makes it easy for sharing information like propaganda to be distributed online. This finding agrees with the views of Goodman, Kirk and Kirk (2007) who stated that there are many characteristics of the Internet, or cyberspace as they refer to it, that creates an environment conducive to the promotion of the ideas and ideals of terrorist organizations. These include anonymity, confidentiality, accessibility, low costs, intelligent interfaces, ease of use and the "force multiplier". The Internet provides users with an uncensored and essentially anonymous forum, which they can use as a means of conducting research, gathering intelligence and creating communication networks. Moreover, studies on terrorist communication have revealed a concern for the protection of anonymity, for example many posts on terrorist websites inform "users" of ways in which they can avoid spyware and surveillance. Furthermore, the free availability of encryption programmes has also provided terrorist organizations with the ability to communicate with one another via secure conduits without "detection." In addition, it is also extremely difficult to effectively track terrorist communications when they are utilising emails, as account information is usually anonymous, or the email messages are encrypted.
The finding also agrees with the view of Cronin (2003) that the increased availability of and access to ICT, specifically the Internet, has made it much easier for terrorist organizations to communicate, plan and coordinate attacks. Thus, ICT has essentially aided terrorists in their aims and could be said to have facilitated international terrorism. Furthermore, the mere existence and evolution of cyberspace has created a new type of terrorism that could possibly be used in conjunction with traditional terrorist attacks. It is however important to note that technology has not encouraged international terrorism, but that it has only aided/facilitated it. It is argued that globalization has also allowed terrorist organizations to move and reach across international borders in the same way that business and commerce do. In addition, terrorist organizations often make use of the same channels as business and commerce. For example, the dropping of barriers has enabled terrorist organizations such as Al Qaeda to move without prohibition across borders and establish terrorist cells in states around the globe.
The evaluation of question two revealed that the innovative tendencies of social media tend to promote a new trend of international terrorism and its expansion. This is because social media gives many people's access to videos, movies, audio clips, and several other types of media that promotes the new trend of terrorism at the international level. This finding agrees with the view of Weimann (2014) that today, 90 percent of terrorist groups' communications over the Internet are accomplished through social media. Younger people favour social media, which is free, as well as interactive. Social media enables anyone to share messaging, contribute to the discussion, and even ask questions of terrorist group leaders. Weimann (2014) goes on to illustrate just how quickly social media use and technology is evolving. Facebook, which began in 2004, had 1.31 billion users a decade later. YouTube, which began in 2005, as of last year counted 100 hours of video uploaded each minute. And Twitter, which started in 2006, had 555 million Twitter users tweeting about 58 million tweets a day last year.
The aim of this study was to examine the relationship between ICT and international terrorism. It was observed that ICT which was supposed to be an innovative tool that should no doubt contribute to accelerate development of productive forces, scientific and technological progress, as well as a more intensive and productive communication among states and their people has become a self-interested, inexorable, corrupting market culture into traditional communities. This is because it has provided a motivation for terrorist activities, through facilitating methods for it such as: computerization, digitization, satellite communication, optic fibre and the internet. To address the above issues, this researcher was interested in examining how the new media pose any identifiable threat on international security and world peace; the impact of the innovative tendencies of social media on international terrorism; and if modern communication technologies have aided ISIS in perpetrating terrorist activities.
12. Recommendations
Based on the findings of this study, the following recommendations have been reached:
Understanding terrorist recruitment through information and communication technology is vital to counterterrorism. Terrorism does not start solely from macro-level root causes. Without members, terrorist organizations cannot exist. Understanding how and why an individual is radicalized and recruited into a terrorist organization is therefore an important part of addressing the macro level root causes via the micro-level radicalizing factors.
Cyberspace is an international domain without state borders so the issue of terrorists operating within cyberspace cannot be addressed by a single state. There needs to be discourse within the United Nations and other international bodies so that a clear understanding can be established among governments across the world. The current issue is that no major power really wants to address the issue of cyber security because they have become reliant on cyber espionage.
Creating laws to limit actions a legitimate actor can take in cyberspace would most likely limit, if not completely outlaw, mass surveillance programs that many states use. By publicly discussing the issue states may potentially weaken themselves by taking away one of their tools for self-defense: intelligence gathering. Because of the security dilemma states are unlikely to do this so international regulation on actions within cyberspace are unlikely to progress until a major power steps forward and really pushes a pro-cyber security agenda.
States need to step forward and take responsibility in establishing norms of good faith relationships in cyberspace. Until this happens there will be few international laws placed on the internet and few international restrictions placed on terrorists looking to operate within cyberspace. States need to make a choice between sacrificing their ability to operate freely in cyberspace and allowing terrorist free reign to engage in propaganda campaigns and cyber-attacks. As it is now, the internet is a very attractive option for terrorists because they do not need to fear international pressures. If a terrorist uses the internet to attack a state it is the responsibility of that state to respond and no one else. This limits the amount of resistance terrorists' face by using the internet as a tool for insurgency. It is much easier for the terrorist to then make a cost-benefit analysis on whether to use the internet due to a lack of external influence.
States need to engage in multilateral discussions and agreements that will regulate cyberspace. The first step in this discussion is to establish norms of peaceful cohabitation in cyberspace. States need to stop covertly hacking each other for information and instead rely on other means to obtain intelligence. Private Citizens are so used to hearing about major hacks that they are becoming desensitized to cyber violence and norms established against these kinds of action would do a good job of reminding citizens of the dangers present on the internet. Cyberwar is at its heart information warfare and governments need to realize that they are not enemies in information warfare. The real enemies are terrorist and cybercriminals who will use tensions between different states in cyberspace to exploit the system and strengthen their own insurgency while delegitimizing real governments.
(Sp)oiling Domestic Terrorism? Boko Haram and State Response Agbiboa DanielE. Peace Review.2013-jul;:431-438. CrossRef Google Scholar
Find in text
The Boko Haram Uprising: how should Nigeria respond? Aghedo Iro, Osumah Oarhe. Third World Quarterly.2012-jun;:853-869. CrossRef Google Scholar
Put your brand online Kelly Marielle. Social Media for Your Student and Graduate Job Search.2016;:35-50. CrossRef Google Scholar
Defining–Redefining Security Buzan Barry, Hansen Lene. .2018. CrossRef Google Scholar
NIGERIA: Will Boko Haram Talk Peace? Africa Research Bulletin: Political, Social and Cultural Series.2012-mar;:19170-19172. CrossRef Google Scholar
Boko Haram and Islamic State Comolli Virginia. .2017. CrossRef Google Scholar
Globalization and the Study of International Security Cha VictorD. Journal of Peace Research.2000-may;:391-403. CrossRef Google Scholar
Theory of an Emerging-State Actor: The Islamic State of Iraq and Syria (ISIS) Case † Clancy Timothy. Systems.2018-may. CrossRef Google Scholar
TERRORISM AND AFRICA CILLIERS JAKKIE. African Security Review.2003-jan;:91-103. CrossRef Google Scholar
How and why Boko Haram blossomed: examining the fatal consequences of treating a purposive terrorist organisation as less so Eke SurulolaJames. Defense & Security Analysis.2015-oct;:319-329. CrossRef Google Scholar
Book Review: Creswell, J., & Plano Clark, V. (2007). Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage Yu ChongHo. Organizational Research Methods.2008-aug;:801-804. CrossRef Google Scholar
Behind the Curve: Globalization and International Terrorism Cronin AudreyKurth. International Security.2003-jan;:30-58. CrossRef Google Scholar
The Routledge Handbook of Terrorism Research Schmid Alex. .2011. CrossRef Google Scholar
Rethinking Insecurity, War and Violence .2008. CrossRef Google Scholar
Realism and security studies Wohlforth WilliamC. The Routledge Handbook of Security Studies.. CrossRef Google Scholar
Courts and Trials: A Reference Handbook200412Christopher E. Smith. Courts and Trials: A Reference Handbook. Santa Barbara, CA and Oxford: ABC-Clio 2003. xii $\mathplus$ 263 pp., ISBN: 1 57607 933 3 £34.95/$65 Contemporary world issues Also available as an e-book ISBN 1 57607 934 1 Hodgson Jane. Reference Reviews.2004-jan;:19-20. CrossRef Google Scholar
The Media Strategy of ISIS Farwell JamesP. Survival.2014-nov;:49-55. CrossRef Google Scholar
2012 JSOU and NDIA SO/LIC Division Essays FL JOINTSPECIALOPERATIONSUNIVMACDILLAFB. .2012. CrossRef Google Scholar
Cyberspace as a medium for terrorists Goodman SeymourE, Kirk JessicaC, Kirk MeganH. Technological Forecasting and Social Change.2007-feb;:193-210. CrossRef Google Scholar
Al-Shabaab in Somalia Hansen StigJarle. .2013. CrossRef Google Scholar
Book Review: Heine, J. and Thakur, R., editors, 2011: The Dark Side of GlobalizationHeineJ.ThakurR., editors, 2011: The Dark Side of Globalization. Tokyo and New York: United Nations University Press. xvi $\mathplus$ 302 pp. £22.79 (paperback). ISBN 798-92-808-1194-0 (paperback). Thoburn John. Progress in Development Studies.2012-nov;:82-84. CrossRef Google Scholar
John Calvert. Sayyid Qutb and the Origins of Radical Islamism. New York: Columbia University Press, 2010 Daadaoui Mohamed. Journal of Terrorism Research.2011-jan. CrossRef Google Scholar
ISIL, insurgent strategies for statehood, and the challenge for security studies Katagiri Noriyuki. Small Wars & Insurgencies.2015-mar;:542-556. CrossRef Google Scholar
Globalization, Power, and Security Kay Sean. Security Dialogue.2004-mar;:9-25. CrossRef Google Scholar
Economic Globalization and Transnational Terrorism Li Quan, Schaub Drew. Journal of Conflict Resolution.2004-apr;:230-258. CrossRef Google Scholar
"Baptism by Fire": Boko Haram and the Reign of Terror in Nigeria Maiangwa Uzodike Whetho Onapajo Africa Today.2012. CrossRef Google Scholar
A tentative assessment of the Somali Harakat Al-Shabaab Marchal Roland. Journal of Eastern African Studies.2009-oct;:381-404. CrossRef Google Scholar
Constructivism and securitization studies Balzacq Thierry. The Routledge Handbook of Security Studies.. CrossRef Google Scholar
Far-right Media on the Internet: Culture, Discourse and Power Atton Chris. An Alternative Internet.2004;:61-90. CrossRef Google Scholar
William R. Beer. $\less$italic$\greater$The Unexpected Rebellion: Ethnic Activism in Contemporary France$\less$/italic$\greater$. New York: New York University Press$\mathsemicolon$ distributed by Columbia University Press, New York. 1980. Pp. xxxii, 150. $18.50 The American Historical Review.1981-dec. CrossRef Google Scholar
State Collapse,Al-Shabaab, Islamism, and Legitimacy in Somalia Mwangi OscarGakuo. Politics, Religion & Ideology.2012-dec;:513-527. CrossRef Google Scholar
Globalization and terrorism: the migration of dreams and nightmares Choice Reviews Online.2005-apr;:42-4892. CrossRef Google Scholar
Boko Haram terrorism in Nigeria Onapajo Hakeem, Uzodike UfoOkeke. African Security Review.2012-sep;:24-39. CrossRef Google Scholar
One Voice, Multiple Tongues: Dialoguing with Boko Haram Oyeniyi BukolaAdeyemi. Democracy and Security.2014-jan;:73-97. CrossRef Google Scholar
Why the Department of Homeland Security Needs an Office of Net Assessment Forrest Patrick, Hilliker Alex. Risk, Hazards & Crisis in Public Policy.2012-sep;:1-18. CrossRef Google Scholar
Dictionary of International Security2009108Paul Robinson. Dictionary of International Security. Cambridge and Malden, MA: Polity Press228 pp., ISBN: 978 0 7456 4027 3 (hardback)$\mathsemicolon$ 978 0 7456 4028 0 (paperback) £45 $54.95 (hardback)$\mathsemicolon$ £12.99 $19.95 (paperback) Chalcraft Tony. Reference Reviews.2009-mar;:15-17. CrossRef Google Scholar
Introduction Vaughan-Williams Nick. Terrorism and the Politics of Response.2008;:1-15. CrossRef Google Scholar
institute-for-security-studies-situation-report .. CrossRef Google Scholar
Globalization and Security: RUDOLPH CHRISTOPHER. Security Studies.2003-oct;:1-32. CrossRef Google Scholar
The Definition of Terrorism Schmid AlexP. The Routledge Handbook of Terrorism Research.. CrossRef Google Scholar
Glossary and Abbreviations of Terms and Concepts Relating to Terrorism and Counter-Terrorism Schmid AlexP. The Routledge Handbook of Terrorism Research.. CrossRef Google Scholar
Sheeham, Michael, International Security. An Analytical Survey, Boulder, co, Lynne Rienner Publishers, 2005, 201 p. Fontanel Jacques. Études internationales.2006. CrossRef Google Scholar
Al Shabaab\textquotesingles Foreign Threat to Somalia Shinn David. Orbis.2011-jan;:203-215. CrossRef Google Scholar
Empirical Assessment of Al Qaeda, ISIS, and Taliban Propaganda Skillicorn David. SSRN Electronic Journal.2015. CrossRef Google Scholar
Globalization Steger Manfred. .2013. CrossRef Google Scholar
Danijel Dzino and Ken Parry, eds., Byzantium, Its Neighbours and Its Cultures. (Byzantina Australiensia 20.) Brisbane: Australian Association for Byzantine Studies, 2014. Pp. 294$\mathsemicolon$ some figures. AU $40. ISBN: 978-1-876503-01-7.Table of contents available online at http://www.aabs.org.au/byzaust/byzaus20/ (accessed 15 March 2016) Sophoulis Panos. Speculum.2016-jul;:777-778. CrossRef Google Scholar
The Role of the Security Forces in Combating Terrorism Wither JamesK. Combating Transnational Terrorism.2016;:131-148. CrossRef Google Scholar
The United Nations global counter-terrorism strategy .2008-nov. CrossRef Google Scholar
Prisons, Terrorism and Extremism Silke Andrew. .2014. CrossRef Google Scholar
Boko Haram: the evolution of Islamist extremism in Nigeria Waldek Lise, Jayasekara Shankara. Journal of Policing, Intelligence and Counter Terrorism.2011-oct;:168-178. CrossRef Google Scholar
United States Institute of Peace (USIP) The Grants Register 2018.2018;:733-733. CrossRef Google Scholar
Teaching problem-solving in undergraduate mathematics by M.S. Badger, C.J. Sangwin and T.O. Hawkes with R.P. Burns, J. Mason and S. Pope, pp 142, £5.25 plus postage, available as print on demand at: http://www.lulu.com/shop/product-20405493.html or as free pdf download at: http://mellbreak.lboro.ac.uk/problemsolving/sites/default/files/guide/Guide.pdf (2012). Villiers MichaelDe. The Mathematical Gazette.2014-nov;:547-549. CrossRef Google Scholar
$\less$sc$\greater$jonathan m. wiener$\less$/sc$\greater$. $\less$italic$\greater$Social Origins of the New South: Alabama, 1860–1885$\less$/italic$\greater$. Baton Rouge: Louisiana State University Press. 1978. Pp. xiii, 247. $14.95 The American Historical Review.1979-dec. CrossRef Google Scholar
Security Studies Williams PaulD. .2008. CrossRef Google Scholar
Precipitating the Decline of Al-Shabaab: A Case Study in Leadership Decapitation Butler BrettM. .2015. CrossRef Google Scholar
Globalization and terrorism Zimmermann Ekkart. European Journal of Political Economy.2011-dec. CrossRef Google Scholar
International Journal of Scientific Research and Management, 2018.
Page No.: PS-2018-248-263
Section: Political Science
DOI: https://doi.org/10.18535/ijsrm/v6i8.ps01
Uwak, D., Egemuka, U. E., & Collins, C. (2018). Information and Communication Technology (ICT) and International Terrorism: Boko Haram and Al Shabaab in Perspective. International Journal of Scientific Research and Management, 6(09), PS-2018. https://doi.org/10.18535/ijsrm/v6i8.ps01
HTML Viewed - 893 Times
PDF Downloaded - 125 Times
XML Downloaded - 41 Times | CommonCrawl |
\begin{document}
\title{Terry vs an AI, Round 1: Heralding single-rail (approximate?) 4-GHZ state from squeezed sources.}
\author{Terry Rudolph} \affiliation{Dept. of Physics, Imperial College London, London SW7 2AZ} \affiliation{PsiQuantum, Palo Alto.} \email{[email protected]} \date{\today}
\begin{abstract} The potential for artificial intelligence (AI) to take over the work of physicists should be treated with glee. Here I evaluate one of the scientific discoveries in quantum photonics made by a leading AI in the field, in order to try and gain insight into when I will be allowed to go spend my days sipping mezcal margaritas on a warm beach. My analysis leads me to the distressing conclusion that it may, in fact, be quite a while yet. \end{abstract}
\pacs{}
\maketitle
\emph{This paper is prepared for submission to the Jonathan P. Dowling memorial issue of AVS Quantum Science. I have tried to write it in a polemical but hopefully still fun style that Jon would have appreciated. Any implied criticisms in this paper are only directed at non-human members of the respective collaborations. I am the last person who wants to discourage work in theoretical linear optics by any sort of intelligence(s)!}
\section{Introduction}
Recently the world was introduced to {\texttt{PYTHEUS}}\xspace \cite{PYTHEUS1,PYTHEUS2}. Like his homophonic namesake, {\texttt{PYTHEUS}}\xspace is an explorer, who in just a few short years has already made new scientific discoveries\cite{PYTHEUS2}!
{\texttt{PYTHEUS}}\xspace is an artificial intelligence who cannot (yet) embark on physical voyages, and so his discoveries at present focus on finding linear-optical circuits (interferometers) that take squeezed-light inputs and optimize for creating targeted output photonic states. Now this is a task that I personally am pretty good at. And frankly, while half the world is justifiably panicked about losing their jobs to an AI, I would be absolutely delighted to have this aspect of my research usurped.
Moreover, superhuman chess-playing AI's have not destroyed the game. On the contrary, humans enjoy learning new strategies from them. So part of me wondered if I could learn something from {\texttt{PYTHEUS}}\xspace. But another part of me is just competitive - is {\texttt{PYTHEUS}}\xspace really the Deep Blue of quantum optics? Am I a Kasparaov? Carlsen? Perhaps together {\texttt{PYTHEUS}}\xspace and I could at least be a Niemann?
I decided to pick one example of a discovery made by {\texttt{PYTHEUS}}\xspace, and see if I could re-discover it for myself. But before turning to my concrete attempt at such, let me overview the basis for some of my skepticism that I will be able to hand off my job to an AI any time soon.
I strongly suspect {\texttt{PYTHEUS}}\xspace is relying on the ingenuity of his human collaborators far more than he lets on. Here are a few examples of things in which I have participated that I consider at least mildly interesting ``scientific discoveries'' of the last couple of years. They all involve only understanding scattering of photons through interferometers ({\texttt{PYTHEUS}}\xspace' claimed area of expertise) but I do not believe {\texttt{PYTHEUS}}\xspace has any chance (in his current state of mind) of emulating them without considerable assistance from the humans he is shackled to: \begin{itemize} \item From four single photons a dual-rail Bell state can be created with probability $2/3$, which is significantly greater than $1/4$, the presumed upper bound confirmed by more than a decade's worth of numerical searches (see Section V of Ref.~\onlinecite{ssgpaper} - therein termed ``bleeding'', could there be any stronger way of rubbing it in?!?). \item Bell state measurement vs Bell basis measurement? A useful linear-optical POVM that projects onto dual-rail Bell states can be performed with higher probability than a projective Bell basis measurement. (Section VIIC of Ref.~\onlinecite{ssgpaper}). \item It is possible, with only polynomial growth in the number of modes, to do linear-optical quantum computing using no coherent switch at all, only a ``blocking'' (i.e. absorbing/incoherent) switch\cite{tezmultirail}. Moreover this can be done using only probabilisitic single photon sources. This shows multiple core assumptions about what is necessary for photonic quantum computing are simply false. \item One can deterministically create photonic entanglement which mimics the structure of first quantization using second quantized states\cite{aliens} (``third quantization''), and this enables serious things like hiding into thermal light correlations an alien civilization's distributed quantum computations, as well as mad possibilities like simulating indistinguishable particles that can nevertheless be individually accessed, or performing purely W-state driven measurement-based quantum computing. \end{itemize}
Each of those examples requires one to think outside of standard paradigms, despite being only simple photonic interferometry. The first requires realizing there can be immense power in protocols that have intermediate adapativity and do not terminate in a fixed depth of interferometer. The second requires understanding that projection onto non-orthogonal Bell states is just as useful for almost everything in quantum information as projection into a (partial) Bell basis - and then finding a better-performing, ancilla assisted effective POVM for such a measurement. The third requires realizing that coherent erasure of information can be arranged to underpin all linear optical quantum computing. The fourth requires...? I guess it requires you to be somewhat crazy and desperate to push the ultimate limits of single photon entanglement to try and understand something that photons cannot do, only to realize that nope, they actually can do it! I suspect {\texttt{PYTHEUS}}\xspace could discover some technical aspects of all the above if well guided, but not off his own bat.
A final reason I feel I'm not going to be out of the race with {\texttt{PYTHEUS}}\xspace any time soon is that simplicity is crucial for practical, large-scale photonic engineering. Unless care is taken to reduce experimental complexity, it is easy to end up needing $O(n^2)$ components to implement an $n$ mode experiment, and such should typically be avoided if possible. Seems clear that considerations of ease and robustness will help us poor humans remain in the race a little longer - the best experiments are the ones as dumb as their designers!
\section{A concrete challenge}
Many of the experiments {\texttt{PYTHEUS}}\xspace has been thinking about are for postselected state generation. Personally I find this very uninteresting, because (i) modern quantum technologies need, at a minimum, heralded state generation and (ii) the Reeh-Schlieder theorem already tells me I can postselect an elephant out of the vacuum with a suitable measurement. So as a challenge I decided to focus on {\texttt{PYTHEUS}}\xspace's solution for the heralded generation of a superposition of vacuum and four single photons - i.e. single-rail (approximate) 4-GHZ state - given in Fig.~4 of Ref.~\onlinecite{PYTHEUS2}.
To actually convert the graph of Fig.~4(b) to a standard interferometric setup I enlisted a (purportedly human) assistant Jake Bulmer, who reported back\footnote{\href{https://github.com/jakeffbulmer/terry\_vs\_ai/blob/main/jakes\_translations\_for\_terry\_vs\_AI.ipynb}{\texttt{https://github.com/jakeffbulmer/terry\_vs\_ai/blob/main/ jakes\_translations\_for\_terry\_vs\_AI.ipynb}}} that {\texttt{PYTHEUS}}\xspace claims all we need to do is take 6 single mode squeezed state sources, choosing the specific values for the squeezing parameters of: $r = [0.9350, 0.9350, 0.7849, 0.7849, 0.3403, 0.3403]$ along with two vacuum mode ancillae, and feed them all through the interferometer of Fig.~\ref{fig:PYTHEUSunitary}. After this, heralding single photons on modes $3,4,5,6$ will create the (Fock basis) state $|0000\rangle+\sqrt{\epsilon} |1111\rangle$ in modes $1,2,7,8$, where $\epsilon\approx 1\%$. The probability of the scheme succeeding is also about $1\%$. By changing a scaling factor in the solution we can increase $\epsilon$ but the success probability goes down even further.
\begin{figure}\label{fig:PYTHEUSunitary}
\end{figure}
Problematically, Jake noticed (and I confirmed with Matlab) that it appears there are also small amplitude terms in the output state of the form $|2110\rangle$ and $|0112\rangle$. Are these real, or just an artifact of poor numerical precision??
As you can see we have already encountered a problem many humans are having in their conversations with famous AI's like AlphaZero, DALL-E and so on: they are very good at popping out magical and impressively complicated looking results, but do so without providing any clear explanation as to what is actually going on in their machinations.\footnote{Those of us used to doing standard dumb numerical optimizations, for which impregnable numerical output is of course the norm, are almost tempted to claim that perhaps these AI's are convincing their human friends to totally exaggerate how smart and earth-shattering they are. But I digress.}
Now before going further let me point out that even if correct, the usefulness of this specific problem for real-world challenges such as building a photonic quantum computer is debatable. Firstly, making direct use of states with coherence between different photon numbers is problematic, mainly because loss is the main error in quantum optics and it becomes no longer heraldable. Dephasing between the subspaces of different total photon number is often also an issue. Secondly, heralding entanglement generation at such low success probabilities means that to use it fruitfully the whole setup will need to be \emph{multiplexed}, i.e. repeated many (hundreds) of times - spatially or temporally - and then very large suitable optical switches somehow employed to select a success.
We can greatly ameliorate this latter issue by breaking any heralded state generation protocol into as small a set of heraldable chunks as possible, and multiplexing those separately/sequentially (e.g. see section VI of Ref.~\onlinecite{ssgpaper} for an example, therein termed ``primates'', aptly named in homage of its creators!). But if we are only given a numerical solution of the form ``complicated states in, big interferometer, big mess out, detect'' then determining how to do any such intermediate multiplexing itself becomes a completely new challenge.
In the following I set out to both confirm and understand {\texttt{PYTHEUS}}\xspace's discovery of his ``new actionable concept in physics'', hoping to break it down to pieces which are more conducive to the shallow-learning that meat-based intelligences are capable of. As such I try and explain my thought processes, including erroneous steps I made.
\section{First thoughts}
Jake's decomposition of {\texttt{PYTHEUS}}\xspace's solution uses single mode squeezing. But the squeezing parameters come in pairs, and doing a balanced beamsplitter on two identical single-mode squeezed states produces a two-mode squeezed state (TMSS) of the form \begin{equation}
|S\rangle=s_0|00\rangle+s_1|11\rangle+s_2|22\rangle+s_3|33\rangle+\ldots \end{equation}
Thus one might hope to instead start with three TMSSs which is conceptually easier for me. Even simpler would be if we only had to interfere the idler modes. But there is no way that taking any number of TMSSs, interfering only the idlers then detecting them all can create a coherent superposition between vacuum and any other number state (the same total photon number measured on the idlers must output on the signals).
\begin{figure}
\caption{ Interferometric circuit for which detection of 2 photons yields an output that has coherence only between the vacuum and 4-photon subspaces. $U$ is arbitrary. Note this is not a qubit circuit, here lines are modes connected via interferometers. The vertical line with black dots is a 50:50 beamsplitter which creates $\fket{20}-\fket{02}$ from the $\fket{11}$ part of the input. See Ref.~ \onlinecite{ssgpaper} for more on this notation.}
\label{fig:int1}
\end{figure}
Hmm, what about interfering signals and idlers independently, then detecting some of the signals and some of the idlers? This is more optimistic, but it does seem like a strong restriction (only being able to interfere the signals with each other and the idlers with each other). Ok, well perhaps I can step back and just convince myself of one simpler thing: \emph{ Conjecture: to create a superposition between a vacuum and any state of 4 photons, with no terms containing $1$ or $2$ or $3$ or $\ge5$ photons, one must begin with at least three TMSS's.} I smacked my head on proving just this for a while, it is so obviously true, but to no avail.
\section{Tez's idiocy revealed}
One of my primary stumbling points was the difference between trying to make ``a superposition of vacuum and four photons, plus some (error) terms of more than four photons'' and ``a superposition containing \emph{only} terms of vacuum and four photons''. The former is much easier - for example if you take a single photon and a TMSS you can use the HOM effect to strip out the $|11\rangle$ term from the squeezed state. But you will still have higher order errors ($|33\rangle$ terms etc.)
At some point I suddenly realized I'm being an idiot. Take a general TMSS input into an interferometric circuit like Fig.~\ref{fig:int1}. If you detect exactly two photons then either they both came from the bunched single photons initially in modes 3,4, or they both came from the TMSS. Either way, none of the terms $\fket{nn}$ of $\ket{S}$, with $n=1,3,4,\ldots$ can herald the detection pattern, and so we do have a state with \emph{exactly} vacuum and four photons (though not the desired $\fket{1111}$ state).
From here we can readily get to the slightly more complicated circuit of Fig.~\ref{fig:int2} where $R=\begin{bmatrix} \sqrt{a} & \sqrt{1-a} \\ \sqrt{1-a} &-\sqrt{a}\end{bmatrix}$ and $\ket{\chi}=\sqrt{b} \fket{20}-\sqrt{1-b}\fket{02}.$
Consider we herald on detecting two single photons. Again the output is a superposition of either vacuum or 4 photons. A calculation simple enough even for Maple (a program nobody would mistake for intelligent) tells us that the amplitude for vacuum at the output is $$x_{0000}=-\tfrac{s_0}{\sqrt{2}}(1-a)\sqrt{b}.$$ Defining $\beta_{\pm}=-s_2(\sqrt{1-b}\pm a\sqrt{b})/4$ Maple says the amplitude for output $\fket{1111}$ is $$x_{1111}=\sqrt{2}\beta_+.$$ Moreover the only other nonzero amplitudes are: $$
-x_{1 1 0 2}=
-x_{1 1 2 0}=
x_{0 2 0 2}=
x_{0 2 2 0}=
x_{2 0 0 2}=
x_{2 0 2 0}=\beta_- $$ and $$
x_{0 2 1 1}=
x_{2 0 1 1}=-\beta_+. $$
We see that by choosing $\beta_-=0$, i.e. taking $a^2=b^{-1}-1$ (and $b\in[1/2,1]$) we interfere away the amplitudes of $\fket{1120}$ and $\fket{1102}$ as well as a bunch of other undesirable looking terms. It seems we end up with unavoidable amplitude in $\fket{0211}$ and $\fket{2011}$. These are the exact same error terms as Matlab claims for {\texttt{PYTHEUS}}\xspace's solution. Hmm.
The success probability is $$ P_{succ}=s_0^2(\tfrac{1}{2}-\sqrt{b(1-b)})+s_2^2(1-b).$$ Choosing optimal squeezing for single photon generation ($s_0^2=1/2$, $s_1^2=1/4$, $s_2^2=1/8$) we can find a value of $b$ for which $\epsilon=x_{1111}^2/P_{succ}= 1\%$ to match {\texttt{PYTHEUS}}\xspace. However my $P_{succ}$ will be much higher, because my circuit makes use of a ``pre-given'' input state $\ket{\chi}$ which is a potentially unbalanced superposition of $\fket{20}$ and $\fket{02}$. This reiterates a point from earlier - this smaller initial state could be created offline and then multiplexed.
\begin{figure}
\caption{ Interferometric circuit for which detection of 2 photons yields an output that has coherence only between the vacuum and 4-photon subspaces, but now with 4-mode output that is much closer to the target state. $R$, $\ket{\chi}$ are given in the text.}
\label{fig:int2}
\end{figure}
How could $\ket{\chi}$ be created? The obvious method is from two TMSS's create single photons, use a 50:50 beamsplitter to create the balanced superposition $\fket{20}+\fket{02}$ and then probabilistically ``damp'' one term by coupling one mode to a vacuum ancilla with a beamsplitter of suitable transmittivity and herald on vacuum in the ancilla. Less obvious is to use two TMSS's and interfere the heralds with vacuum ancillae modes, before selecting 2 photon events. Interestingly, for $b>1/2$ this latter procedure is more efficient than the obvious one.
Note that regardless of how I produce $\ket{\chi}$ I will be using 3 TMSS's total, which is equivalent to {\texttt{PYTHEUS}}\xspace's method in terms of initial Gaussian resources. But the $\sqrt{2}$ factor difference between the amplitudes of $x_{1111}$ and the error term amplitudes $x_{0211}$, $x_{2011}$ shows that by adding in a suitable 50:50 beamsplitter the output state of my solution will actually be of the form \begin{equation} \ket{\Psi_{tez}}=\sqrt{c}\ket{0000}+\sqrt{1-c}\ket{2110}. \end{equation} This leads me to admit: \begin{quotation} {\noindent If it is true that {\texttt{PYTHEUS}}\xspace's solution is not equivalent (via linear-optical interferometer) to a state of this form then he wins the match!} \end{quotation}
I suspect, but am not sure, my solution not only incorporates {\texttt{PYTHEUS}}\xspace's as a special case, it is also objectively superior because mine makes clear both how to herald states with a wider variety of amplitudes and success probabilities, as well as to see the options for intermediate multiplexing\footnote{Look {\texttt{PYTHEUS}}\xspace, if you \emph{have} made an error then be a bit of a (human) gentleman about it and simply insist such was your intention all along. You can justify why we'd be interested in this state as follows: Imagine we have the ability to easily create states of the form $\fket{000}+\sqrt{\epsilon}\fket{k11}$ (alt. $\fket{00}+\sqrt{\epsilon}\fket{k2}$) for any $k\ge 1$. Then from $N\approx 1/\epsilon$ copies it is possible with probability $>1/e$ to herald creation of a state which is convertible (via linear optics) into a maximally entangled qu\emph{d}it state, in a nice and robust $d$-rail encoding, with $d=N$ (alt. $d=N/2$). This is a lot of entanglement, created with high probability! Can you see how? If so please apply to [email protected].}. I would like to say the jury is still out, but then I should probably channel Jon Dowling: \begin{quotation}{\noindent NO MATTER WHAT YOU SAY I DECLARE MYSELF THE ONLY ALLOWED CONTESTANT, THE SOLE JUDGE AND THE SUPREME WINNER OF THE MATCH!}\end{quotation}
[Aside: The truth is I can immediately think of many ways of heralding, from Gaussian sources, a superposition comprising strictly only vacuum or the state $|1111\rangle$. For example, with two copies of $|\Psi_{tez}\rangle$ you could simply put the first mode from each on a 50:50 beamsplitter and herald on detecting 2 photons. But its all much easier once you understand that using a TMSS in a quantum scissors protocol \cite{scissors} allows one to asymmetrically chop off terms with $>1$ photons in any mode while simultaneously doing the (Fock basis) bit flip $\ket{0} \leftrightarrow \ket{1}$. Note, however, that these schemes would all use more than the $6$ single-mode squeezed states of {\texttt{PYTHEUS}}\xspace's claimed solution.]
\section{Musings}
For reasons stated earlier I am not necessarily interested in the particular example considered here per se. But I am very interested in whether {\texttt{PYTHEUS}}\xspace or his friends can help move the needle in the quest to build a quantum computer. It seems to me the main way this could happen is for us to teach {\texttt{PYTHEUS}}\xspace about the primary sources of error in photonic quantum computing (imperfect multimode sources, losses, manufacturing imperfections in passive interferometers and so on), and then ask him to think about the most robust methods of creating the (highly constrained) types of photonic entanglement which can actually be used for fault tolerant computation.
\end{document} | arXiv |
\begin{document}
\title{Recycling qubits for the generation of Bell nonlocality between independent sequential observers}
\author{Shuming Cheng} \affiliation{The Department of Control Science and Engineering, Tongji University, Shanghai 201804, China} \affiliation{Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 201804, China} \affiliation{Institute for Advanced Study, Tongji University, Shanghai, 200092, China}
\author{Lijun Liu} \affiliation{College of Mathematics and Computer Science, Shanxi Normal University, Linfen 041000, China}
\author{Travis J. Baker} \affiliation{Centre for Quantum Computation and Communication Technology (Australian Research Council),
Centre for Quantum Dynamics, Griffith University, Brisbane, QLD 4111, Australia}
\author{Michael J. W. Hall}
\affiliation{Department of Theoretical Physics, Research School of Physics, Australian National University, Canberra ACT 0200, Australia}
\date{\today}
\begin{abstract}
There is currently much interest in the recycling of entangled systems, for use in quantum information protocols by sequential observers. In this work, we study the sequential generation of Bell nonlocality via recycling one or both components of two-qubit states. We first give a description of two-valued qubit measurements in terms of measurement bias, strength, and reversibility, and derive useful tradeoff relations between them. Then, we derive one-sided monogamy relations for unbiased observables, that strengthen the recent Conjecture in [S. Cheng {\it et al.}, Phys. Rev. A \textbf{104}, L060201 (2021) ] that if the first pair of observers violate Bell nonlocality then a subsequent independent pair cannot, and give semi-analytic results for the best possible monogamy relation. We also extend the construction in [P. J. Brown and R. Colbeck, Phys. Rev. Lett. \textbf{125}, 090401 (2020)] to obtain (i)~a broader class of two-qubit states that allow the recycling of one qubit by a given number of observers on one side, and (ii)~a scheme for generating Bell nonlocality between arbitrarily many independent observers on each side, via the two-sided recycling of multiqubit states. Our results are based on a formalism that is applicable to more general problems in recycling entanglement, and hence is expected to aid progress in this field.
\end{abstract}
\maketitle
\section{Introduction}\label{Sec1: Introduction}
Quantum entanglement is not only fundamental to understanding quantum mechanics, but also is an indispensable resource in various information tasks, such as quantum teleportation~\cite{Bennett93} and secure quantum key distribution~\cite{Ekert91}. Hence, it is of importance to study how to efficiently use this entanglement resource. Recently, the possibility that entanglement from the same source can be recycled multiple times, by sequential pairs of independent observers, has been shown by Silva {\it et al.}~\cite{Silva15}. This has attracted great interest both theoretically~\cite{Mal16,Curchod17,Tavakoli18,Bera18,Sasmal18,Shenoy19,Das19,Saha19,Kumari19,Brown20,Maity20,Bowles20,Roy20,Zhang21,Cheng21} and experimentally~\cite{Schiavon17,Hu18,Choi20,Foletto20,Feng20,Foletto21,Jie21}.
As illustrated in Fig.~\ref{fig:fig1}, recycling entangled systems typically requires a first pair of observers, Alice~1 and Bob~1, passing their measured systems onto a second pair of observers, Alice~2 and Bob~2, who then pass them onto a third pair, etc. In this scenario, sufficient entanglement can remain, following each measurement, to allow multiple pairs of observers to sequentially implement quantum information protocols such as quantum key distribution~\cite{Ekert91,Ekert14} and randomness generation~\cite{Pironio10,Foletto21}.
In this work, we consider the problem of recycling entangled systems to generate sequential sharing of Bell nonlocality~\cite{Silva15}. For example, if an observer $A$ on one side chooses between two-valued measurements $X$ or $X'$ at random, and an observer $B$ on the other side similarly chooses between $Y$ or $Y'$, then Bell nonlocality can be revealed from the joint measurement statistics via the Clauser-Horne-Shimony-Holt (CHSH) parameter~\cite{Clauser69,Brunner14} \begin{equation} \label{chsh} S(A,B):= \langle XY\rangle+\langle XY'\rangle+\langle X'Y\rangle - \langle X'Y'\rangle
\end{equation} with $\langle XY\rangle=\sum_{x, y} x y\, p(x, y|X,Y)$, where $x, y\in \{-1, 1\}$ label the corresponding outcomes of measurements $X, Y$. Violation of the Bell-CHSH inequality $S(A,B)\leq2$ certifies the sharing of Bell nonlocality between these two observers.
Notably, it was shown by Brown and Colbeck that, given a pair of entangled qubits, a single observer is able to share Bell nonlocality with each one of an arbitrarily long sequence of independent observers on the other side~\cite{Brown20}. Surprisingly, however, we have recently found strong analytic and numerical evidence that, under the same assumptions considered in~\cite{Brown20}, it is impossible to recycle {\it both} qubits such that Bell nonlocality is shared between sequential pairs of observers on each side~\cite{Cheng21}. In particular, the evidence supports the conjecture that observers Alice~1 and Bob~1 in Fig.~\ref{fig:fig1} can violate a CHSH inequality only if Alice~2 and Bob~2 cannot, and similarly for the pairs (Alice~1, Bob~2) and (Alice~2, Bob~1). This restriction of qubit recycling to one side may be viewed as a type of sharing monogamy, and we have given corresponding one-sided monogamy relations that are valid for large classes of states and measurements~\cite{Cheng21}.
\begin{figure*}
\caption{Sequential sharing with multiple observers on each side. A source S generates two qubits on each run, which are received by observers Alice~1 and Bob~1 ($A_1$ and $B_1$ in the main text). They each make one of two local measurements on their qubit with equal probabilities; record their result; and pass their qubit onto independent observers Alice~2 and Bob~2, respectively ($A_2$ and $B_2$ in the main text). It is known that Alice~1 can demonstrate Bell nonlocality with each of an arbitrary number of Bobs in this way, by recycling the second qubit~\cite{Brown20}. However, we have recently given strong analytic and numerical evidence for the conjecture that Alice~1 and Bob~1 can demonstrate Bell nonlocality in this manner only if Alice~2 and Bob~2 cannot, and vice versa~\cite{Cheng21}. A similar result holds for the pairs (Alice~1, Bob~2) and (Alice~2, Bob~1)).
}
\label{fig:fig1}
\end{figure*}
In this paper we continue to investigate the sequential sharing of Bell nonlocality via recycling the components of entangled systems, and in particular generalise several results in~\cite{Brown20} and~\cite{Cheng21}. We start with a brief review in Sec.~\ref{Sec2. Qunit Observables} on the characterisation of general two-valued qubit observables introduced in~\cite{Cheng21}, and give a measurement model that provides a simple interpretation of the strength and bias of such observables. In Sec.~\ref{Sec4. Measurements Reversibility}, we give a general formalism for describing sequential scenarios, based on quantum instruments~\cite{Davies70,Ozawa84}, and review the optimal reversibility properties of square-root measurements in this context. We also give natural definitions of the maximum reversibility and minimum decoherence of a qubit observable; tradeoff relations between these quantities and the strength and bias of the observable; and connections with the class of weak measurements considered by Silva {\it et al.}~\cite{Silva15}.
The above results provide the tools needed in Sec.~\ref{sec:monog} for obtaining several one-sided monogamy relations (only proved for a special case in~\cite{Cheng21}), for the sequential generation of Bell nonlocality via measurements of unbiased observables. We also give numerical evidence that even stronger monogamy relations hold for this case, and obtain semi-analytic forms for the best possible such relation. In Sec.~\ref{Sec6. One-sided} we apply our tools to scenarios in which recycling is possible for arbitrary numbers of observers. First, if the source is not restricted to generation and measurement of a single qubit pair we show, by generalising the construction by Brown and Colbeck in~\cite{Brown20}, that Bell nonlocality can be sequentially generated between arbitrarily many observers on each side, via recycling multiqubit states. Second, a different generalisation of the Brown-Colbeck construction yields a larger class of two-qubit states for which a single Alice can share Bell nonlocality with a given number of Bobs.
Conclusions are given in Sec.~\ref{Sec8. Conclusions}.
\section{Two-valued qubit observables} \label{Sec2. Qunit Observables}
In this section, we recap the description of general two-valued qubit observables given in~\cite{Cheng21} (see also~\cite{Yu2010}), and note several important properties for later use. A simple measurement model for such observables is also noted.
\subsection{Strength and bias}\label{subsec2.1}
We can label the outcomes of a general two-valued observable $X$ by $\pm 1$. It is then described by a positive operator valued measure (POVM) $\{X_+,X_-\}$, with $X_\pm\geq 0$, $X_++X_-=\mathbbm{1}$, and probability distribution $p_\pm=\tr{\rho X_\pm}$. The observable is equivalently represented by the operator $X= X_+ - X_-$, where $X_\pm=\half(\mathbbm{1}\pm X)$, $-\mathbbm{1}\leq X\leq \mathbbm{1}$ and $\langle X\rangle=p_+-p_-$. This representation is particularly useful for the purposes of the CHSH parameter~(\ref{chsh}), as the expectation value of the product of two such observables $X$ and $Y$, acting on respective components of a quantum system, is given by \begin{align}
\langle XY\rangle :&= \sum_{x,y=\pm 1} xy\,p(x,y|X, Y) \nonumber\\ &= \sum_{x,y=\pm1} xy\langle X_x\otimes Y_y\rangle \nonumber\\ & = \sum_{x,y=\pm1} xy\left\langle \frac{1+xX}{2}\otimes\frac{1+yY}{2}\right\rangle\nonumber\\ & = \langle X\otimes Y\rangle . \label{prodxy} \end{align}
For a qubit, the operator $X$ can be decomposed as \begin{equation} X = {\mathcal B} \mathbbm{1} + \mathcal{S}\bm \sigma\cdot\bm x \label{observable}
\end{equation} with respect to the Pauli spin operator basis $\boldsymbol{\sigma}\equiv (\sigma_1, \sigma_2, \sigma_3)$. Here ${\mathcal B}$ defines the {\it outcome bias} of the observable; $\mathcal{S}\geq0$ denotes its {\it strength}~\cite{Shenoy19} or sharpness~\cite{Yu2010,Choudhary13,Kunjwal14} (and is also called its information gain~\cite{Silva15}); and $\bm x$ is a unit direction associated with the observable, with $|\bm x|:=(\bm x \cdot \bm x)^{1/2}=1$. The requirement $-\mathbbm{1}\leq X\leq \mathbbm{1}$ is equivalent to the constraint \begin{equation} \label{sbcon}
\mathcal{S} + |{\mathcal B}| \leq 1 \end{equation} on strength and bias.
It follows for the case of maximum strength, $\mathcal{S}=1$, that the bias must vanish, i.e., $X=\bm\sigma\cdot\bm x$ is the projective observable corresponding to spin in direction $\bm x$. Conversely, a minimum strength, $\mathcal{S}=0$, corresponds to the trivial observable $X={\mathcal B}\mathbbm{1}$, equivalent to tossing a two-sided coin with biased outcome probabilities $p_\pm=\half(1\pm{\mathcal B})$ and average outcome $\langle X\rangle={\mathcal B}$.
For later purposes it is useful to note that any two-qubit state $\rho$ can be parameterised in the compact form \begin{equation} \label{bloch} \rho=\frac14\sum_{\mu,\nu=0}^4 \Theta_{\mu\nu} \, \sigma_\mu\otimes\sigma_\nu,~~\Theta:=\begin{pmatrix} 1 & \bm b\\ \bm a^\top & T \end{pmatrix} , \end{equation} with $\sigma_0=\mathbbm{1}$. Here $\bm a:=\langle \bm \sigma\otimes \mathbbm{1}\rangle$ and $\bm b:=\langle \mathbbm{1}\otimes \bm \sigma\rangle$ refer to Alice's and Bob's Bloch vectors respectively, and $T:=\langle \bm\sigma\otimes\bm\sigma^\top\rangle$ is the spin correlation matrix. Consequently, Eq.~(\ref{prodxy}) can be rewritten as \begin{equation} \label{prodxy2} \langle XY\rangle = \begin{pmatrix} {\mathcal B}_X& \mathcal{S}_X \bm x^\top\end{pmatrix} \begin{pmatrix} 1 & \bm b^\top\\ \bm a & T \end{pmatrix} \begin{pmatrix} {\mathcal B}_Y\\ \mathcal{S}_Y \bm y \end{pmatrix} , \end{equation} for $X={\mathcal B}_X\mathbbm{1}+\mathcal{S}_X\bm\sigma\cdot\bm x$ and $Y={\mathcal B}_Y\mathbbm{1}+\mathcal{S}_Y\bm\sigma\cdot\bm y$. In the case of projective observables $X=\bm \sigma\cdot \bm x$ and $Y=\bm\sigma\cdot \bm y$, the right hand side simplifies to the familiar expression $\bm x^\top T\bm y$. In contrast, for trivial observables $X=Y=\mathbbm{1}$ the right hand side simplifies to 1, implying that a corresponding CHSH parameter of $S(A,B)=2$ in Eq.~(\ref{chsh}) can be always be obtained via trivial observables.
\subsection{A simple measurement model}\label{subsec2.2}
A simple interpretation of a general qubit observable $X$ as a noisy projective observable, with the strength, bias and post-measurement state having correspondingly simple interpretations, is as follows.
In particular, suppose that one either (i)~measures the projective observable $\bm\sigma\cdot\bm x$, with a `success' probability $\mathcal{S}$, or (ii) otherwise assigns outcomes $\pm 1$ by flipping a coin having biased outcome probabilities $q_\pm=\half(1\pm\epsilon)$, with probability $1-\mathcal{S}$. The resulting measurement statistics are therefore generated by the POVM elements \begin{equation} X_\pm = \mathcal{S} \frac{\mathbbm{1}\pm\bm\sigma\cdot\bm x}{2} + (1-\mathcal{S})\frac{1\pm\epsilon}{2}\mathbbm{1} .
\end{equation} This corresponds to the observable \begin{equation} X = X_+-X_- = \epsilon (1-\mathcal{S})\mathbbm{1} + \mathcal{S}\bm\sigma\cdot\bm x ,
\end{equation} having strength $\mathcal{S}$ and bias ${\mathcal B}=\epsilon(1-\mathcal{S})$. Note that constraint~(\ref{sbcon}) is equivalent to the property $|\epsilon|\leq1$.
There are many different ways to measure a given observable, and the post-measurement state depends on the measurement details (see Sec.~\ref{subsec4.1}). However, it is of interest to consider the post-measurement state for the simple implementation above if the projective measurement is assumed to leave the qubit in the corresponding eigenstate of $\bm \sigma\cdot\bm x$, while the coin flip leaves the qubit unchanged. It follows that if the state prior to the measurement is described by density operator $\rho$, then the post-measurement state is described by \begin{align} \label{simplemeas} \rho_{\mathcal{S}}&=\mathcal{S}\left( \p{x}\rho\p{x}+\p{-x}\rho\p{-x}\right) + (1-\mathcal{S})\rho \nonumber\\ &= \p{x}\rho\p{x}+\p{-x}\rho\p{-x} +(1-\mathcal{S})\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right) \end{align} where $\p{x}:=\half(\mathbbm{1}+\bm\sigma\cdot\bm x)$. Thus, the diagonal elements of the state with respect to the $\bm\sigma\cdot\bm x$ basis are unchanged, while the off-diagonal elements are scaled by a factor $1-\mathcal{S}$, implying that the latter provides a measure of the reversibility of the measurement. We will see in Sec.~\ref{subsec4.3}, however, that measurement implementations having larger degrees of reversibility are possible for all $0<\mathcal{S}<1$.
\section{Measurements and reversibility} \label{Sec4. Measurements Reversibility}
In the context of sequential generation of entanglement, as depicted in Fig.~\ref{fig:fig1}, the effect of a measurement on the subsequent state of a quantum system is critical. In this section we first consider the general form of the post-measurement states, and then focus on the special case of square-root measurements. The latter have been previously argued to correspond to the maximally reversible measurements of any given observable. For two-valued qubit observables this leads to natural measures of maximum reversibility and minimum decoherence, which are completely determined by the strength and bias of the observable.
\subsection{General considerations}\label{subsec4.1}
We briefly review measurements on general quantum systems here, and specialise to the case of qubit systems in the following subsections.
A measurement on a quantum system described by density operator $\rho$, with outcome $x$, will leave the system in some corresponding state $\rho_x$ with probability $p_x$ (we do not restrict to $x=\pm1$ here). The most general description of such a measurement is an {\it instrument}~\cite{Davies70,Ozawa84}, i.e., a set of completely positive (CP) maps, $\{\phi_x\}$, satisfying $\phi_x(\rho) = p_x \rho_x$. Taking the trace of each side then yields \begin{equation} p_x = \tr{\phi_x(\rho) }=\tr{\mathbbm{1}\phi_x(\rho)} = \tr{\phi^*_x(\mathbbm{1})\rho} , \end{equation} where $\chi^*$ denotes the dual map of linear map $\chi$ (defined via $\tr{A\chi(B)}= \tr{\chi^*(A)B}$ for all $A,B$). It follows that the observable measured by the instrument is described by the POVM $\{X_x\}$ with \begin{equation} X_x := \phi^*_x(\mathbbm{1}). \end{equation} Moreover, the density operator describing an ensemble of such systems after measurement is given by the completely-positive trace-preserving (CPTP) map \begin{equation} \label{cptp} \phi(\rho) := \sum_x p_x \rho_x = \sum_x \phi_x(\rho). \end{equation} In the simplest case, \begin{equation} \label{kraus} \phi_x(\rho) = M_x \rho M_x^\dagger,\qquad X_x = M^\dagger_x M_x , \end{equation} for a set of `measurement operators' $\{M_x\}$, so that each post-measurement state $\rho_x$ is pure when $\rho$ is pure. More generally, however, each $\phi_x(\rho)$ is a sum of such terms.
If a local measurement described by the CPTP map $\phi$ is made on the first component of an ensemble of bipartite quantum systems described by $\rho$, it follows that the post-measurement state of the ensemble is given by $\rho'=(\phi\otimes I)(\rho)$. For the purpose of explicit calculations, it is convenient to choose trace-orthogonal basis sets $\{\tilde\sigma_\alpha\}$ and $\{\tilde\tau_\mu\}$ for the Hermitian operators of the first and second components, respectively, such that $\tr{\tilde\sigma_\alpha\tilde\sigma_\beta}=c\delta_{\alpha\beta}$ and $\tr{\tilde\tau_\mu\tilde\tau_\nu}=d\delta_{\mu\nu}$ for two constants $c$ and $d$. This gives the generalised Bloch representation $\rho=(cd)^{-1}\sum_{\alpha\mu} \tilde\Theta_{\alpha\mu}\tilde\sigma_\alpha\otimes\tilde\sigma_\mu$, with $\tilde\Theta_{\alpha\mu}:=\langle \tilde\sigma_\alpha\otimes\tilde\sigma_\mu\rangle$, generalising Eq.~(\ref{bloch}). Hence, a local measurement on the first component of the system, taking $\rho$ to $\rho'=(\phi\otimes I)(\rho)$ with $\phi$ as in Eq.~(\ref{cptp}), takes $\tilde\Theta$ to $\tilde\Theta'$ with \begin{align}
\tilde\Theta'_{\alpha\mu}
&= \tr{(\phi\otimes I)(\rho)\,\tilde\sigma_\alpha\otimes\tilde\tau_\mu}\nonumber\\
&= \frac{1}{cd} \sum_{\beta,\nu}\tilde\Theta_{\beta\nu} \tr{(\phi\otimes I)(\tilde\sigma_\beta\otimes \tilde\tau_\nu)\,\tilde\sigma_\alpha\otimes\tilde\tau_\mu}\nonumber\\
&=\frac{1}{cd} \sum_{\beta,\nu}\tilde\Theta_{\beta\nu} \tr{\phi(\tilde\sigma_\beta)\tilde\sigma_\alpha \otimes \tilde\tau_\nu\tilde\tau_\mu}\nonumber\\
&=\frac{1}{cd} \sum_{\beta,\nu}\tilde\Theta_{\beta\nu} \tr{\phi(\tilde\sigma_\beta)\tilde\sigma_\alpha }\,\tr{ \tilde\tau_\nu\tilde\tau_\mu}\nonumber\\
&= \frac{1}{c} \sum_\beta \tilde\Theta_{\beta\mu} \tr{\phi(\tilde\sigma_\beta)\tilde\sigma_\alpha }.
\label{thetaprime} \end{align} Thus, \begin{equation} \label{ktheta} \tilde\Theta' = \mathcal{K}\tilde\Theta, \qquad\mathcal{K}_{\alpha\beta}:= c^{-1}\tr{\tilde\sigma_\alpha\phi(\tilde\sigma_\beta)}. \end{equation} and the effect of the measurement on an ensemble corresponds to left multiplication of $\tilde\Theta$ by the matrix $\mathcal{K}$. More generally, if local measurements described by CPTP maps $\phi$ and $\chi$ are made on the first and second components, respectively, then the corresponding state $\rho''=(\phi\otimes\chi)(\rho)$ corresponds to transforming $\tilde\Theta$ to \begin{equation} \label{kthetal} \tilde\Theta''= \mathcal{K} \tilde\Theta \mathcal{L}^\top,\qquad \mathcal{L}_{\mu\nu}:= d^{-1}\tr{\tilde\tau_\mu\color{blue} \chi \color{black} (\tilde\tau_\nu)}, \end{equation} similarly to Eq.~(\ref{ktheta}) above. This easily generalises to sequences of local measurements on each side (see Sec.~\ref{sec:arbab}).
\subsection{Square-root measurements and\\ maximum reversibility}\label{subsec4.2}
There are many possible ways of measuring a general POVM observable $\{X_x\}$. For example, the square-root measurement corresponds to the instrument $\{\phi_x\}$ defined by $\phi_x(\rho):=X_x^{1/2}\rho X_x^{1/2}$, with corresponding CPTP map \begin{equation} \label{sqrt} \phi_{1/2}(\rho)=\sum_x\phi_x(\rho)=\sum_x X_x^{1/2}\rho X_x^{1/2} . \end{equation} This example is of fundamental interest, as {\it any} instrument $\{\phi^G_x\}$ describing a measurement of the POVM $\{X_x\}$ has the form $\phi^G_x=\psi_x\circ\phi_x$, for suitable CPTP maps $\psi_x$~\cite{Brown20}. Thus, any measurement of $X$ formally corresponds to first carrying out the square-root measurement, and then applying a quantum channel to the state depending on the result obtained.
It follows, noting that $\psi_x$ is reversible if and only if it is unitary, that a general measurement of $X$ can be no more reversible than a square-root measurement of $X$. Moreover, if the outcomes are not known (e.g., to a second observer who receives the qubit in a sequential scenario), then a general measurement can only be `reversed' to the square-root measurement if $\psi_x(\rho) = U\rho \,U^\dagger$ for some unitary transformation $U$. In this sense the square-root measurement is the maximally reversible measurement of $X$, up to a unitary transformation.
The above property of square-root measurements leads to a natural measure of the maximum reversibility for any measurement of a two-valued qubit observable. In particular, the action of a square-root measurement of $X\equiv\{X_+,X_-\}$ on a qubit in state $\rho$ may be calculated from Eqs.~(\ref{observable}) and~(\ref{sqrt}) as~\cite{Cheng21} \begin{equation} \phi_{1/2}(\rho) = \p{x}\rho\p{x} + \p{-x}\rho\p{-x}+ \mathcal R\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right), \label{rhoprime} \end{equation} where $\p{x}=\half(1+\bm\sigma\cdot\bm x)$ and the parameter $\mathcal R$ is given by \begin{equation} \mathcal R = \half\sqrt{(1+{\mathcal B})^2-\mathcal{S}^2} + \half\sqrt{(1-{\mathcal B})^2-\mathcal{S}^2}, \label{reversibility} \end{equation} Thus, $\mathcal R=0$ for projective measurements ($\mathcal{S}=1, {\mathcal B}=0$), and $\mathcal R=1$ for trivial measurements ($\mathcal{S}=0$). More generally, $\mathcal R$ scales the off-diagonal elements of $\rho$, making it a suitable measure of the reversibility of the square-root measurement. Accordingly, it is also a measure of the maximum reversibility of any measurement of $X$. We will often just refer to it as the reversibility in what follows.
The interpretation of $\mathcal R$ as a measure of maximum reversibility may also be more directly justified in some cases. For example, Silva {\it et al.} introduced a class of weak measurements of unbiased qubit observables, i.e., with ${\mathcal B}=0$ in Eq.~(\ref{observable}), for which the post-measurement state has the form~\cite{Silva15} \begin{equation} \rho_F:= \p{x}\rho\p{x} + \p{-x}\rho\p{-x}+ F\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right), \label{rhof} \end{equation} where $F$ is a `quality factor' that depends on the properties of the pointer state used in the measurement. Comparing Eqs.~(\ref{rhoprime}) and~(\ref{rhof}), it is seen that $F$ is a measure of the reversibility of such a weak measurement. However, as shown in~\cite{Cheng21}, \begin{equation} \label{frineq} F\leq \mathcal R, \end{equation} with equality holding for a set of optimal pointer states. Thus the reversiblity of such weak measurements is explicitly bounded above by the maximum reversiblity $\mathcal R$. It will be shown in the next subsection that $\mathcal R$ also explicitly upper bounds the reversibility of the simple measurement protocol in Eq.~(\ref{simplemeas}), for both biased and unbiased observables.
Moreover, as noted in~\cite{Cheng21}, if the measurement of a general qubit observable $X=\{X_+,X_-\}$ is implemented with Kraus operators $M_\pm$ as in Eq.~(\ref{kraus}, then the average state disturbance, as quantified by the fidelity $\mathcal{F}$ in~\cite{Banaszek01}, is upper bounded by \begin{equation} \mathcal{F}\leq (\mathcal R+2)/3,\label{fidelity} \end{equation} where the equality is saturated for the square-root measurement with $M_\pm=X_\pm^{1/2}$. Thus, there is a direct connection between maximum reversibilty and maximum fidelity for this case.
Finally, for the purposes of applying the Horodecki criterion~\cite{Horodecki95} to the post-measurement state of a two-qubit ensemble, we also need to determine how the spin matrix $T$ transforms under local measurements. For the case of a local measurement on the first qubit, one finds from Eqs.~(\ref{bloch}) and~(\ref{ktheta}) (with $\tilde\sigma_\alpha=\tilde\tau_\alpha=\sigma_\alpha$ and $c=d=2$) that $T$ is mapped to \begin{equation} \label{tprime} T'= \half\tr{\phi(\mathbbm{1})\bm \sigma} \bm b^\top + KT, \end{equation} where $K$ is the $3\times3$ matrix with coefficients $K_{jk}=\mathcal{K}_{jk}$. It follows that $T$ transforms most simply when the first term vanishes, i.e., when (i)~$\bm b=\bm 0$ or (ii)~$\phi(\mathbbm{1})=\mathbbm{1}$. Note that condition~(ii) is equivalent to the map $\phi$ being unital. More generally, if $\phi$ and $\chi$ are unital, or if the local states are maximally mixed, then Eq.~(\ref{kthetal}) yields the simple transformation law \begin{equation} \label{ktl} T'' = KTL^\top \end{equation} for the spin correlation matrix, following local measurements on each side, with $L_{jk}:=\mathcal{L}_{jk}$.
In particular, noting from Eq.~(\ref{sqrt}) that square-root measurements are unital, i.e., $\phi_{1/2}(\mathbbm{1})=\mathbbm{1}$, it follows that local square-root measurements of the qubit observables $X={\mathcal B}_X\mathbbm{1}+\mathcal{S}_X\bm\sigma\cdot\bm x$ and $Y={\mathcal B}_Y\mathbbm{1}+\mathcal{S}_Y\bm\sigma\cdot\bm y$ result in a spin correlation matrix of the form given in Eq.~{\ref{ktl}). Explicit calculation of $K$ and $L$, from Eq.~(\ref{rhoprime}) and $\mathcal K$ and $\mathcal L$ in Eqs.~(\ref{ktheta}) and~(\ref{kthetal}), then gives~\cite{Cheng21} \begin{equation} \label{txy} T^{XY}=K^X T K^Y,~~K^X:=\mathcal R_XI_3+(1-\mathcal R_X)\bm x\bm x^\top, \end{equation} where $I_3$ is the $3\times3$ identity matrix, and $\mathcal R_X, \mathcal R_Y$ label the maximum reversibilities of observables $X, Y$, respectively. This result will be used in obtaining the one-sided monogamy relations in Sec.~\ref{sec:monog}.
\subsection{Tradeoffs between strength, reversibility\\ and decoherence}\label{subsec4.3}
It has been shown previously that the strength and maximum reversibility of a qubit observable $X={\mathcal B}\mathbbm{1}+\mathcal{S}\bm \sigma\cdot\bm x$ satisfy the tradeoff relation~\cite{Cheng21} \begin{equation} \label{rssum} \mathcal R^2 + \mathcal{S}^2 \leq 1, \end{equation} with equality for the case ${\mathcal B}=0$. Thus, the greater the strength or sharpness of the observable, the less reversibly it can be measured, and vice versa. This tradeoff is closely related to the information-disturbance relation of Banaszek~\cite{Banaszek01}, for the case of qubit measurements~\cite{Cheng21}, and will be crucial to the derivation of one-sided monogamy relations in Sec.~\ref{sec:monog}.
Here we briefly note several further connections between strength, reversibility, bias and information-disturbance; compare the maximum reversibility to that of the simple measurement protocol in Eq.~(\ref{simplemeas}); and introduce a natural measure of the minimum decoherence of a qubit measurement.
First, we extend tradeoff relation~(\ref{rssum}) to the inequality chain \begin{equation} \label{chain} 1-\mathcal{S} \leq \mathcal R^2 \leq 1-\mathcal{S}^2. \end{equation} Thus, a given strength sets both upper and lower bounds on the maximum reversibility. To obtain these bounds, note first from Eq.~(\ref{reversibility}) that $\mathcal R=0$ is only possible if $\mathcal{S}=1\pm{\mathcal B}$, yielding in turn ${\mathcal B}=0$ and $\mathcal{S}=1$, and that $\mathcal R=1$ is only possible if $\mathcal{S}=0$, since otherwise Eq.~(\ref{reversibility}) gives $\mathcal R<\half(1+{\mathcal B})+\half(1-{\mathcal B})=1$. Hence, Eq.~(\ref{chain}) is certainly valid for $\mathcal R=0$ or 1. Moreover, for $0<\mathcal R<1$ it follows directly from Eq.~(\ref{reversibility}) that~\cite{Cheng21} \begin{equation} \mathcal R^2+\mathcal{S}^2 = 1 - {\mathcal B}^2(1/\mathcal R^2 - 1), \end{equation} immediately implying the right hand inequality of Eq.~(\ref{chain}). Further, rewriting the above equality as \begin{equation}
|{\mathcal B}|= \mathcal R\sqrt{1-\frac{\mathcal{S}^2}{1-\mathcal R^2}}, \end{equation} and substituting into Eq.~(\ref{reversibility}) gives \begin{equation} \mathcal R = \max\left\{ \mathcal R,\sqrt{1-\frac{\mathcal{S}^2}{1-\mathcal R^2}}\right\}, \end{equation} which immediately implies the left hand inequality of Eq.~(\ref{chain}).
The lower bound in Eq.~(\ref{chain}) has several applications. For example, it implies that the reversibility of the simple measurement protocol in Eq.~(\ref{simplemeas}), i.e., $\mathcal R_{\rm simp}=1-\mathcal{S}$, is always upper bounded by the maximum reversibility $\mathcal R$. In particular, we have \begin{equation} \label{rsimp} \mathcal R_{\rm simp}=1-\mathcal{S} \leq \sqrt{1-\mathcal{S}} \leq \mathcal R , \end{equation} with strict inequality for $0<\mathcal{S}<1$. This result also implies that the lower bound in Eq.~(\ref{chain}) is stronger than the `disturbance-reversibility' relation given in Theorem~2 of~\cite{Lee20}, for the case of qubit measurements, as the latter relation reduces to $\mathcal R+\mathcal{S}\leq1$ for this case. Lastly, combining constraint~(\ref{sbcon}) with the lower bound in Eq.~(\ref{chain}) gives \begin{equation}
|{\mathcal B}|\leq \mathcal R^2, \end{equation} i.e., the outcome bias sets a lower bound on the maximum reversibility of the measurement.
Noting that the maximum reversibility $\mathcal R$ in Eq.~(\ref{rhoprime}) scales the off-diagonal elements of the square-root measurement, it is natural to define a corresponding ``minimal decoherence" by~\cite{Cheng21} \begin{equation} {\cal D}= \rt{1 - \mathcal R^2}. \label{decoherence} \end{equation} Equation~(\ref{chain}) is then equivalent to \begin{equation} {\cal D} \geq \mathcal{S} \geq {\cal D}^2. \label{decoherence2} \end{equation} In particular, the minimal decoherence of any qubit measurement is at least as large as the strength of the observable being measured.
Finally, we note that the strength and bias of a given observable can be simply parameterised in terms of its maximum reversibility via \begin{equation}
\mathcal{S} = \sqrt{1-\mathcal R^2}\cos\alpha,~~~ {\mathcal B}=\mathcal R\sin\alpha,~~~ |\alpha|\leq \sin^{-1}\mathcal R, \end{equation} as can be checked by direct substitution into Eq.~(\ref{reversibility}). This parameterisation is useful for numerical searches over general observables, and for deriving further tradeoff relations such as the lower bound \begin{equation} \mathcal R^2+\mathcal{S}^2 = 1-(1-\mathcal R^2)\sin^2\alpha\geq 1-(1-\mathcal R^2)\mathcal R^2\geq\frac34 , \end{equation} complementary to Eq.~(\ref{rssum}).
\section{One-sided monogamy relations for unbiased observables} \label{sec:monog}
\subsection{Overview} \label{sec:overview}
We now consider the scenario in Fig.~\ref{fig:fig1}, in which observers $A_1$ and $B_1$ make measurements on a pair of entangled qubits, and pass them on to observers $A_2$ and $B_2$, respectively. As discussed in the Introduction, we have previously given strong support for the conjecture that, in the scenario where $A_1$ and $B_1$ each choose between two observables with equal probabilities, they can violate the CHSH inequality if and only if $A_2$ and $B_2$ cannot~\cite{Cheng21}. Here we numerically and analytically investigate this conjecture further, for the particular case of {\it unbiased} observables, including strengthening it and proving several one-sided monogamy relations for this case.
In the light of the discussion in Sec.~\ref{subsec4.2}, we limit our consideration to the scenario where $A_1$ and $B_1$ make square-root measurements of their observables, i.e., to maximally-reversible measurements~\cite{Brown20}. Hence, if $A_1$ measures either $X$ or $X'$ with equal probability, and $B_1$ measures $Y$ or $Y'$ with equal probability, it follows via Eq.~(\ref{txy}) that the spin correlation matrix $T$ of the initial shared state is transformed to $KTL$, where \begin{align} \label{kdef} K&:= \half(K^X+K^{X'}) \nonumber\\ &~= \frac{\mathcal R_X+\mathcal R_{X'}}{2}I_3 + \frac{1-\mathcal R_X}{2}\bm x\bm x^\top+ \frac{1-\mathcal R_{X'}}{2}\bm x'\bm x'^\top \end{align} and \begin{align} \label{ldef} L&:= \half(K^Y+K^{Y'}) \nonumber\\ &~= \frac{\mathcal R_Y+\mathcal R_{Y'}}{2}I_3 + \frac{1-\mathcal R_Y}{2}\bm y\bm y^\top+ \frac{1-\mathcal R_{Y'}}{2}\bm y'\bm y'^\top . \end{align} This transformation rule allows us to avoid having to explicitly optimise over the set of observables that can be measured by $A_2$ and $B_2$~\cite{Cheng21}. In particular, we can apply the Horodecki criterion to the post-measurement state~\cite{Horodecki95}, to conclude that $A_2$ and $B_2$ can violate the CHSH inequality if and only if \begin{equation} \label{sa2b2} S^*(A_2,B_2)=2\sqrt{s_1(KTL)^2+s_2(KTL)^2} >2 . \end{equation}
In previous work, a search over the possible values of $S(A_1,B_1)$ and $S^*(A_2,B_2)$ strongly supported the conjecture that the pairs $(A_1,B_1)$ and $(A_2,B_2)$ cannot both violate the CHSH inequality~\cite{Cheng21} (see also Fig.~\ref{fig:monog} below). It was further shown, analytically, that for the case of unbiased observables $X, X',Y,Y'$ satisfying the assumptions of equal strengths $\mathcal{S}_X=\mathcal{S}_{X'}, \mathcal{S}_Y=\mathcal{S}_{Y'}$ and orthogonal relative angles $\bm x\cdot \bm x'=0=\bm y \cdot \bm y'$ on each side, one has the one-sided monogamy relation~\cite{Cheng21} \begin{equation} \label{monogorthog}
|S(A_1,B_1)| + S^*(A_2,B_2) \leq \frac{8\sqrt{2}}{3} < 4 . \end{equation} It immediately follows from this relation that the quantities $S(A_1,B_1)$ and $S^*(A_2,B_2)$ cannot both be greater than 2, thus proving the conjecture under these assumptions. A numerical search and quadratic one-sided monogamy relations also supported a similar conjecture for the pairs $(A_1,B_2)$ and $(A_2,B_1)$~\cite{Cheng21}, but the latter case will not be considered further here.
In the remainder of this Section we will strengthen the above results in several ways. First, we will give numerical evidence for the following. {\flushleft \bf Conjecture for unbiased observables: } {\it For arbitrary unbiased observables $X,X'$ and $Y,Y'$, that are independently measured by $A_1$ and $B_1$ with equal respective probabilities, the one-sided monogamy relation \begin{equation} \label{unbiasedmonog}
|S(A_1,B_1)| + S^*(A_2,B_2) \leq 4 \end{equation} is always satisfied.}\\ This conjecture immediately implies that the pairs $(A_1,B_1)$ and $(A_2,B_2)$ cannot both violate the CHSH inequality, for any measurements of unbiased observables by the first pair, without any assumptions on their strengths and relative angles. The numerical evidence further shows that stronger one-sided monogamy relations must exist, and we obtain some semi-analytic results for the form of the optimal such relation.
We will also analytically support the above conjecture, by (i)~proving that the monogamy relation~(\ref{unbiasedmonog}) holds for all unbiased observables with equal strengths on each side (irrespective of the relative angles); and (ii)~proving that the stronger monogamy relation~(\ref{monogorthog}) holds for all unbiased observables with orthogonal relative angles on each side (irrespective of their strengths).
\begin{figure}
\caption{ One-sided monogamy relations for biased and unbiased observables. For general biased observables, a global optimisation algorithm was employed in~\cite{Cheng21} to determine the joint range of possible values of $S(A_1, B_1)$ and $S^*(A_2, B_2)$, the results of which are reproduced here as the blue dots. When restricted to unbiased observables, new numerical results are displayed as the solid orange curve, which lies strictly below the dashed red line corresponding to the conjectured monogamy relation~(\ref{unbiasedmonog}), and which completely coincides with the blue dots for the case $|S(A_1,B_1)|\geq2$. We have further verified that the solid orange curve can be achieved by measuring unbiased observables on a singlet state. The detailed form of the solid orange curve is discussed in Sec.~\ref{sec:semi}.
}
\label{fig:monog}
\end{figure}
\subsection{Numerical evidence for the conjecture} \label{sec:num}
The joint range of achievable values of $S(A_1,B_1)$ and $S^*(A_2,B_2)$ can be numerically determined via a global numerical optimisation algorithm, as described in~\cite{Cheng21}. The algorithm searches over all two-valued observables $X, X',Y,Y'$ that can be measured by $A_1$ and $B_1$, and over all pure states (convexity implies that only pure states need be considered), i.e., over a total of 17 free parameters~\cite{Cheng21}. More precisely, a differential evolution optimizer is implemented to seek solutions to the problem \begin{equation} \begin{split} \max\limits_{\alpha,X,X',Y,Y'} & S^*(A_2,B_2) \\ \text{s.t.} \qquad& S(A_1,B_1) = s, \end{split} \label{eq:optimization_problem} \end{equation} for fixed values of $s\in[0,2\sqrt{2}]$. The codes used in the simulations reported here are freely available at~\cite{paper_codes}.
The results of this search are reproduced here as the blue dots in Fig.~\ref{fig:monog}, which plots the numerically-determined maximum value of $S^*(A_2,B_2)$ for each possible value of $S(A_1,B_1)$ (only a subset of points is plotted, for ease of viewing).
It is seen that $S^*(A_2,B_2)$ can reach the maximum value of $2\sqrt{2}$, and thus allow the pair $(A_2,B_2)$ to violate the CHSH inequality, for any value $|S(A_1,B_1)|\leq 2$ (e.g., via $A_1$ and $B_1$ making non-disturbing trivial measurements $X=X'=Y=Y'={\mathcal B} \mathbbm{1}$ on a singlet state and $A_2$ and $B_2$ making the optimal CHSH measurements). In contrast, for $|S(A_1,B_1)|>2$, i.e., when the pair $(A_1,B_1)$ can violate the CHSH inequality, the results show that $S^*(A_2,B_2)$ is strictly less than 2. Thus, the dotted blue curve confirms the general conjecture in~\cite{Cheng21} that it is impossible for both pairs to violate the CHSH inequality.
Figure~\ref{fig:monog} also presents new numerical results, for the case where $(A_1,B_1)$ are restricted to measurements of {\it unbiased} observables, corresponding to the solid orange curve. These results were generated by the same method described above but with the biases of the observables set equal to zero. Noting that the dashed red line in Fig.~\ref{fig:monog} corresponds to equality in Eq.~(\ref{unbiasedmonog}), these numerical results therefore strongly support the above conjecture that the one-sided monogamy relation~(\ref{unbiasedmonog}) holds for measurements of unbiased observables. We have also numerically verified that for this case the same solid orange curve is obtained under the restriction to a singlet state, in agreement with the reasoning given~\cite{Cheng21}, and that it is also obtained under a further restriction of measurement directions to the equatorial plane.
It is of interest that the dotted blue and solid orange curves are the same, up to numerical error, for any given violation of the CHSH inequality by $A_1$ and $B_1$, i.e., the corresponding maximum possible value of $S^*(A_2,B_2)$ can be achieved even if $A_1$ and $B_1$ are restricted to measure unbiased observables.
\subsection{Semi-analytic optimal monogamy relations} \label{sec:semi}
The numerical results depicted in Fig.~\ref{fig:monog} support the conjectured one-sided monogamy relation in Eq.~(\ref{unbiasedmonog}), but they also indicate that this relation is not optimal. In particular, an optimal monogamy relation for unbiased observables would reproduce the numerically-generated orange boundary curve in Fig~\ref{fig:monog}. It is therefore of interest to probe the numerical results more closely, to gain information about the possible analytic form of this curve. Some success in this direction is achieved below for several portions of the boundary curve, and we refer to the results, guided by both numerical and analytic analysis, as `semi-analytic' monogamy relations. Strictly analytic but less general relations will be derived in Sec.~\ref{sec:analytic}.
To find suitable ansatzes for the optimal measurement strengths and directions that generate the orange boundary curve in Fig.~\ref{fig:monog}, we begin from the observation in Sec.~\ref{sec:num} that the same curve is numerically generated under the restrictions that (i) the initially-shared state is a singlet state ($T=-I_3$) and (ii) $A_1$ and $B_1$'s observables are confined to the equatorial plane of the Bloch sphere (henceforth setting the $z$-component of all measurement Bloch vectors to zero). Noting that the singlet state is rotationally invariant, we choose $\bm x = (0,1,0)^\top$ without loss of generality.
The numerical results indicate that the optimal measurement parameters under the above restrictions take different forms in three piecewise regions of the orange boundary curve in Fig.~\ref{fig:monog}, given by $|S(A_1,B_1)|\leq 2$, $2<|S(A_1,B_1)| \lesssim 2.72$, and $|S(A_1,B_1)|\gtrsim 2.72$. These regions are therefore considered in turn below.
\begin{figure}
\caption{
Semi-analytic monogamy relations for unbiased observables. The orange solid curve reproduces the numerically-generated optimal curve in Fig.~\ref{fig:monog}. The black dotted and dashed curves match the optimal curve in the regions $|S(A_1,B_1)\in[0,2]$ and $|S(A_1,B_1)\gtrsim2.72$ respectively. These curves are defined in Eqs.~\eqref{eq:case1curve} and \eqref{eq:equal_stengths_orthogonal}, found by numerically motivated ansatzes for the optimal observables measured by $A_1$ and $B_1$ (see main text). }
\label{fig:semianalytics}
\end{figure}
\subsubsection{First region of the optimal boundary curve}
For the first region, i.e., $|S(A_1,B_1)|\leq 2$, the numerical results suggest the ansatz $\mathcal{S}_{X'}=0, \mathcal{S}_{Y}=\mathcal{S}_{Y'}$ and $\bm x = \bm x' = \bm y = \bm y'=(0,1,0)^\top$ for the measurement parameters that generate corresponding section of the orange curve in Fig.~\ref{fig:monog}. This implies via Eqs.~(\ref{kdef}) and~(\ref{ldef}) that $KTL$ is diagonal, with largest singular values \begin{align} s_1(KTL) &= 1 ,~ s_2(KTL) = \frac{1}{2} \left(1+\sqrt{1-\mathcal{S}_{X}^2}\right) \sqrt{1-\mathcal{S}_{Y}^2}. \label{eq:second_singular_val} \end{align} Now, since $S^*(A_2,B_2)$ in Eq.~\eqref{sa2b2} is an increasing function of $s_2(KTL)$, we therefore seek to find the strengths $\mathcal{S}_{X}$ and $\mathcal{S}_{Y}$ which maximize the latter. To this end, noting that $S(A_1,B_1) = 2\mathcal{S}_X\mathcal{S}_Y$ under our ansatz, we introduce the Lagrangian objective function \begin{align} \mathcal{L}(\mathcal{S}_{X},\mathcal{S}_{Y},\xi) &:= \frac{1}{2} \left(1+\sqrt{1-\mathcal{S}_{X}^2}\right)\rt{\left(1-\mathcal{S}_{Y}^2\right)} \nonumber\\ &- \xi(2\mathcal{S}_X\mathcal{S}_Y-s) \end{align} where $\xi$ is a Lagrange multiplier. The stationary points of the Lagrangian, $\nabla\mathcal{L} =0$, occur when \begin{align} 4\xi\mathcal{S}_Y+ \frac{\mathcal{S}_X\sqrt{1-\mathcal{S}_Y^2}}{\sqrt{1-\mathcal{S}_X^2}} &= 0 \\ 4\xi\mathcal{S}_X+ \frac{\mathcal{S}_Y(1+\sqrt{1-\mathcal{S}_X^2})}{\sqrt{1-\mathcal{S}_Y^2}} &= 0 \\ 2\mathcal{S}_X\mathcal{S}_Y &= s \end{align}
Rewriting the first two of the Lagrangian equations in terms of the maximum reversibilities $\mathcal R_X=\sqrt{1-\mathcal{S}_X^2}$ and $\mathcal R_Y=\sqrt{1-\mathcal{S}_Y^2}$ and equating $\xi$ in these equations yields $\mathcal R_Y=\sqrt{\mathcal R_X}$, and the corresponding CHSH parameters evaluate to \begin{align} S(A_1,B_1) &= 2(1-\mathcal R_X)\sqrt{1+\mathcal R_X}, \\ S^*(A_2,B_2) &= \sqrt{4+(1+\mathcal R_X)^2\mathcal R_X} , \end{align}
in terms of $\mathcal R_X$. Plotting the parameters as $\mathcal R_X$ varies over $[0,1]$ then gives the black dotted curve in Fig.~\ref{fig:semianalytics}, which is seen to perfectly match the orange boundary curve of Fig.~\ref{fig:monog}, up to numerical error, for the region $|S(A_1,B_1)|\leq 2$. We hence propose these expressions parameterise the exact form of the boundary curve for this region.
The Lagrangian equations can also be solved algebraically by Mathematica, to give the optimal value of $S^*(A_2,B_2)$ as an explicit function of $s:=|S(A_1,B_1)|$. In particular, if $h(s)$ denotes the smallest real root of the cubic polynomial \begin{align}
p(z) &:= \left(s^2-4\right)z^3 - \left(3 s^2-16\right)z^2 + \left(3 s^2-16\right)z +s^2 , \end{align} then the solution for $\mathcal{S}_Y$ can be written as the square root of the largest root of the quadratic polynomial \begin{equation}
q(z) :=4 h(s) z^2 + s^2[1-h(s)]z - s^2. \end{equation} Denoting this solution by $\mathcal{S}_Y(s)$ (which we do not give explicitly in terms of $s$ here, due to its complicated form), and using $\mathcal{S}_X = s/(2\mathcal{S}_Y)$ and Eq.~\eqref{eq:second_singular_val}, we arrive at the function \begin{equation}
S^*(A_2,B_2) = \sqrt{ 4 + \frac{1 - \mathcal{S}_Y(s)^2}{4} \left(2+\sqrt{4-\frac{s^2}{\mathcal{S}_Y(s)^2}} \right) }
\label{eq:case1curve} \end{equation} directly relating the two CHSH parameters. This function again corresponds to the black dotted line in Fig.~\ref{fig:semianalytics}, and replacing equality by $\leq$ yields our proposed optimal one-sided monogamy relation for this region.
\subsubsection{Second region of the optimal boundary curve}
For the intermediate region $2<|S(A_1,B_1)|\lesssim 2.72$, the numerical results indicate that the orange boundary curve in Fig.~\ref{fig:monog}, i.e., the solution to Eq.~\eqref{eq:optimization_problem}, occurs when $\mathcal{S}_{Y} = \mathcal{S}_{Y'}$, $B_1$'s measurement directions are determined by a single angle $\theta$ via $\bm y = (\sin\theta,\cos\theta,0)^\top, \bm y'= (-\sin\theta,\cos\theta,0)^\top$ and $A_1$'s measurement directions are determined by \emph{either} choosing $\bm x'= (1,0,0)^\top$ or $\bm x'= (\sin2\theta,\cos2\theta,0)^\top$. However, although this ansatz for the measurement parameters reduces the optimization problem to only $4$ unknown variables, $\mathcal{S}_X, \mathcal{S}_{X'}, \mathcal{S}_Y$ and $\theta$, we have not been able to solve it by the same methods as the previous case to obtain an explicit form for the orange boundary curve in Fig.~\ref{fig:monog} in this region.
\subsubsection{Third region of the optimal boundary curve}
For the final region of the orange curve, $2.72 \lesssim |S(A_1,B_1)| \leq 2\sqrt{2}$, the numerics indicate that $S^*(A_2,B_2)$ obtains its extreme values when $A_1$ and $B_1$ measure observables of equal strength with orthogonal relative angles, corresponding to the ansatz $\mathcal{S}_X = \mathcal{S}_{X'} = \mathcal{S}_Y = \mathcal{S}_{Y'} =:\mathcal{S}$ and $\bm x = (0,1,0)^\top, \bm x' = (1,0,0)^\top, \bm y = 2^{-1/2}(1,1,0)^\top, \bm y' = 2^{-1/2}(-1,1,0)^\top$ (note these directions are the optimal CHSH directions for projective measurements~\cite{Brunner14}). Under this ansatz the two largest singular values of $KTL$ are identical, and evaluate to $\frac{1}{4} \left(\mathcal{S}^2-2 \left(\sqrt{1-\mathcal{S}^2}+1\right)\right)$, so that \begin{equation} S^*(A_2,B_2) = \frac{\mathcal{S}^2-2 \left(1+\sqrt{1-\mathcal{S}^2}\right)}{\sqrt{2}}, \end{equation} which depends on only $\mathcal{S}$, which is uniquely determined by the constraint in Eq.~\eqref{eq:optimization_problem}, whence $S(A_1,B_1)=4\mathcal{S}^2/\sqrt{2}$. Upon rearranging and substituting, we find \begin{equation} S^*(A_2,B_2) = \sqrt{2}-\frac{S(A_1,B_1)}{4}+\sqrt{2-\frac{S(A_1,B_1)}{\sqrt{2}}} . \label{eq:equal_stengths_orthogonal} \end{equation}
This is plotted as the black dashed curve in Fig.~\ref{fig:semianalytics}, and is seen to be indistinguishable from the numerically-generated orange optimal curve in Fig.~\ref{fig:monog} for this region (and is also a good approximation to the optimal curve for small values of $|S(A_1,B_1)|$). Hence, replacing equality by $\leq$ yields our proposed optimal one-sided monogamy relation for this region. Note that since this region of the orange curve matches the optimal curve for the general case of biased observables, Eq.~(\ref{eq:equal_stengths_orthogonal}) also applies to the general case.
Finally, it is straightforward to show analytically that the proposed form in Eq.~\eqref{eq:equal_stengths_orthogonal} is indeed optimal for the case $S(A_1,B_1)=2\sqrt{2}$, i.e., that the maximum possible value of $S^*(A_2,B_2)$ for this case is $1/\sqrt{2}$. In particular, it is known that it is possible to obtain a value of $S(A_1,B_1)=2\sqrt{2}$ for a two-qubit state only if the state is maximally entangled and if projective measurements having orthogonal relative angles are made on each side~\cite{Tsirelson80,McKague12}. Hence, $s_1(T)=s_2(T)=1$, the reversibilities vanish, and $K=L={\rm diag}[\half,\half,0]$ via Eqs.~(\ref{txy}), (\ref{kdef}) and~(\ref{ldef}), and substituting in Eq.~(\ref{sa2b2}) then gives $S^*(A_2,B_2)=2[s_1(\frac14 T)^2+s_2(\frac14 T)^2]^{1/2}=1/\sqrt{2}$ as claimed.
This analytic result also shows the conjectured one-sided monogamy relation~(\ref{unbiasedmonog}) cannot be strengthened to the form $|S(A_1, B_1)|^d+S^*(A_2, B_2)^d\leq (2\rt{2})^d$ for $d\geq 1.76$. In particular, we need the point $(2,2)$ on or below any bounding curve (to imply the conjecture), implying the upper bound for $x^d+y^d$ can be no greater than $2^d+2^d=2^{d+1}$. Hence, to ensure the point $(2\sqrt{2},1/\sqrt{2})$ is also on or below the curve, we require $(2\sqrt{2})^d+(1/\sqrt{2})^d\leq 2^{d+1}$, which gives $d\lesssim 1.758$.
\subsection{Analytic monogamy relations} \label{sec:analytic}
Here we prove the one-sided monogamy relations~(\ref{monogorthog}) and~(\ref{unbiasedmonog}) for unbiased observables, for the respective cases of orthogonal directions and equal strengths on each side. We begin by showing that it is sufficient to prove them for the singlet state.
\subsubsection{Only the singlet state need be considered}
Note first, for unbiased observables, that $S(A_1,B_1)$ is linear in the spin correlation matrix $T$ of the initial state, while Eq.~(\ref{sa2b2}) can be rewritten as
\begin{equation} S^*(A_2,B_2) = 2\| KTL\|^{(2)}_{(2)},
\end{equation} where $\|M\|^{(p)}_{(q)}:=[\sum_{j=1}^q s_j(M)^p]^{1/p}$ is the singular-value matrix norm defined as per Eq.~(IV.19) of~\cite{Bhatia97}. Second, any spin correlation matrix $T$ can be represented as a convex mixture, $T=\sum_k p_k T_k$, of at most four spin correlation matrices of maximally entangled states (corresponding to the four Bell states defined by the local bases in which $T$ is diagonal)~\cite{Horodecki96}. Hence, indicating the dependence on $T$ explicitly and noting that the triangle inequality holds for absolute values and norms, we have \begin{align}
&|S(A_1,B_1|T)|+S^*(A_2,B_2|T)\nonumber\\
&~~~~~\leq \sum_k p_k\left[|S(A_1,B_1|T_k)| + S^{*}(A_2,B_2|T_k)\right]\nonumber\\
&~~~~~\leq \max_{T_{\rm me}} \left[|S(A_1,B_1|T_{\rm me})| + S^{*}(A_2,B_2|T_{\rm me})\right]\nonumber\\
&~~~~~\leq \max_{X,X',Y,Y'} \left[|S(A_1,B_1|T_0)| + S^{*}(A_2,B_2|T_0)\right] . \label{singletmax} \end{align} Here, the maximum in the third line is over the spin correlation matrices of maximally entangled two-qubit states; $T_0:=-I_3$ is the spin correlation matrix of the singlet state; and the maximum in the last line is over the (compact) set of unbiased observables. The last line follows since all maximally entangled states differ from the singlet state only by local rotations, implying that maximising over a rotationally-invariant set of local observables (such as the set of unbiased observables), for a given maximally entangled state, is equivalent to maximising over the same set for the singlet state. It follows that only the singlet state need be considered for the purposes of proving the monogamy relations, as claimed.
\subsubsection{Upper bounds for the CHSH parameters}
The final ingredients required for deriving our analytic monogamy relations are upper bounds for the CHSH parameters $S(A_1,B_1|T_0)$ and $S(A_2,B_2|T_0)$ appearing in Eq.~(\ref{singletmax}), for the case of unbiased observables.
First, for zero bias observables $X,X',Y,Y'$ with fixed strengths $\mathcal{S}_X, \mathcal{S}_{X'}, \mathcal{S}_Y,\mathcal{S}_{Y'}$ and relative measurement angles $\cos\theta=\bm x\cdot\bm x', \cos\phi=\bm y\cdot\bm y'$, measured on a singlet state, we have the tight upper bound \begin{align}
|S(A_1,B_1|T_0)| \leq S_0,
\label{iplus}
\end{align} with \begin{align} (S_0)^2&:= (\mathcal{S}_{X}^2+\mathcal{S}_{X'}^2)(\mathcal{S}_{Y}^2+\mathcal{S}_{Y'}^2) \nonumber\\
&~~+2\mathcal{S}_X\mathcal{S}_{X'}(\mathcal{S}_{Y}^2-\mathcal{S}_{Y'}^2)\cos\theta \nonumber\\
&~~+2\mathcal{S}_Y\mathcal{S}_{Y'}(\mathcal{S}_{X}^2-\mathcal{S}_{X'}^2)\cos\phi \nonumber\\
&~~+4\mathcal{S}_{X}\mathcal{S}_{X'}\mathcal{S}_{Y}\mathcal{S}_{Y'} \sin\theta\sin\phi .
\label{iw}
\end{align}} This upper bound is proved in Appendix~\ref{appa}, and will be generalised elsewhere~\cite{Cheng21b}. Note that it simplifies to the maximum quantum value of $2\sqrt{2}$ for the case of unit strengths and orthogonal measurement directions.
Second, from Eq.~(\ref{sa2b2}) above, noting $T_0=-I_3$, we have \begin{align}
S^{*}(A_2,B_2|T_0) &= 2\sqrt{s_1(KL)^2+s_2(KL)^2}\nonumber\\ & \leq 2\sqrt{s_1(K)^2s_1(L)^2+s_2(K)^2s_2(L)^2} , \label{bhatia}
\end{align} where the last line follows from Theorem~IV.2.5 of~\cite{Bhatia97}. Our strategy for obtaining analytic one-sided monogamy relations is to find upper bounds for the sums of these inequalities, that are independent of $X, X', Y,Y'$, under suitable assumptions.
\subsubsection{Monogamy for orthogonal directions on each side}
We now have the tools to prove the following result. {\flushleft \bf Theorem 1:} {\it For square root measurements of arbitrary unbiased observables $X,X'$ and $Y,Y'$, made by $A_1$ and $B_1$ with equal respective probabilities, with orthogonal angles $\bm x\cdot\bm x'=0=\bm y\cdot\bm y'$ on each side, the one-sided monogamy relation
\begin{equation} \label{thm3}
|S(A_1,B_1)| + S^*(A_2,B_2) \leq \frac{8\sqrt{2}}{3} \sim 3.77
\end{equation}
is always satisfied.}
This theorem strengthens the result proved in~\cite{Cheng21}, which required a further assumption of equal strengths on each side, and supports the Conjecture for unbiased observables in Sec.~\ref{sec:overview}. We outline its proof below, with the details left to Appendix~\ref{appb1}.
First, from Eq.~(\ref{iw}) and the orthogonality assumption, \begin{align}
S_0^2 &= (\mathcal{S}_{X}^2+\mathcal{S}_{X'}^2)(\mathcal{S}_{Y}^2+\mathcal{S}_{Y'}^2) +4\mathcal{S}_{X}\mathcal{S}_{X'}\mathcal{S}_{Y}\mathcal{S}_{Y'} \nonumber\\
& \leq 2(\mathcal{S}_{X}^2+\mathcal{S}_{X'}^2)(\mathcal{S}_{Y}^2+\mathcal{S}_{Y'}^2) \nonumber\\
&= 2(2-\mathcal R_X^2 - \mathcal R_{X'}^2) (2-\mathcal R_Y^2 - \mathcal R_{Y'}^2) , \label{thm3first} \end{align} where the inequality follows using $ab\leq \half(a^2+b^2)$ and the last line using the identity $\mathcal{S}^2= 1-\mathcal R^2$ for unbiased observables as per Eq.~(\ref{reversibility}).
Second, again under the orthogonality assumption, the singular values of the matrices $K$ and $L$ in Eqs.~(\ref{kdef}) and~(\ref{ldef}) can be calculated (see Appendix~\ref{appb1}), and Eq.~(\ref{bhatia}) applied to give \begin{align}
S^*(A_2,B_2|T_0)^2&\leq \frac14 (1+\mathcal R_X)^2(1+\mathcal R_{Y})^2 \nonumber\\
&~~+ \frac14 (1+\mathcal R_{X'})^2(1+\mathcal R_{Y'})^2 .
\label{thm3second} \end{align}
Finally, Eqs.~(\ref{singletmax}), (\ref{iplus}), (\ref{thm3first}) and~(\ref{thm3second}) may be shown to lead to (see Appendix~\ref{appb1}) \begin{align}
&|S(A_1, B_1)| + S^*(A_2, B_2)\nonumber \\
&~~~\leq \max_{x, y\in [0,1]} \rt{2}(2-x^2-y^2)+\half\rt{(1+x)^4+(1+y)^4}\nonumber\\
&~~~=8\sqrt{2}/3 ,
\label{thm3third} \end{align} as claimed. Note that this bound coincides with the one derived in~\cite{Cheng21}, which requires the equal strength assumption. Hence, the above inequality is achieved for $\mathcal{S}_X=\mathcal{S}_Y =2\sqrt{2}/3\sim 0.943, \mathcal R_X=\mathcal R_Y=1/3$, and the optimal CHSH directions.
\subsubsection{Monogamy for equal strengths on each side}
We can also use the above tools to prove a further one-sided monogamy relation. {\flushleft \bf Theorem 2:} {\it For square root measurements of arbitrary unbiased observables $X,X'$ and $Y,Y'$, made by $A_1$ and $B_1$ with equal respective probabilities, with equal strengths $\mathcal{S}_X=\mathcal{S}_{X'}$ and $\mathcal{S}_Y=\mathcal{S}_{Y'}$ on each side, the one-sided monogamy relation
\begin{equation} \label{thm4}
|S(A_1,B_1)| + S^*(A_2,B_2) \leq 4
\end{equation}
is always satisfied.}
This theorem similarly strengthens the result proved in~\cite{Cheng21}, which required a further assumption of orthogonal directions on each side, and again supports the Conjecture for unbiased observables in Sec.~\ref{sec:overview}. Its proof is outlined below, with details given in Appendix~\ref{appb1}.
First, it follows from Eq.~(\ref{iw}) and the equal strengths assumption that \begin{align}
S_0^2 &= 4\mathcal{S}_X^2\mathcal{S}_Y^2 (1+ \sin\theta\,\sin\phi)\nonumber\\
&\leq 4\mathcal{S}_{X}^2\mathcal{S}_{Y}^2\rt{(1+\sin^2\theta)(1+\sin^2\phi)} \nonumber \\
&=4(1-\mathcal R_X^2)(1-\mathcal R_Y)^2\rt{(2-c_X)(2-c_Y)},
\label{thm4first} \end{align} where the second line follows via the Schwarz inequality for the vectors $(1,\sin\theta)$, $(1,\sin\phi)$, and the third line using $\mathcal R^2=1-\mathcal{S}^2$ for unbiased observables as per Eq.~(\ref{reversibility}) and defining $c_X:=\cos^2 \theta$, $c_Y:=\cos^2 \phi$.
Second, again under the equal strengths assumption, the singular values of the matrices $K$ and $L$ in Eqs.~(\ref{kdef}) and~(\ref{ldef}) can be calculated (see Appendix~\ref{appb2}), and Eq.~(\ref{bhatia}) applied to give \begin{align}
2 S(A_2,B_2|T_0)^2
&= \left[(1+\mathcal R_X)^2+(1-\mathcal R_X)^2 \cos^2 \theta \right] \nonumber\\
&\qquad \times \left[(1+\mathcal R_Y)^2+(1-\mathcal R_Y)^2 \cos^2 \phi \right] \nonumber\\
& ~~+ 4[(1-\mathcal R_X^2) |\!\cos\theta|]\,[(1-\mathcal R_Y^2)|\!\cos \phi|] .
\label{thm4second} \end{align}
Finally, Eqs.~(\ref{singletmax}), (\ref{iplus}), (\ref{thm4first}) and~(\ref{thm4second}) may be shown to lead to (see Appendix~\ref{appb2}) \begin{align}
&|S(A_1,B_1)|+S^*(A_2,B_2)\leq 2\max_{x, c \in [0,1]}\left[\sqrt{2-c}\,(1-x^2) \right. \nonumber\\ &\qquad\qquad+\left. \sqrt{ [(1+x)^2+(1-x)^2c]^2 + 4 (1-x^2)^2c}/\rt{8}\right] \nonumber\\
&\qquad\qquad\qquad~~\qquad\qquad\leq 4,
\label{thm4third} \end{align} as claimed in Theorem~2. Particularly, it follows from Eqs.~(\ref{thm4first}) and (\ref{thm4second}) the first inequality is achieved by $A_1$ and $B_2$ performing measurements with the same strength and relative angle, i.e., $\mathcal{S}_{X}=\mathcal{S}_{Y}$ and $\sin\theta=\sin\phi$, while the second is further saturated with $x=0$ and $c=1$, or equivalently, $\mathcal{S}_{X}=1$ and $\sin\theta=0$, implying that parallel directions and zero reversibilities are optimal for this case.
\subsubsection{Generalisation to a class of weak measurements}
The above analytic monogamy relations are proved for the case of square-root measurements. However, it is expected, from the argument given in Sec.~\ref{subsec4.2} (see also~\cite{Brown20}), that they also hold for arbitrary measurements, similarly to the conjectured monogamy relation~(\ref{unbiasedmonog}) for unbiased observables. In this regard, it is worth noting that Theorems~1 and~2 indeed hold for the class of weak measurements introduced by Silva {\it et al.}~\cite{Silva15}.
In particular, for this class of measurements the reversibilities $\mathcal R_X, \mathcal R_{X'},\mathcal R_Y,\mathcal R_{Y'}$ of the post-measurement state are replaced by corresponding `quality factors' $F_X, F_{X'},F_Y,F_{Y'}$ as per Eq.~(\ref{rhof}). Further, inequalities~(\ref{thm3first}) and~(\ref{thm4first}) remain valid under this replacement, since $\mathcal{S}^2=1-\mathcal R^2\leq1-F^2$ as per Eq.~(\ref{frineq}), and the proofs of the theorems then follow exactly as for the case of square-root measurements.
Likewise, noting Eq.~(\ref{rsimp}), Theorems~1 and~2 also hold for the case of unbiased observables measured as per the simple measurement protocol in Eq.~(\ref{simplemeas}). Similar generalisations apply to the one-sided monogamy relations in~\cite{Cheng21}.
\section{Qubit recycling for multiple observers}\label{Sec6. One-sided}
In this section, we use the techniques developed in Sec.~\ref{Sec4. Measurements Reversibility} to study the problem of generating Bell nonlocality between multiple pairs of independent observers. We show that this is possible for the case of multiple observers on both sides, if they share sufficiently many pairs of qubits, via a simple extension of a construction by Brown and Colbeck for the case of a single Alice and many Bobs~\cite{Brown20}. We also give an alternative extension of this construction that allows a single Alice to generate Bell nonlocality with many Bobs for a larger class of single two-qubit states.
\subsection{Arbitrarily many Alices and Bobs} \label{sec:arbab}
Assume now that there are $M$ Alices on one side and $N$ Bobs on the other in Fig.~\ref{fig:fig1}, where each observer independently chooses between a set of two or more measurements to make on their component of a general bipartite state $\rho$. We will denote the $m$-th Alice and the $n$-th Bob by $A_m$ and $B_n$, respectively.
It follows that if $\phi_m$ and $\chi_n$ are the CPTP maps describing the effect of local measurements made by each $A_m$ and $B_n$, on an ensemble initially described by state $\rho$, then the post-measurement state of the ensemble shared by $A_m$ and $B_n$ is given by \begin{equation} \rho''=(\phi_m\circ\dots\phi_2\circ\phi_1)\otimes (\chi_n\circ\dots \chi_2\circ\chi_1)(\rho). \end{equation} Using the notation developed in Sec.~\ref{subsec4.1}, this post-measurement state is equivalently described by the matrix \begin{equation} \tilde\Theta''= {\cal K}_m \dots {\cal K}_1 \Theta {\cal L}^\top_1 \dots {\cal L}_n^\top , \end{equation} with ${\cal K}_{m\alpha\beta}:= c^{-1}\tr{\tilde\sigma_\alpha\phi_m(\tilde\sigma_\beta)}$ and ${\cal L}_{n\mu\nu}:= d^{-1}\tr{\tilde\tau_\mu\chi_n(\tilde\tau_\nu)}$, generalising Eq.~(\ref{kthetal}). It further follows, for the case of a shared two-qubit state, that if either the local Bloch vectors vanish or the CPTP maps are unital, the corresponding spin correlation matrix $T$ is transformed to \begin{equation} \label{ktlgen} T'' = K_m\dots K_1 T L_1^\top\dots L_n^{\top}, \end{equation} generalising Eq.~(\ref{ktl}).
Now, the validity of the conjectures made in~\cite{Cheng21} would imply that for $M,N>1$ it is not possible for all pairs $(A_m,B_n)$ to generate Bell nonlocality if each observer is restricted to choosing between two equally-likely measurements on a single two-qubit state. However, this does not preclude the possibility that each pair can generate Bell nonlocality via making a greater number of measurements on higher-dimensional quantum systems, as demonstrated by the simple example below.
In particular, suppose that the observers share $M$ two-qubit states, and that for their local component of the $q$-th two-qubit state $A_m$ and $B_n$ independently choose between equally-likely measurements of unbiased qubit observables $X_{mq}, X^\prime_{mq}$ and $Y_{nq}, Y^\prime_{nq}$, respectively. For each pair $(A_m,B_n)$ there is then a corresponding Bell inequality \begin{equation} S_{mn}:= \max_q\{S_q(A_m,B_n)\} \leq 2 , \end{equation} where $S_q(A_m,B_n)=\langle X_{mq}Y_{nq}\rangle + \langle X_{mq}Y'_{nq}\rangle + \langle X'_{mq}Y_{nq}\rangle - \langle X'_{mq}Y'_{nq}\rangle$ denotes the CHSH parameter corresponding to their measurements on the $q$-th two-qubit state. For square-root measurements the local measurement operations are unital, so that the values of each $S_{mn}$ can be calculated via Eqs.~(\ref{chsh}), (\ref{ktl}) and~(\ref{ktlgen}).
To show that every one of the above Bell inequalities can be violated, with $S_{jk}>2$, we extend a construction given by Brown and Colbeck that allows a single Alice to violate the CHSH inequality with each Bob via recycling a single shared qubit state~\cite{Brown20}. The idea is to apply this construction to the $m$-th qubit pair, to ensure that $S_m(A_m,B_n)>2$ for each $n$. First, label the measured observables in the Brown-Colbeck construction by $X,X'$ for the single Alice, $A$, and by $Y_n,Y'_n$ for the $n$-th Bob, $B_n$, so that \begin{equation} \label{brown} S(A,B_n) >2, \qquad n=1,2,\dots,N \end{equation}
by construction. Second, choose the observables measured by $A_m$ and $B_n$ on the $q$th qubit pair to be \begin{equation} X_{mq}:=\left\{ \begin{matrix} X,&m=q\\ \mathbbm{1}, & m\neq q\end{matrix}\right. , ~~~~ X'_{mq}:=\left\{ \begin{matrix} X',&m=q\\ \mathbbm{1}, & m\neq q\end{matrix}\right. , \end{equation} \begin{equation} Y_{nq}=Y_n,\qquad Y'_{nq}=Y'_n . \end{equation} Since square-root measurements of the identity operator do not disturb the system, it immediately follows that \begin{equation} S_m(A_m,B_n)=S(A,B_n) >2, \end{equation} and hence that $S_{mn}>2$ as required.
The above example, and its converse with $M$ Alices and one Bob, show that each pair can independently generate Bell nonlocality by each observer choosing between suitable local measurements on a shared $4^{\min\{M,N\}}$ dimensional quantum system. Note that a related example by Cabello~\cite{Cabello}, based sharing only two qubit pairs, is unsuitable in this context, as the observers do not make independent measurements (all entanglement in the first and second qubit pairs in this example is destroyed by the projective measurements made by $B_1$ and $A_1)$, respectively, implying that no later pair of observers can independently generate Bell nonlocality).
It would be of interest to find examples requiring less measurements and/or dimensions. For example, it is known for $M=N=2$ that each of two Alices can steer each of two Bobs, and vice versa, via recycling of a single qubit pair~\cite{Jie21}.
\subsection{Multiple Bobs}
The problem of one-sided qubit recycling, with one Alice and $N>1$ Bobs, has been well studied in previous work~\cite{Silva15,Mal16,Curchod17,Tavakoli18,Bera18,Sasmal18,Shenoy19,Das19,Saha19,Kumari19,Brown20,Maity20,Bowles20,Roy20}. In particular, Theorem~2 of~\cite{Brown20} shows that it is possible to generate CHSH Bell nonlocality between one Alice and arbitrarily many Bobs via recycling of a two-qubit state and unbiased observables, under the condition that two largest singular values of the initial spin correlation matrix $T$ satisfy \begin{equation} \label{browncon} s_1(T)=1, \qquad s_2(T)>0. \end{equation} Brown and Colbeck raised the interesting question of whether this condition was necessary as well as sufficient~\cite{Brown20}. Here we answer the simpler but related question, of whether this condition is necessary and sufficient for the case of a {\it fixed} number of Bobs. We show that it is only sufficient for this case, by constructing suitable two-qubit states with $s_1(T)<1$.
First, for a fixed number $N$ of Bobs, consider an initial two-qubit state $\rho$ satisfying the Brown-Colbeck condition~(\ref{browncon}) above, so that Eq.~(\ref{brown}) is satisfied for suitable unbiased observables $X,X'$ measured by Alice and $Y_n, Y'_n$ measured by the $n$-th Bob. Further, define the class of states \begin{equation} \rho_p := p\, \rho + \frac{1-p}{4}\mathbbm{1}\otimes\mathbbm{1},\qquad p \in (0, 1), \label{counterexample} \end{equation} corresponding to adding isotropic noise to $\rho$. The associated correlation matrix is then $T_p=p\,T$, with singular values \begin{equation} \label{tp} {s}_1(T_p)=p\,s_1(T)=p, ~~{s}_2(T_p)=p\,s_2(T)>0. \end{equation}
Further, if Alice and each Bob choose the same measurement strategy as for $\rho$, then it follows (recalling that the observables are unbiased) that the corresponding CHSH parameters are given by \begin{equation} S_p(A, B_n)=p\, S(A, B_n). \end{equation} Finally, defining \begin{equation} S_{\min} := \min \{S(A, B_1), S(A, B_2), \dots, S(A, B_N)\}>2, \end{equation} where the upper bound follows from Eq.~(\ref{brown}), and \begin{equation}
p_{\min}:= \frac{2}{S_{\min}}<1,
\end{equation} we have \begin{equation} S_p(A,B_n) \geq pS_{\min} > 2(p/p_{\min}) >2~~{\rm for}~~p\geq p_{\min}. \end{equation} Thus, Alice can violate the CHSH inequality with each of the $N$ Bobs for $p\in [p_{\min}, 1)$ in Eq.~(\ref{tp}), implying that condition~(\ref{browncon}) is not necessary, as claimed. The original question posed by Brown and Colbeck, however, as to whether condition~(\ref{browncon}) is necessary for states suitable for sharing Bell nonlocality for {\it all} values of $N$, remains open.
\section{Conclusions} \label{Sec8. Conclusions}
We have studied the sequential generation of Bell nonlocality between independent observers via recycling the components of entangled systems. First, general two-valued qubit observables are characterised in~Eq.~(\ref{observable}) via the {\it outcome bias}, {\it strength}, and measurement direction, and a measurement model is provided to interpret these parameters of such observables. Based on quantum instruments, we then introduced a general formalism for measurements to describe the sequential scenarios, and review the optimal reversibility properties of square-root measurements. For measurements of a given qubit observable, the maximum reversibility and minimum decoherence are naturally defined in Eqs.~(\ref{reversibility}) and~(\ref{decoherence}) respectively. Moreover, we obtain tradeoff relations~(\ref{chain}) and~(\ref{decoherence2}) between these quantities and the strength and bias of the observable. Further, using these relations for the case of unbiased observables, we analytically obtained the strong one-sided monogamy relations in Theorems~1 and~2, as per Eqs.~(\ref{thm3}) and~(\ref{thm4}). We also provided compelling numerical evidence as displayed in Fig.~\ref{fig:monog} to support the more general conjecture in Eq.~(\ref{unbiasedmonog}) for the sequential generation of Bell nonlocality and to obtain semi-analytic results for the best possible monogamy relation as displayed in Fig.~\ref{fig:semianalytics}. Finally, we applied our tools to scenarios of arbitrary numbers of observers on one and/or two sides. We generalised the construction in~\cite{Brown20} to show that if sufficiently many pairs of entangled qubits and measurements are allowed, then arbitrarily many pairs of observers on each side can sequentially share Bell nonlocality. Moreover, a larger class of two-qubit states than in~\cite{Brown20} was shown to allow a single Alice to share Bell nonlocality with a given number of Bobs, implying that the conditions discussed in~\cite{Brown20} are sufficient but not necessary when the number of Bobs is fixed.
There are many interesting questions left open for future work. For example, is it possible to further pin down the form of the numerically optimal orange curve in Figs.~\ref{fig:monog} and~\ref{fig:semianalytics}? Are there more efficient numerical and analytical tools to prove or disprove the one-sided monogamy conjectures in this work and Ref.~\cite{Cheng21}? Can Bell nonlocality be generated by recycling two qubits if more than two measurements are allowed per observer? (as is the case for Einstein-Podolsky-Rosen steering~\cite{Jie21}).
It will be shown elsewhere that the bound in Eq.~(\ref{iplus}) can be extended to a generalised Horodecki criterion for nonprojective observables~\cite{Cheng21b}. Finally, we note that since our approach is based on an instrumental formalism which can incorporate the most general measurements, our analysis can be applied to similar problems in the sequential sharing of other quantum properties, such as EPR-steering and entanglement, including for cases in which more measurement settings as per observer are allowed. It would also be worth investigating if our results and methods are applicable to sequential sharing of random access codes~\cite{Mohan19,Anwer20,Foletto20b} and preparation-contextuality~\cite{Anwer21}, particularly if observer $A_1$ in such scenarios prepares states for observer $B_1$ via measurements on an entangled state (and/or vice versa).
\acknowledgements We thank Peter Brown, Ad\'an Cabello and Howard Wiseman for helpful discussions and comments. S. C.~is supported by the Fundamental Research Funds for the Central Universities (No.~22120210092) and the National Natural Science Foundation of China (No.~62088101). L. L.~is supported by National Natural Science Foundation of China (No.~61703254). T. J. B.~is supported by the Australian Research Council Centre of Excellence CE170100012, and acknowledges the support of the Griffith University eResearch Service \& Specialised Platforms Team and the use of the High Performance Computing Cluster ``Gowonda'' to complete this research.
\appendix
\section{Derivation of the upper bound~(\ref{iw})} \label{appa}
First, for general qubit observables $X,X',Y,Y'$, with $X={\mathcal B}_X\mathbbm{1}+\mathcal{S}_X \bm\sigma\cdot\bm x$, etc., define the unit vectors \begin{equation} \label{xdef}
\bm x_1 = \frac{\bm x+ \bm x'}{|\bm x+\bm x'|},~~~\bm x_2 = \frac{\bm x- \bm x'}{|\bm x - \bm x'|}, ~~~\bm x_3=\bm x_1\times \bm x_2, \end{equation} \begin{equation} \label{ydef}
\bm y_1 = \frac{\bm y+ \bm y'}{|\bm y + \bm y'|},~~~\bm y_2 = \frac{\bm y- \bm y'}{|\bm y - \bm y'|}, ~~~\bm y_3=\bm y_1\times \bm y_2. \end{equation} It follows that \begin{equation} \label{x1x2} \bm x =\cos\frac{\theta}{2} \bm x_1+\sin \frac{\theta}{2} \bm x_2, ~~~\bm x' =\cos\frac{\theta}{2} \bm x_1-\sin \frac{\theta}{2} \bm x_2, \end{equation} \begin{equation} \label{y1y2} \bm y =\cos\frac{\phi}{2} \bm y_1+\sin \frac{\phi}{2} \bm y_2,~~~ \bm y' =\cos\frac{\phi}{2} \bm y_1-\sin \frac{\phi}{2} \bm y_2 , \end{equation} where $\cos \theta=\bm x\cdot \bm x'$ and $\cos \phi=\bm y\cdot\bm y'$, i.e., $0\leq\theta\leq\pi$ is the angle between $\bm x$ and $\bm x'$ and $0\leq\phi\leq\pi$ is the angle between $\bm y$ and $\bm y'$.
Second, for unbiased observables measured on a singlet state we have ${\mathcal B}=0$ and $T=T_0=-I_3$, and the CHSH parameter reduces via Eqs.~(\ref{chsh}) and~(\ref{prodxy2}) to \begin{align}
S(A_1,B_1|T_0)&\leq -\mathcal{S}_X\mathcal{S}_{Y} \bm x \cdot\bm y - \mathcal{S}_X\mathcal{S}_{Y'} \bm x\cdot\bm y' \nonumber\\ & \qquad- \mathcal{S}_{X'}\mathcal{S}_{Y} \bm x' \cdot\bm y + \mathcal{S}_{X'}\mathcal{S}_{Y'} \bm x'\cdot\bm y'\nonumber\\
&= -\sum_{j,k} W_{jk} \bm x_j\cdot \bm y_k\nonumber\\
&= -\tr{WR^\top},
\label{wm} \end{align} \color{black} where $W$ is the $3\times3$-matrix \begin{equation} W:=\begin{pmatrix}
A\cos\frac{\theta}{2}\cos\frac{\phi}{2} & B\cos\frac{\theta}{2}\sin\frac{\phi}{2} & 0\\ C\sin\frac{\theta}{2}\cos\frac{\phi}{2} & -D\sin\frac{\theta}{2}\sin\frac{\phi}{2} &0 \\ 0& 0 & 0 \end{pmatrix} \end{equation} with \begin{align}
A &= \mathcal{S}_{X}\mathcal{S}_{Y}+\mathcal{S}_{X}\mathcal{S}_{Y'}+\mathcal{S}_{X'}\mathcal{S}_{Y}-\mathcal{S}_{X'}\mathcal{S}_{Y'}\nonumber\\
B &= \mathcal{S}_{X}\mathcal{S}_{Y}-\mathcal{S}_{X}\mathcal{S}_{Y'}+\mathcal{S}_{X'}\mathcal{S}_{Y}+\mathcal{S}_{X'}\mathcal{S}_{Y'} \nonumber\\
C &= \mathcal{S}_{X}\mathcal{S}_{Y}+\mathcal{S}_{X}\mathcal{S}_{Y'}-\mathcal{S}_{X'}\mathcal{S}_{Y}+\mathcal{S}_{X'}\mathcal{S}_{Y'} \nonumber\\
D &= -\mathcal{S}_{X}\mathcal{S}_{Y}+\mathcal{S}_{X}\mathcal{S}_{Y'}+\mathcal{S}_{X'}\mathcal{S}_{Y}+\mathcal{S}_{X'}\mathcal{S}_{Y'}, \label{abcd} \end{align} and $R$ is the $3\times3$ matrix with coefficients \begin{equation} R_{jk} := \bm x_j \cdot \bm y_k . \end{equation} Note that $W$ contains information about the local measurement strengths and relative measurement directions for each side, while $R$ contains information about the relative measurement directions between the two sides.
Third, note that \begin{align} (RR^\top)_{jk}&=\sum_m (\bm x_j\cdot\bm y_m) (\bm x_k\cdot\bm y_m)\nonumber\\ &= \bm x_j^\top \left(\sum_m \bm y_m\bm y_m^\top\right) \bm x_k =\delta_{jk}, \end{align} using the orthonormal basis properties of $\{\bm x_j\}$ and $\{\bm y_k\}$, and so $R$ is an orthogonal matrix, i.e., a rotation or reflection (indeed, since the basis sets are right-handed by construction, $R$ is a rotation). Hence, \begin{align}
|S(A_1,B_1)|T_0)| \leq \max_R |\tr{WR^\top}| \end{align} where the maximum is over all orthogonal matrices $R$.
Fourth, suppose that $W=R'DR''$ is a singular value decomposition of $W$, for orthogonal matrices $R',R''$ and diagonal matrix $D={\rm diag}[s_1(W),s_2(W),s_3(W)]$ with singular values $s_1(W)\geq s_2(W)\geq s_3(W)\geq0$. Substitution then gives \begin{align}
|S(A1,B1)| &\leq \max_R |\tr{DR''R^TR'}| \nonumber\\
&= \max_{\tilde R} |\tr{D\tilde R}| \nonumber\\
&= \max_{\tilde R} |\sum_j s_j(W) \tilde{\bm x}_j\cdot\tilde{\bm y}_j| \nonumber\\
&\leq \max_{\tilde R} \sum_j s_j(W) |\tilde{\bm x}_j\cdot\tilde{\bm y}_j| \nonumber\\
&\leq S_0:=\sum_j s_j(W) \end{align} where $\tilde R:=R''R^TR'$ is an orthogonal matrix, implying there are local coordinate systems $\{\tilde{\bm x}_j\}$ and $\{\tilde{\bm y}_j\}$ such that $\tilde{R}=\tilde{\bm x}_j\cdot\tilde{\bm y}_k$, and we have used $\tilde{\bm x}_j\cdot\tilde{\bm y}_j\leq 1$ with equality for $\tilde{\bm x}_j\equiv \tilde{\bm y}_j$. Note that the upper bound is achievable by construction.
Finally, to show that $S_0$ above has the explicit formula given in Eq.~(\ref{iw}) of the main text, let $\tilde W$ denote the upper $2\times2$ submatrix of $W$, and $w_\pm$ denote the eigenvalues of $\tilde W^\top \tilde W$ (i.e., the nonzero eigenvalues of $W^\top W$). It follows that $S_0=\sqrt{w_+}+\sqrt{w-}$. The identities \begin{equation} w_+ + w_- = \tr{\tilde W^\top \tilde W},~ w_+w_-=\det(\tilde W^\top \tilde W)=\det(\tilde W)^2, \nonumber \end{equation} then imply that \begin{align} S_0^2&=
(\sqrt{w_+}+\sqrt{w_-})^2 \nonumber\\
&= w_++w_- + 2\sqrt{w_+w_-} \nonumber\\
&= \tr{\tilde W^\top \tilde W} + 2 |\det(\tilde W)| .
\label{cor2proof} \end{align} Explicit calculation of the trace and determinant yields Eq.~(\ref{iw}), as desired.
\section{Derivation of one-sided monogamy relations} \label{appb}
\subsection{Proof of Theorem 1} \label{appb1}
First, as already noted in Eq.~(\ref{thm3first}) of the main text, it follows from Eq.~(\ref{iw}) and the orthogonality assumption $\bm x\cdot\bm x'=0=\bm y\cdot\bm y'$ that \begin{align}
S_0^2 &= (\mathcal{S}_{X}^2+\mathcal{S}_{X'}^2)(\mathcal{S}_{Y}^2+\mathcal{S}_{Y'}^2) +4\mathcal{S}_{X}\mathcal{S}_{X'}\mathcal{S}_{Y}\mathcal{S}_{Y'} \nonumber\\
& \leq 2(\mathcal{S}_{X}^2+\mathcal{S}_{X'}^2)(\mathcal{S}_{Y}^2+\mathcal{S}_{Y'}^2) \nonumber\\
&= 2(2-\mathcal R_X^2 - \mathcal R_{X'}^2) (2-\mathcal R_Y^2 - \mathcal R_{Y'}^2) . \label{Sa1b1} \end{align}
Further, again using the orthogonality assumption, one has $I_3=\bm x\bm x^\top+\bm x'\bm x'^\top + \bm x''\bm x''^\top$, with $x'':=\bm x\times\bm x'$, and Eq.~(\ref{kdef}) for the matrix $K$ simplifies to \begin{align}
K&=\frac{1+\mathcal R_{X'}}{2} \bm x\bm x^\top + \frac{1+\mathcal R_{X}}{2} \bm x'\bm x'^\top + \frac{\mathcal R_X+\mathcal R_{X'}}{2}\bm x''\bm x''^\top . \nonumber \end{align} Hence, assuming $\mathcal R_{X'}\leq \mathcal R_X$ without any loss of generality, the first two singular values of $K$ can be directly read off as \begin{equation} s_1(K)=\half(1+\mathcal R_X),~~s_2(K)=\half(1+\mathcal R_{X'}). \end{equation} Similarly, assuming $\mathcal R_{Y'}\leq \mathcal R_Y$ without any loss of generality, we find via Eq.~(\ref{ldef}) that \begin{equation} s_1(L)=\half(1+\mathcal R_Y),~~s_2(L)=\half(1+\mathcal R_{Y'}) . \end{equation} Using Eq.~(\ref{bhatia}) then gives \begin{align}
S^*(A_2,B_2|T_0)^2&\leq \frac14 (1+\mathcal R_X)^2(1+\mathcal R_{Y})^2 \nonumber\\
&~~+ \frac14 (1+\mathcal R_{X'})^2(1+\mathcal R_{Y'})^2 \nonumber\\
&\leq \frac14\sqrt{(1+\mathcal R_X)^4 +(1+\mathcal R_{X'})^4} \nonumber\\
&~~\times\sqrt{(1+\mathcal R_Y)^4 +(1+\mathcal R_{Y'})^4} , \label{Sa2b2} \end{align}
using $\bm a\cdot \bm b\leq |\bm a||\bm b|$ for $\bm a=((1+\mathcal R_X)^2,(1+\mathcal R_{X'}^2))$, etc.
Now, defining the functions \begin{align}
g_1(x,y) &:= 2^{1/4}\rt{2-x^2 - y^2}, \nonumber \\
g_2(x,y) &:= 2^{-1/2}[(1+x)^4 +(1+y)^4]^{1/4}, \nonumber \end{align} for $x,y\in[0,1]$, it immediately follows from Eqs.~(\ref{iplus}), (\ref{Sa1b1}) and~(\ref{Sa2b2}) that \begin{align}
&|S(A_1, B_1)|T_0| + S^*(A_2, B_2|T_0)\nonumber \\
&\leq g_1(\mathcal R_X, \mathcal R_{X'}) g_1(\mathcal R_Y, \mathcal R_{Y'})+g_2(\mathcal R_X, \mathcal R_{X'})g_2(\mathcal R_Y, \mathcal R_{Y'}) \nonumber \\
&\leq \rt{g_1(\mathcal R_X, \mathcal R_{X'})^2+g_2(\mathcal R_X, \mathcal R_{X'})^2} \nonumber\\
&~~~\times \rt{g_1(\mathcal R_Y, \mathcal R_{Y'})^2+g_2(\mathcal R_Y, \mathcal R_{Y'})^2} \nonumber \\
& \leq \max_{x, y\in [0,1]} \left\{ g_1(x, y)^2+g_2(x, y)^2 \right\} \nonumber \\
&= \max_{x, y\in [0,1]} G(x, y), \label{Gxy} \end{align}
where the third line follows from $\bm a\cdot \bm b\leq |\bm a||\bm b|$ for $\bm a=(g_1(\mathcal R_X,\mathcal R_{X'}),g_2(\mathcal R_X,\mathcal R_{X'}))$, etc., and \begin{equation} G(x,y):=\rt{2}(2-x^2-y^2)+\half\rt{(1+x)^4+(1+y)^4} \end{equation} as per the second line of Eq.~(\ref{thm3third}) of the main text.
Solving $\partial G/\partial x=0=\partial G/\partial y$ yields the conditions \begin{equation} \label{gxgy} g(x)=g(y) = \frac{1}{2\rt{2}\rt{(1+x)^4+(1+y)^4}}, \end{equation} for a local extremum of $G(x,y)$, where \begin{equation} g(x):= \frac{x}{(1+x)^3}. \end{equation} Now, it is easy to check that $g(x)=g(y)$ has one solution, $y=x$, for $x\leq x_0:=\sqrt{5}-2$; two solutions, $y=x$ and $y=x^*$, for $x_0<x\neq\half$; and one solution, $y=x=\half$, for $x=\half$ (corresponding to the maximum of $g(x)$). For the solution $y=x^*$, conditions~(\ref{gxgy}) simplify to \begin{equation} \frac{x}{(1+x)^3} = \frac{1}{2\rt{2}\rt{(1+x)^4+(1+x^*)^4}} , \end{equation} yielding \begin{equation} x^*=\frac{(x+1)\sqrt{x(x+4)}-x(x+3)}{2x} , \end{equation} However, substituting this into the right hand side of Eq.~(\ref{gxgy}), and plotting both sides over the range $[x_0,1]$ leads to $y=x^*=\half$, and hence via $g(x)=g(y)$ that $x=y=\half$. Hence only the universal solution $y=x$ can generate local extrema. But for this solution, the above conditions simplify to $1+x=4x$, yielding a local maximum value of $G$ at $x=y=\frac13$, with \begin{equation} \label{Gthird} G(1/3,1/3) = \frac{8\sqrt{2}}{3} . \end{equation} To show that this is the global maximum of $G(x,y)$, one needs to check its values on the boundaries of the domain. One finds that $f(x,1)=f(1,x)\leq 3.49< G(\frac13,\frac13)$ and $f(x,0)=f(0,x)\leq 3.71<G(\frac13,\frac13)$ for $x\in [0,1]$. Hence, $G(\frac13,\frac13)$ is indeed the global maximum.
Hence, combining Eqs.~(\ref{singletmax}), (\ref{Gxy}) and~(\ref{Gthird}), we have the additive monogamy relation \begin{align}
|S(A_1,B_1)| + S^*(A_2,B_2) \leq \frac{8\rt{2}}{3} \end{align} for unbiased observables with orthogonal measurement directions on each side, as claimed in Theorem~1.
\subsection{Proof of Theorem 2} \label{appb2}
First, as per Eq.~(\ref{thm4first}) of the main text, it follows from Eq.~(\ref{iw}) and the equal strengths assumption that \begin{align}
S_0^2 &= 4\mathcal{S}_X^2\mathcal{S}_Y^2 (1+ \sin\theta\,\sin\phi)\nonumber\\
&\leq 4\mathcal{S}_{X}^2\mathcal{S}_{Y}^2\rt{(1+\sin^2\theta)(1+\sin^2\phi)} \nonumber \\
&=4(1-\mathcal R_X^2)(1-\mathcal R_Y)^2\rt{(2-c_X)(2-c_Y)} . \end{align} Hence, taking square roots and recalling that the geometric mean is never greater than the arithmetic mean, \begin{align}
S_0 &\leq 2\sqrt{\sqrt{2-c_X}(1-\mathcal R_X^2)\rt{2-c_Y}(1-\mathcal R_Y)^2} \nonumber\\
&\leq \sqrt{2-c_X}\, [1-\mathcal R_X^2]+\rt{2-c_Y}[1-\mathcal R_Y^2] .
\label{iwbound} \end{align}
Second, substituting~Eq.~(\ref{x1x2}) of Appendix~\ref{appa} into Eq.~(\ref{kdef}), we have \begin{equation} K=\mathcal R_XI_3+(1-\mathcal R_X)(\cos^2\frac{\theta}{2}\bm x_1\bm x_1^\top + \sin^2\frac{\phi}{2}\bm x_2\bm x_2^\top) \end{equation} for equal reversibilities, in terms of the orthogonal unit vectors $\bm x_1$ and $\bm x_2$. Since $K$ is a symmetric matrix, this immediately allows us to read off the two largest singular values of $K$ as the corresponding two largest eigenvalues of $K$, \begin{equation}
\lambda_\pm(K) =\half (1+\mathcal R_X)\pm \half(1-\mathcal R_X)|\cos\theta|. \end{equation} Similarly, the two largest singular values of $L$ follow from Eqs.~(\ref{ldef}) and~(\ref{y1y2}) as \begin{equation}
\lambda_\pm(L) =\half (1+\mathcal R_Y)\pm \half(1-\mathcal R_Y)|\cos\phi|. \end{equation} Hence, using $(a+b)(c+d)+(a-b)(c-d)=2(ac+bd)$, the bound in Eq.~(\ref{bhatia}) simplifies to \begin{align}
2 S(A_2,B_2|T_0)^2 &\leq 2\lambda_+(K)^2\lambda_+(L)^2 + 2\lambda_-(K)^2\lambda_-(L)^2 \nonumber\\
&= \left[(1+\mathcal R_X)^2+(1-\mathcal R_X)^2 \cos^2 \theta \right] \nonumber\\
&\qquad \times \left[(1+\mathcal R_Y)^2+(1-\mathcal R_Y)^2 \cos^2 \phi \right] \nonumber\\
& ~~+ 4[(1-\mathcal R_X^2) |\!\cos\theta|]\,[(1-\mathcal R_Y^2)|\!\cos \phi|] \nonumber\\
&= PQ +4MN = (P,2M)\cdot (Q,2N) \nonumber\\
&\leq \sqrt{P^2+4M^2}\sqrt{Q^2+4N^2} \nonumber\\
&=f(\mathcal R_X,c_X)^2 f(\mathcal R_Y,c_Y)^2 \nonumber\\
&\leq \frac14 [f(\mathcal R_X,c_X)^2+f(\mathcal R_Y,c_Y)^2]^2
\label{relaxedS2} \end{align} where \begin{equation} f(x,c)^4:= [(1+x)^2+(1-x)^2c]^2 + 4 (1-x^2)^2c . \end{equation}
Finally, combining Eqs.~(\ref{iwbound}) and~(\ref{relaxedS2}) gives
\begin{align}
S_0+S^*(A_2,B_2)&\leq g(\mathcal R_X,c_X) + g(\mathcal R_Y,c_Y)\nonumber\\
&\leq 2\,\max_{x, c \in [0,1]}g(x, c), \end{align} where the right hand side corresponds to the first upper bound in Eq.~(\ref{thm4third}), with \begin{equation} g(x,c):= \sqrt{2-c}\,(1-x^2) + f(x,c)^2/\sqrt{8}. \end{equation} Numerically maximising $g(x,c)$ over $x$ and $c$ gives $g_{\max}=2$, corresponding to $x=0$ and $c=1$. This implies that parallel directions and zero reversibilities are optimal for this case, and yields, via Eqs.~(\ref{singletmax}) and~(\ref{iplus}), the one-sided monogamy relation \begin{equation}
|S(A_1,B_1)|+|S(A_2,B_2)| \leq 4 \end{equation} for unbiased observables with equal strengths on each side, as claimed in Theorem~2.
\end{document} | arXiv |
Boundary Of A Set Of Points Matlab
A boundary point of a set may or may not belong to the set. You give it some inputs, and it spits out one of two possible outputs, or classes. A Finite Element Solution of the Beam Equation via MATLAB S Rao. Set the boundary conditions. So if you pick any complex number inside the Main Cardioid as the value of c, and iterate the formula z 2 + c, you will tend toward a single complex value, which we can refer to as z[∞] = Z. A summer block can be found in the "commonly used blocks" library, and in the "math" library. In a future post, I'll demonstrate how to calculate the security weights for various points on this efficient frontier using the two-fund separation theorem. \$\begingroup\$ The ball tree method in scikit-learn does this efficiently if the same set of points has to be searched through repeatedly. To find the intercept set :. Mayank Agarwal is racing and he is coming for all of you India vs Bangladesh: Mayank Agarwal hit his second double hundred in last 5 innings as India reached 493 for 6 at stumps on Day 2 in Indore. Starting at a specific seed_point , connected points equal or within tolerance of the seed value are found, then set to new_value. Working with Point Clouds. If the shrink factor is 0, then the boundary traced is the traditional convex hull. For example, you might measure the rate of °ow of water at certain times and use these to determine the total amount of water that °owed. This code calculates the y -coordinates of points on a line given their x -coordinates. This is an initial value problem (IVP). Specify a single output to return a structure containing information about the solution, such as the solver and evaluation points. where the first button corresponds to Point mode, the second to Contour mode, and the third to Area mode. Earth's nine life-support systems: Nitrogen and phosphorus cycles We fix around 121 million tonnes of nitrogen a year, far more than nature does – and nature cannot cope 6. 4-if I set boundaries, I'll hurt others Though your 'no' may cause some discomfort for another, there are times when a 'no' is necessary to take care of yourself. When performed in Matlab, the singular values ˙i will be sorted in descending order, so ˙9 will be the smallest. first point is the same as the last point. This is useful when you don't want to immediately compute an answer, or when you have a math "formula" to work on but don't know how to "process" it. Call bwtraceboundary to trace the boundary from the specified point. If the homography is overdetermined, then ˙9 0. • When using a mixed boundary condition a function of the form au(x)+b ∂nu(x) = constant is applied. Drawing the graph. It is, however, limited to retrieving data from, and information about, existing netCDF files. As in the previous section, we recommend using the Ground Truth Labeler app in MATLAB Automated Driving System Toolbox. Repeat this process for different points on the watershed boundary to verify that the boundaries are correct. x was the last monolithic release of IPython, containing the notebook server, qtconsole, etc. Let $$A$$ be a subset of a topological space $$X$$, a point $$x \in X$$ is said to be boundary point or frontier point of $$A$$ if each open set containing at $$x. It would be nice if we could simply tell it to plot every k'th edge in a surface. Trinidad and Tobago D. You give it some inputs, and it spits out one of two possible outputs, or classes. Set Q of all rationals: No interior points. The extent to which this condition modi es the general character of the. We will modify the MATLAB code to set the load to zero for Laplace's equation and set the boundary node values to \(\sin(3\theta)\). The fields have the following sizes: field quantity interior resolution resolution with boundary points pressure. This boundary conditions interpolates the values from a set of supplied points in space and time uniformJumpAMI This boundary condition provides a jump condition, using the cyclicAMI condition as a base. By systematically removing the boundary edges beginning with the longest edge until you meet the given criteria you arrive at a concave hull. Zoom into the Root Locus by right-clicking on an axis and selecting Properties followed by the label Limits. est is determined by specifying the values of all its components at a single point x = a. Interior and Boundary. Creating a Subset Shapefile from an Existing Shapefile This tip sheet covers how to create a smaller subset data set from a larger data set in ArcGIS. Then data will be a 6x3 matrix of points (each row is a point). %% Projected all point Q onto a plane Oxy simply by removing the 3rd coordinate. For example, mphinterp is a LiveLink™ for MATLAB® function that you can combine with the std, or standard deviation, function in MATLAB to obtain the standard deviation of temperature on the bottom surface of the glass layer, evaluated at a set of arbitrary coordinate points. One important part of a model M-file is the selections that are made in order to set up properties for the domain, boundaries, etc. Boundary is the polygon which is formed by the input coordinates for vertices, in such a way that it maximizes the area. MATLAB: Consider a set of data points (1, y1),(2, y2), (100, y100) where the y's are random numbers with a normal distribution with mean 0 and variance 1 (note that this implies that the probability of the y-values being less than zero is the same as the probability of being greater than zero). A GVF snake can even be initialized across the boundaries, a situation that often confounds traditional snakes and balloons. Identifying edges and boundary points – 2D Mesh – Matlab April 21, 2015 beni22sof Leave a comment Go to comments A triangulation algorithm often gives as output a list of points, and a list of triangle. We begin with the data structure to represent the triangulation and boundary conditions, introduce the sparse matrix, and then discuss the assembling process. The images are divided into a training set of 200 images, and a test set of 100 images. I am trying to plot the decision boundary of a perceptron algorithm and I am really confused about a few things. A set of vectors that form a basis for the null space of A can be found from the command null: >> A A = 1 1 1 1 >> null ( A ) ans = - 0. These boundaries set theoretical limits on changes to the environment, and include ozone depletion, freshwater use, ocean acidification, atmospheric aerosol pollution and the introduction of. To set a variable to a single number, simply type something like z =1. 1 Point values Just like the pre-processor, the post-processor window has a set of different editing modes: Point, Contour, and Area. Call bwtraceboundary to trace the boundary from the specified point. • When using a Neumann boundary condition, one prescribes the gradient normal to the boundary of a variable at the boundary, e. Use Livelink for Matlab for post-processing. STBoundary (). This is a very useful tool in all types of scientific and math based research allowing the user. 1 meters, but zero for r>0. k contains the indices of the input points that lie on the boundary. Mean or Covariance. Perceptron's Decision Boundary Plotted on a 2D plane. The most flexible approach is to extract the data using mphinterp(. beam according to the boundary conditions and applied loads. Boundary is the polygon which is formed by the input coordinates for vertices, in such a way that it maximizes the area. One important part of a model M-file is the selections that are made in order to set up properties for the domain, boundaries, etc. Matlab 2018a (MATLAB 9. We can use the command >> x= 0:0. You prepare data set, and just run the code! Then, SVM and prediction results for new samples can be…. Now the ODE tells us the derivative of $\vec{z}$ at any point if we know it's value, and a derivative lets us calculate the value at a neighboring point relative to the value at the current point. These are computed directly with basic Matlab operations and also using the Matlab's function freqz and grpdelay for comparison. To use the matlab code, you will also need to download the pre-trained model parameters (the current implementation only handles grayscale, but parameters for rgb are also provided). We will modify the MATLAB code to set the load to zero for Laplace's equation and set the boundary node values to \(\sin(3\theta)\). 7 obvious name: "two-point BVP" Example 2 above is called a "two-point BVP" a two-point BVP includes an ODE and the value(s) of the solution at two different locations. M_Map is a set of mapping tools written for Matlab (it also works under Octave). In this case the basin boundary is again a fractal set (its box-counting dimension is about 1. I must be of class SINGLE and grayscale. The geometry of the points near the edge has an impact on the K iteration. Eliminate toxic persons from your life - those who want to manipulate you, abuse you, and control you. If you where looking to do it by logic there is the raytrace method. So let me define a set of all the vectors that are a member of Rn where they satisfy the equation a times my vector x is equal to the 0 vector. Provided by Bjorn Sandvik, thematicmapping. j is the set of symmetric positive semidefinite matrices of the same dimension. Boundary events are used by several signal processing functions that process continuous data. Unlike the convex hull, the boundary can shrink towards the interior of the hull to envelop the points. A path can have crossover with another path and mutate. Here's how to course-correct when things have gotten toxic with a family member. Points on and inside a boundary. The Leticia Pact offers an opportunity to ensure that the responsibility of protecting the Amazon forest and its sustainable resource use is shared among Amazon countries. Add this line: 'clrmap','hsv', Note the difference between the optional argument 'cptcmap_pm' used for loading cpt files, and 'clrmap' used here for loading Matlab colormap files. 18 Boundary Layer Flow of a Newtonian Fluid on a Flat Plate 328. You would create a set of invisible objects with the correct colors, and then insert them into the legend with their DisplayName properties set to the values. Use this property to set the color of points in point cloud. nearest stream that flows to the discharge point you are studying. Typically if we have two points namely y1, x1 and y2, x2 and we would like to know the value of y=f(x) at a value of x lying between x1 and x2 it is not difficult to show that from the. This is where Are's entry comes into play. MATLAB: Consider a set of data points (1, y1),(2, y2), (100, y100) where the y's are random numbers with a normal distribution with mean 0 and variance 1 (note that this implies that the probability of the y-values being less than zero is the same as the probability of being greater than zero). At this point we've left the built-in functions of Shapely and we'll have to write some more code. MATLAB Commands – 4 Special Variables and Constants ans Most recent answer. A Windows version of MATLAB is available to students to put on their personal computers - see your professor or Chris Langley to find out how to get this program. In today's episode: Bribery is the new word of the moment among House. Then pick a distance from that point. But you can think of this as a continuous slider f (n) = u * g (n) + (1-u) * h (n) instead of only these three algorithms. # The A* Algorithm. Siegel ( a unique parameter ray landing with irrational external angle). There are three cases for the value of ˙9: If the homographyis exactly determined, then ˙9 = 0, and there exists a homographythat ts the points exactly. Line Charts in MATLAB ®. The Point Cloud Library (or PCL) is a large scale, open project for 2D/3D image and point cloud processing. For example, you might measure the rate of °ow of water at certain times and use these to determine the total amount of water that °owed. The concern in this Annex to Sustainability through the Dynamics of Strategic Dilemmas is to indicate features associated with the Mandelbrot set (hereafter the M-set) in order to point to their significance in configuring complex experience -- rather than in describing natural phenomena, as is normally the case. 1 % This Matlab script solves the one-dimensional convection. Boundary definition, something that indicates bounds or limits; a limiting or bounding line. However, to see the points you must specify a marker symbol, for example, plot(X,Y,'o'). 1-5) The data must exist as vectors in the MATLAB workspace. For the range, enter a minimum or a maximum value or both. Kurt Volker, a former State Department envoy to Ukraine, leaves a closed House meeting on Oct. Other properties can be set inside the plot command. values = set(H,Name) returns the possible values for the specified property. The floatingObject tutorial calculates the motion of a free floating box in water subjected to a dambreak using the interDyMFoam solver with dynamic meshes. Cross-boundary collaboration has economic, socio-political, and environmental advantages, substantially reducing conservation costs ([ 10 ][10]). benjamin ma (view profile) Discover what MATLAB. I am sure this is an easy fix, I am just unsure on how to do it. It can also accommodate other types of BVP problems, such as those that have any of the following:. choose the point below that lies along the boundary of exactly two Voronoi regions. What if you want this polynomial to go through certain points. Interior and Boundary Points of a Set in a Metric Space. Change the real-axis limits to -25 to 5 and the imaginary axis limits to -2. There is no need to have the final points in these match the initial points; that is, when arrays as described are used in situations where they are interpreted as polygon vertices, the polygon is automatically closed. k contains the indices of the input points that lie on the boundary. We apply the method to the same problem solved with separation of variables. These are the critical points of the equation and one can linearize the functions P and Q near these points to get a good idea of the type of phase trajectory expected in the neigborhood by looking at the solution of the linear fractional equation dy/dx=(a*x+b*y)/(c*x+d*y) whenever the critical point lies at x=y=0. The points are sorted into a tree structure in a preprocessing step to make finding the closest point quicker. The points (x(k),y(k)) form the boundary. Develop a support system of people who respect your right to set boundaries. The Fortran 77 code TWPBVP was originally developed by Jeff Cash and Margaret Wright and is a global method to compute the numerical solution of two point boundary value problems (either linear or non-linear) with separated boundary conditions. Then, this initial bounding box is partitioned into a grid of smaller cubes, and grid points near the boundary of the convex hull of the input are used as a coreset, a small set of points whose optimum bounding box approximates the optimum bounding box of the original input. FRAMES is a 2 x NUMKEYPOINTS, each colum storing the center (X,Y) of a keypoint frame (all frames have the same scale and orientation). As in the previous section, we recommend using the Ground Truth Labeler app in MATLAB Automated Driving System Toolbox. It integrates a system of first-order ordinary differential equations. When the box is checked, all of the areas used in the analysis will be included in the result, regardless. Inf Infinity. Chapter 4733-37 Standards for Boundary Surveys. 2: a) Improved Non-rigid registration,. Therefore, you can specify the same color for all points or a different color for each point. Typically if we have two points namely y1, x1 and y2, x2 and we would like to know the value of y=f(x) at a value of x lying between x1 and x2 it is not difficult to show that from the. Other properties can be set inside the plot command. The form y = ax. Solving Laplace's Equation With MATLAB Using the Method of Relaxation By Matt Guthrie Submitted on December 8th, 2010 Abstract Programs were written which solve Laplace's equation for potential in a 100 by 100. Distribute a set of nodes interior to the domain. Here we just want one plot, so we give it the range, the domain, and the format. my method thus far: i obtained a first guess at the centre of the inscribed circle and the three segmen. benjamin ma (view profile) Discover what MATLAB. New travel datasets cannot be. My input instances are in the form $[(x_{1},x_{2}), y]$, basically a 2D input instan. Only when we believe, deep down, that we are enough can we say "Enough!" The Dare • Make a mantra. Breeze had a fumble recovery and a pick-six against the Trojans. \Introduction to MATLAB for Engineering Students" is a document for an introductory course in MATLAB°R 1 and technical computing. A turning point. 1 Point values Just like the pre-processor, the post-processor window has a set of different editing modes: Point, Contour, and Area. Very new to MatLab, plotting a tetrahedron using a set of four points? Hey guys, So this is the first time I've ever used MatLab and I've been told to draw a shape using 4 points that I have been given:. It is used for freshmen classes at North-western University. MATLAB: Workshop 14 - Plotting Data in MATLAB page 4 function linearplot (1) Function to plot a set of (x,y) data with specified symbol and annotate plot with x-axis label, y-axis label, and title. This is a MATLAB software suite, created by JAC Weideman and SC Reddy, consisting of seventeen functions for solving differential equations by the spectral collocation (a. MATLAB FUNCTIONS AND APPLICATION SCRIPTS FOR EDUCATIONAL USE William J. In today's blog, I define boundary points and show their relationship to open and closed sets. If U is open and if a boundary point y lies in U, then every open set around y contains infinitely many points not in U. Even if we numbered the grid points irregularly, we would still have this small number of non-zero points. This article was written in 2002 and remains one of our most popular posts. The interior, boundary, and exterior of a subset together partition the whole space into three blocks (or fewer when one or more of these is empty). 1-7) Explore various parametric and nonparametric fits, and compare fit results graphically and numerically. 5 exactly and the decision boundary that is this straight line, that's the line that separates the region where the hypothesis predicts Y equals 1 from the region where the hypothesis predicts that y is equal to zero. Polygons in two dimensions are generally represented in MATLAB with two arrays, locations for the X vertices and Y vertices. a ε-neighborhood that lies wholly in , the complement of S. To find the x-intercept, we have to find the value of x where y = 0 -- because at every point on. Let $$A$$ be a subset of a topological space $$X$$, a point $$x \in X$$ is said to be boundary point or frontier point of $$A$$ if each open set containing at $$x. Now I have to find the trace a boundary around this shape (around whole points, that are grouped together). 0, the language-agnostic parts of the project: the notebook format, message protocol, qtconsole, notebook web application, etc. Boundary events are used by several signal processing functions that process continuous data. The boundary point is so called if for every r>0 the open disk has non-empty intersection with both A and its complement (C-A). The Treaty of Paris was a starting point for future agreements, and a few disagreements. Excerpt from GEOL557 Numerical Modeling of Earth Systems by Becker and Kaus (2016) 1 Finite difference example: 1D implicit heat equation 1. Boundary point indices, returned as a vector or matrix. 1 Suppose, for example, that we want to solve the first order differential equation y′(x) = xy. (A obvious shifts in. Completing Section 10, Geographical Data The Verbal Boundary Description and Boundary Justification. For 2-D problems, k is a column vector of point indices representing the sequence of points around the boundary, which is a polygon. In the real world, boundaries are rarely so uniform and straight, so we were naturally led to experiment with the convex hull of the points. Draw the Voronoi diagram for this set of points. The polynomial 2x4 + 3x3 − 10x2 − 11x + 22 is represented in Matlab by the array [2, 3, -10, -11, 22] (coefficients. For example, a normalized FontSize of 0. The circles mark the values which were actually computed (the points are chosen by Matlab to optimize accuracy and efficiency). Polygons in two dimensions are generally represented in MATLAB with two arrays, locations for the X vertices and Y vertices. DOCUMENTING BOUNDARIES. If working on a server, you may need to do it like this:. Using alpha shapes, a set of points can be assigned a polygon by using a set of circles of a specific radius: imagine an arbitrary shape drawn around the points and proceed to remove as much of this shape as possible using circles of a specific radius. Finite Difference Method using MATLAB. The protests that started in June over a now-shelved extradition bill have snowballed into an anti-China campaign amid anger over what many view as Beijing's interference in Hong Kong's autonomy. Using MATLAB, there are several ways to identify elements from an array for which you wish to perform some action. Also you'll have to adjust the range of the grid created to that of the data. Implementation of MultiMATLAB. You can colorize and/or resize the points according to a generic frequency field named "N", or you can use a more typical field, such as altitude, population, or category. Use this dataset with care, as several of the borders are disputed. What you share is. The boundary point is so called if for every r>0 the open disk has non-empty intersection with both A and its complement (C-A). May need to trace inner boundary (outermost pixels of foreground) or outer boundary (innermost pixels of background): bwtraceboundary command in MATLAB Or if foreground, background labeled 1, -1, may use a zero level set searching method to get subpixel coordinates of boundary: contour command in MATLAB. The contents of the points p, boundaries b and triangles t arguments are explained in the section ffreadmesh(). For second order differential equations, which will be looking at pretty much exclusively here, any of the following can, and will, be used for boundary conditions. Let i) Say is not an isolated point Show is a limit point. It's easiest to set boundaries when you first start a job; that's when the basics are up in the air in terms of start and end times for the work day, overtime circumstances, working from home, etc. It is used for freshmen classes at North-western University. If you're keen to learn more about email and webmail, you may find this recent article on Email as a Service of great. Then the boundary condition, V 100V is applied to all grid points with y 0. For more complicated geometries the distance function can be computed by. For other properties, set returns a statement indicating that Name does not have a fixed set of property values. (2) Produces a pop-up window with plot but returns no values. Lagrangian Pts. Thanks to Sourceforge and Github for hosting the project. There are three cases for the value of ˙9: If the homographyis exactly determined, then ˙9 = 0, and there exists a homographythat ts the points exactly. Completing Section 10, Geographical Data The Verbal Boundary Description and Boundary Justification. The boundary fill algorithm can be implemented by 4-connected pixels or 8-connected pixels. To state also the obvious, for Julia sets having interior prisoner points, all points in the set lie on the boundary (i. What I have is: Coordinates of all the properties. We then look at more great free online graph makers for Stem and Leaf Plots, Box and Whisker Plots, Histograms, Scatter Plots, Straight Line Graphs, Quadratics, Parabolas, Cubics, and Trigonometry Functions. called a random set. Eliminate toxic persons from your life - those who want to manipulate you, abuse you, and control you. 8 edition of Fox News's Hannity. TWO-DIMENSIONAL LAMINAR BOUNDARY LAYERS 1 Introduction. Hence: p is a boundary point of a set if and only if every neighborhood of p contains at least one point in the set and at least one point not in the set. We will be using distmesh to generate the mesh and boundary points from the unit circle. Point pattern analysis is the evaluation of the pattern, or distribution, of a set of points on a surface. MATLAB x = Anb to solve for Tn+1). The boundary point is so called if for every r>0 the open disk has non-empty intersection with both A and its complement (C-A). 1) We can use MATLAB's built-in dsolve(). The axes of a plot are a separate object in Matlab, and can be controlled by using set, get and other commands. Now I have to find the trace a boundary around this shape (around whole points, that are grouped together). The modified arctic boundary is set to move in late Monday afternoon. You can obtain a vector ts and a matrix ys with the coordinates of these points using [ts,ys] = ode45(f,[t0,t1],[y10;y20]). 5 exactly and the decision boundary that is this straight line, that's the line that separates the region where the hypothesis predicts Y equals 1 from the region where the hypothesis predicts that y is equal to zero. STBoundary (). Chapter 4733-37 Standards for Boundary Surveys. Methods of this type are initial-value techniques, i. Gunakala Department of Mathematics and Statistics The University of the West Indies St. Points matrix. Symbolic Math in Matlab. 1 day ago · It's Friday, November 15th, 2019, 52 days since House Democrats began impeachment proceedings. 0, the language-agnostic parts of the project: the notebook format, message protocol, qtconsole, notebook web application, etc. They are very easy to use. Limit points are also called accumulation points of Sor cluster points of S. A point of is an isolated point of if it has a neighborhood which does not contain any other points of. Then, this initial bounding box is partitioned into a grid of smaller cubes, and grid points near the boundary of the convex hull of the input are used as a coreset, a small set of points whose optimum bounding box approximates the optimum bounding box of the original input. %% Projected all point Q onto a plane Oxy simply by removing the 3rd coordinate. The key point is that the mean value equations follow a regular pattern and so can be defined by a short MATLAB program. Mouse pointer goes out of boundaries on right side of screen. This is the union of all skeleton segments SA()= S k ()A k=0 K S k ()A =()A kB ()A kB B Mathematically, a skeleton can be written as a series of openings and closing where k indicates k successive erosions of A by B Note that this maximum disk (which is smaller) also touches the boundary at two points. До зустрічі! (My computer tells me. The test set predictions are computed using algorithm 3. Thanks to Urs Boeringer for adding a patch to WriteSegy, to enable use of an arbitrary set of TraceHeader values. In the case of open sets, that is, sets in which each point has a neighborhood contained within the set, the boundary points do not belong to the set. In linear programming, A. Output is a set of continuous points along the boundary of object in 2D image. We have also generated figure-ground labelings for a subset of these images which may be found here We have used this data for both developing new boundary detection algorithms, and for developing a benchmark for that task. A tighter boundary, but it ignores those places in the "H" where the points dip inward. ∂nu(x) = constant. This process continues until all points up to the boundary color for the region have been tested. Algorithms for the Solution of Two-Point Boundary Value Problems. limit point · closed set · closure · interior · boundary This section introduces several ideas and words (the five above) that are among the most important and widely used in our course and in many areas of mathematics. Calculating the Efficient Frontier In this post, I'll demonstrate how to calculate and plot the efficient frontier using the expected returns and covariance matrix for a set of securities. The code is used below to draw the black boundary around the blue points. 8/18 Introduction to DistMesh for Matlab ! Algorithm (Conceptual); Step 1. Solving Boundary Value Problems. For more complicated geometries the distance function can be computed by. Breeze had a fumble recovery and a pick-six against the Trojans. Dark markers in Figure 1 stand for interior points, while light markers represent boundary points. 1) We can use MATLAB's built-in dsolve(). In the figure above, the points P,Q and R are shown. To graph the solution set of a linear inequality with two variables, first graph the boundary with a dashed or solid line depending on the inequality. As in the previous section, we recommend using the Ground Truth Labeler app in MATLAB Automated Driving System Toolbox. Cross-boundary collaboration has economic, socio-political, and environmental advantages, substantially reducing conservation costs ([ 10 ][10]). To find a numerical solution to equation (1) with finite difference methods, we first need to define a set of grid points in the domainDas follows: Choose a state step size Δx= b−a N (Nis an integer) and a time step size Δt, draw a set of horizontal and vertical lines across D, and get all intersection points (x j,t n), or. With a little more effort, the result can obviously be enhanced. If a point is found to be of fill color or of boundary color, the function does not call its neighbours and returns. Unlike IVPs, a boundary value. 05 The filter can easily be designed with the truncated-and-windowed impulse response algorithm implemented in fir1 (or using fdatool) if we use a Kaiser window. Boundary definition is - something that indicates or fixes a limit or extent. The function vdp1. In each pane, the coloured points represent training data and the decision boundaries are black lines. Moreover, our algorithm is trainable. A SIMPLE MESH GENERATOR IN MATLAB PER-OLOF PERSSON AND GILBERT STRANG∗ Abstract. Consider the same system of linear equations. com states that Matlab was originally created by Cleve Moler, a Numerical Analyst in the. M_Map is a set of mapping tools written for Matlab (it also works under Octave). est is determined by specifying the values of all its components at a single point x = a. The function bvp4c solves two-point boundary value problems for ordinary differential equations (ODEs). pngto create each of the four results below. \$\begingroup\$ The ball tree method in scikit-learn does this efficiently if the same set of points has to be searched through repeatedly. Variable Types: The only type of variable in MATLAB is an array. Let x be any point of A, and V a neighborhood of x contained in A. k = boundary(x,y) returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y). This is useful when you don't want to immediately compute an answer, or when you have a math "formula" to work on but don't know how to "process" it. While MATLAB can be run interactively from the command line, you can write a MATLAB program by composing a text file that contains the commands you want MATLAB to perform in the order in which they appear in the file. A piecewise-de ned polynomial is de ned in Matlab by a vector containing the breaks and a matrix de ning the polynomial coe cients. org If you have any comments about this dataset, please post them on this page. This boundary conditions interpolates the values from a set of supplied points in space and time uniformJumpAMI This boundary condition provides a jump condition, using the cyclicAMI condition as a base. To change the shape of the summer to rectangular, or to add additional inputs or change the sign, double click on the summer. This is an initial value problem (IVP). " The web page of the MathWorks, Inc. Dark markers in Figure 1 stand for interior points, while light markers represent boundary points. For each vertex of the mesh, compute a descriptor that defines it's location with respect to the background triangulation. k = boundary(x,y) returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y). Note the difference between a boundary point and an accumulation point. To set a variable to a single number, simply type something like z =1. On the other hand, the problem becomes a boundary-value problem if the conditions are needed for both initial and flnal points. Perhaps, you want the curve to cross (0, 0) and (2, 0). Trinidad and Tobago K. <== Figure 1 Given the coordinates in the above set, How can I get the coordinates on the red boundary. To add text to multiple points, specify x and y as vectors with equal length. The cylinder in question is the set of all points whose distance from the line is 4. Does anyone have any suggestions on how to produce a polyline boundary from a set of points, much like Matlab's 'boundary' function? It's important that the result is not 'self-crossing'. where xl represents the left endpoint of the boundary and xr represents the right endpoint of the boundary, and initial condition u(0,x) = f(x). Mat 14:22-23, Gal 6:5 MYTH REALITY. The convex hull of a geometric object (such as a point set or a polygon) is the smallest convex set containing that object. As required arguments, you must specify a binary image, the row and column coordinates of the starting point, and the direction of the first step. An example image obtained from a matlab link on a function obtaining the boundary of points shows boundaries (orange and red) of a set of 2D points. Matlab itself has a set of built-in colour scales (type: doc colormap), we provided a functionality of using them. from set of data "column" at (all row & column 1,2,3) % generate boundary of data points. \begin{align} \quad \partial A = \overline{A} \cap \overline{X \setminus A} \quad \blacksquare \end{align}. the point •Search for all triangles whose circumcircle contain the point (d> patch(x,y,'color');. Digital Image Processing ( Examples. The code is intended for educational purposes only - it contains bugs. Perceptron's Decision Boundary Plotted on a 2D plane. 23 Navy, 52-20 in South Bend. In addition, it has powerful graphics capabilities and its own programming language. A Windows version of MATLAB is available to students to put on their personal computers - see your professor or Chris Langley to find out how to get this program. MATLAB: Workshop 15 - Linear Regression in MATLAB page 5 where coeff is a variable that will capture the coefficients for the best fit equation, xdat is the x-data vector, ydat is the y-data vector, and N is the degree of the polynomial line (or curve) that you want to fit the data to. Calculating the distance between two points and Learn more about lambda, distance calculation, points, stl 3d, plot MATLAB Answers. Then pick a distance from that point. These rules are intended to be the basis for all surveys relating to the establishment or retracement of property boundaries in the state of Ohio. 25,81) gives 51 points between 0. | CommonCrawl |
Biological activities and chemical compositions of slime tracks and crude exopolysaccharides isolated from plasmodia of Physarum polycephalum and Physarella oblonga
Tuyen T.M Huynh1,
Trung V. Phung2,
Steven L. Stephenson3 &
Hanh T.M Tran1
BMC Biotechnology volume 17, Article number: 76 (2017) Cite this article
The myxomycetes derive their common name (slime molds) from the multinucleate trophic stage (plasmodium) in the life cycle, which typically produces a noticeable amount of slimy materials, some of which is normally left behind as a "slime track" as the plasmodium migrates over the surface of a particular substrate. The study reported herein apparently represents the first attempt to investigate the chemical composition and biological activities of slime tracks and the exopolysaccharides (EPS) which cover the surface of the plasmodia of Physarum polycephalum and Physarella oblonga.
Chemical analyses indicated that the slime tracks and samples of the EPS consist largely of carbohydrates, proteins and various sulphate groups. Galactose, glucose and rhamnose are the monomers of the cabohydrates present. The slime tracks of both species and the EPS of Phy. oblonga contained rhamnose, but the EPS of Ph. polycephalum had glucose as the major monomer. In term of biological activities, the slime tracks displayed no antimicrobial activity, low anticancer activity and only moderate antioxidant activity. However, EPSs from both species showed remarkable antimicrobial activities, especially toward Candida albicans (zone of inhibition ≥20 mm). Minimum inhibitory concentrations of this fungus were found to be 2560 μg/mL and 1280 μg/mL for EPS from Phy. oblonga and Ph. polycephalum, respectively. These EPS samples also showed moderate antioxidant activities. However, they both displayed cytotoxicity towards MCF-7 and HepG2 cancer cells. Notably, EPS isolated from the plasmodium of Phy. oblonga inhibited the cell growth of MCF-7 and HepG2 at the half inhibitory concentration (IC50) of 1.22 and 1.11 mg/mL, respectively.
EPS from Ph. polycephalum plasmodium could be a potential source of antifungal compounds, and EPS from Phy. oblonga could be a potential source of anticancer compounds.
Exopolysaccharides (EPSs) are macromolecules mainly composed of carbohydrate residues, which are secreted by microorganisms into the surrounding environment. EPSs can serve as centers for bacterial cell aggregation, as nutrient sources and also form a protective barrier for the cell against harsh external conditions [1]. Microbial EPSs have gained a great deal of interest due to their potential biological activities [2]. EPSs isolated from bacteria and fungi have been found to possess inhibitory activities against gram positive and negative bacteria and the H1N1 virus [3,4,5]. EPSs isolated from bacteria and fungi have a significant scavenging ability against superoxide, hydroxyl and DPPH radicals [6,7,8]. Microbial EPSs also represent a promising source of anticaner agents. Cell-bound galactan exoplysaccharide of Lactobacillus plantarum, at a concentration of 600 μg/mL, showed cytotoxic effects of about 56.34% against the human liver carcinoma (HepG2) cell line [9]. Osama et al. [5] found that EPS isolated from Bacillus marinus showed a strong antitumor property against breast cancer (MCF-7) cell lines and alveolar basal epithelia (A-549) cell lines at concentration of 100 μg/mL. In addition, EPS from Aspergillus aculeatus displayed a strong anti-proliferation effect on human cervical carcinoma cells (Hela), human breast carcinoma cells (MCF-7) and gastric carcinoma cells (MGC-803) with inhibition rates of 53.9%, 29.1% and 34.1%, respectively, at a concentration of 1000 μg/mL for 48 h [10].
The myxomycetes are a group of primitive phagotrophic eukaryotes. The myxomycete life cycle consists of two very different trophic stages—uninucleate amoebae and a distinctive multinucleate structure, the plasmodium. Under favorable conditions, the plasmodium converts into fruiting bodies [11]. Having the characteristics of both fungi and protozoans make the myxomycetes an unusual group of microorganisms. More than 100 secondary metabolites have been isolated from myxomycetes, and many of among those are novel bioactive compounds [12]. In addition to potential antimicrobial compounds such as a new glycerolipid (bahiensol) isolated from the plasmodium of Didymium bahiense [13], stigmasterol and fatty acids obtained from plasmodial extracts of Phy. oblonga [14], some remarkable anticancer compounds from myxomycetes have also been reported. Cyclic phosphatidic acid (CPA), a novel bioactive lipid isolated from Ph. polycephalum was found to have ability to inhibit cancer cell invasion and metastasis [15]. In addition, two new bisindole alkaloids isolated from the fruiting bodies of Lycogala epidendrum showed cytotoxicity against HeLa cells and Jurkat cells with relatively low IC50 values [16]. In similar research, Kehokorins A, a novel dibenzofuran isolated from the fruiting bodies of Trichia favoginea var. persimilis was found to have significantly high cytotoxicity toward HeLa cells with an IC50 value of 1.5 μg/mL [17].
Among the myxomycetes, those members of the Physarales (e.g., Physarum polycephalum) often form large plasmodia and are relatively easy to culture on synthetic media. When cultured in liquid media, microplasmodia are formed instead of plasmodia. Both microplasmodia and plasmodium lack cell walls. On solid media, the plasmodium is a slimy mass of protolasm which is capable of moving around. In the absence of cell walls, the slime sheath represents the only protection from injury and the environment, and material from the slime sheath is left behind as a slime track as the plasmodium migrates over the surface of a given substrate [18]. There have been a few studies of the chemical composition of EPSs isolated from microplasmodia in liquid culture, but there appear to be no studies of the properties of EPS and slime tracks isolated from solid cultures of myxomycete plasmodia. The chemical characteristics of the EPSs seem to strongly depend upon the culture media used. McCormick et al. [19] found that Ph. polycephalum microplasmodial cultures started to produce more EPS when the cells were converted into spherules and reported that the EPS is a sulfated galactose polymer containing trace amounts of rhamnose. Simon and Henney [20] reported that the EPS was a glycoprotein. More recently, Sperl [21] found the EPSs produced by Ph. polycephalum consisted of two galactans with different ratios of phosphorous and sulfur. To the best of our knowledge, there has been only one report on the biological activity of myxomycete EPS, and this was published by Asgari and Henney [22]. Their research found that the EPS secrected by the microplasmodia of Physarum flavicomum in liquid culture was composed mainly of glycoprotein and could inhibit the cell growth and division of Bacillus subtilis.
Given the fact that microbial EPSs have been found to have potential biological activities and myxomycete plasmodia produce a noticeable amount of slimy materials, it seemed worthwhile to evaluate the biological activities (antimicrobial, antioxidant and anticancer activities) and to determine the chemical characteristics of slime tracks and EPS samples isolated from Physarella oblonga and Physarum polycephalum. These two species were chosen because of their sample availability and their ease to culture.
EPS production of Phy. oblonga and Ph. polycephalum
The medium used for cultivation of myxomycete plasmodia was adapted from the research of Henney and Henney [23]. We attempted to replace glucose in the original medium with other carbon sources (e.g., oyster mushroom powder [since the oyster mushroom is one of the favorite food sources of some myxomycete plasmodia in the nature], rice bran and galactose). However, preliminary results showed that Ph. polycephalum preferred glucose and Phy. oblonga grew better in water agar without glucose (Phy. oblonga has agar hydrolytic activity). As such, for slime track and EPS production, typical plasmodia of Ph. polycephalum and Phy. oblonga were transferred to nutrient and water agar, respectively, and incubated under dark condition at 25 °C for 7 days (Fig 1). The amounts of slime track material and EPSs obtained are presented in Table 1. The amounts of slime track material obtained from both species were higher than those of EPSs still in contact with the plasmodium.
Plasmodium and slime track
Table 1 Amounts of slime track and EPSs isolated from cultures of Phy. oblonga and Ph. polycephalum
Chemical composition of the slime track and EPS samples from Phy. oblonga and Ph. polycephalum
The carbohydrate, protein and sulfate contents of EPSs are listed in Table 2. The total carbohydrate content of the samples varied from 55 to 82% according to the phenol-sulfuric acid method. Sulfated groups and protein made up small proportions (Table 2). In general, the EPS and slime track of Phy. oblonga had greater amounts of carbohydrate compared to those of Ph. polycephalum. However, the samples from the latter species had higher percentages of sulfate content. When comparisons are made between the slime track and EPS samples of each species, the amounts of carbohydrates of the slime tracks were higher than that of the EPS, and this applied for both species.
Table 2 Total carbohydrate, protein and sulfate contents of the slime track and EPSs
The slime tracks and EPSs were depolymerized by using the TFA hydrolysis method. The monosaccharide compositions of the EPSs produced by Phy. oblonga and P. polycephalum were detected by TLC, and their quantities were measured by GC-FID analysis. The data obtained data are displayed in Fig. 2 and Table 3.
Chromatograms of GC analysis of the monosaccharide composition of slime tracks and EPSs. The chromatogram of Phy. oblonga EPS (a), Phy. oblonga slime trạck (b), Ph. polycephalum EPS (c), Ph. polycephalum slime track (d) and standard sugars (e) was developed using values of GC. Galactose (Gal), glucose (Glc), rhamnose (Rha) were used as standard sugars. Inositol (IS) was used as internal reference
Table 3 Monomer compositions of crude EPSs obtained from cultures of Ph. polycephalum and Phy. oblonga
Table 3 showed that the slime track and EPS samples contained glucose, galactose and rhamnose and rhamnose was the major monosaccharide of the EPS from Phy. oblonga and the slime tracks of both species, for which it accounted for 66.37%, 62.58% and 71.46%, respectively. In contrast, EPS from Ph. polycephalum was composed mainly of glucose (50.87%).
The present study is the first to determine the monomer compositions of EPSs isolated from Phy. oblonga. However, with Ph. polycephalum, the results reported have varied from one study to another. Extracellular slime from broth cultures (containing glucose as the carbon source) of Ph. polycephalum was found to contain galactose, sulfate, and trace amounts of rhamnose [19]. However, Simon and Henny [20] found that slime production of Ph. rigidum, Ph. flavicomun and Ph. polycephalum contained a single sugar component of galactose when cultured on media containing glucose as the carbon source. Similar results for Ph. polycephalum were also reported by Farr et al. [24]. In general, monomer compositions and their ratios in microbial EPSs are influenced by the carbon source in the culture medium [25, 26]. However, with the myxomycetes, there would appear to be some other factors involved. Ph. polycephalum in our study was cultured on a glucose-based solid medium, but the monomer composition was completely different from what has been reported in other studies. It is possible that plasmodia produce different kinds of slime material as compared to microplasmodia.
Antimicrobial activity of EPSs against pathogens
Antimicrobial activities of the EPS and slime track samples as determined by the agar diffusion method are presented in Table 4.
Table 4 Antimicrobial activities of EPS and slime track samples from Phy. oblonga and Ph. polycephalum
The results indicate that there were significant differences in antimicrobial activities among the samples. The slime tracks of both two species did not exhibit any inhibitory activity against the strains of microbes tested. This could be explained by the theory that myxomycete plasmodia leave slime tracks behind when migrating simply to mark the area which has been exploited for food resources [12]. In contrast, isolated EPS from plasmodia showed promising activities towards S. aureus and C. albicans, whereas C. albicans was found to be the most susceptible to the EPSs from both species (zone of inhibition ≥20 mm) (Table 4). The antimicrobial activities of the EPSs which are still in contact with the plasmodia would be explained by the possibility that these compounds would protect the plasmodia from external factors, including other microorganisms.
The results obtained for antimicrobial activities in the present study agree with those reported in some previous studies relating to the antimicrobial property of microbial EPSs. Asgari and Henney [22] found that the cell growth and division of Bacillus subtilis (a gram positive bacterium) was inhibited by slime secreted by Ph. flavicomun. The degradation of the cell wall caused morphological changes such as swollen cells or cell lysis. Li et al. [27] found that EPS from Lactobacillus plantarum exhibited inhibitory activities against S. aureus and C. albicans. EPS from Enterobacter faecalis showed significantly high activity toward C. albicans [28].
The MIC values of the EPS samples from Ph. polycephalum and Phy. oblonga were studied against C. albicans and S. aureus. The data obtained are shown in Table 5.
Table 5 Minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) or minimum fungicidal concentration (MFC) of EPS and slime track samples from Phy. oblonga and Ph. polycephalum
With respect to their ability against S. aureus, the MIC value of the Ph. polycephalum EPS was almost the same with that of Phy. oblonga. However, EPS from Ph. polycephalum showed much better antifungal activity, since the MIC value (1280 μg/mL) of the EPS from this species against C. albicans was just about a half that from Phy. oblonga (2560 μg/mL) and twice when compared with the standard antifungal drug (640 μg/mL) (Table 5). However, this EPS is not yet purified. Nevertheless, EPS from Ph. polycephalum appears to have the potential for treatment of C. albicans. However, it should be noted that the MBC or MFC values are higher than the MIC values. This suggests that the compound would easily inhibit microbial growth at low concentrations, but leading to actual microbial death would require higher doses.
Antioxidant activity
In this part of our study, in vitro antioxidant activities of the EPS samples with the concentration range of 0–6.0 mg/mL from Phy. oblonga and Ph. polycephalum were determined by DPPH assay and compared with that of ascorbic acid. Figure 3 illustrates that there was not a major difference observed between radical scavenging ability of slime track and EPS extracts from Phy. oblonga and Ph. polycephalum at an initial concentration 1.0 mg/mL. However, at the higher sample concentrations, EPS isolated from a plasmodium showed higher radical scavenging ability than EPS isolated from the slime track material in both species. EPS from Phy. oblonga showed maximum DPPH scavenging activity (80.41%) at a concentration of 6 mg/ml, whereas that of ascorbic acid was 99.56%.
Antioxidant activities of EPSs of Phy. oblonga and Ph. polycephalum in vitro. POS and POP are slime track and EPS samples from Phy. oblonga, whereas PPS and PPP are slime track and EPS samples from Ph. polycephalum
The EC50 is the concentration of antioxidant needed to obtain a 50% antioxidant effect, and is typically used as a parameter to express or compare the antioxidant capacity of different compounds. Lower EC50 values show a higher antioxidant activity [28]. EC50 values of the EPS samples and ascorbic acid are displayed in Table 6.
Table 6 EC50 values of the slime track and EPS samples from Phy. oblonga and Ph. polycephalum
According to the EC50 data, the slime track and EPS samples from Phy. oblonga showed higher scavenging abilities than those from Ph. polycephalum. These data also indicated that EPSs and slime tracks from Ph. polycephalum and Phy. oblonga have comparable antioxidant capacity with some common edible mushrooms [29,29,32]. However, the antioxidant activities these samples were far smaller than ascorbic acid.
In vitro cancer cell line cytotoxicity assays
In this experiment, crude EPS and slime track samples from Phy. oblonga and Ph. polycephalum were subjected to in vitro cytotoxicity SRB assay with fibroblast and cancer cell lines. Cells were treated with EPSs ranging from 0.25 to 1.5 mg/mL and incubated for 48 h, and then the cell inhibitory rate was measured by using a spectrophotometer. The data obtained data are shown in Fig. 4.
Growth inhibition of MCF-7 (a) and HepG2 (b) cancer cell lines by treating with crude EPS extracts from Phy. oblonga and Ph. polycephalum in comparison with camptothecin standard (CPT) with the concentration of 0.005 μg/mL via SRB assay. POS, POP, PPS and PPP were represented in the Phy. oblonga slime track, Phy. oblonga EPS, Ph. polycephalum slime track and Ph. polycephalum EPS
The results indicate that EPSs possess different levels with respect to their toxicity effects against the cancer cell lines. At low concentrations (0.25–0.5 mg/mL), none of negatives effect on the proliferation of cancer cell lines were observed. However, EPSs were found to show anti-proliferation when the concentration increased from 0.75 to 1.5 mg/mL.
EPSs isolated from a plasmodium showed higher inhibition rates against the cancer cell lines than EPSs isolated from the slime tracks. Most notably, EPS from Phy. oblonga showed significantly higher inhibitory activities against MCF-7 and HepG2 when compared to that of Ph. polycephalum.
The half inhibitory concentrations (IC50) of the EPS sample from Phy. oblonga toward MCF-7 and HepG2 were found as 1.22 and 1.11 mg/mL, respectively. However, these activities are not comparable to the positive control (camptothecin).
Microbial EPS have been found to have anti-proliferation effects against HepG2 and MCF-7 cells. Wang et al. [9] reported that at the concentration of 600 μg/ml, purified EPS from Lactobacillus plantarum could suppress proliferation of HepG2 cells by 56.34% when treated for 72 h. In addition, Osama et al. [5] found that (IC50) of purified EPS from Bacillus marinus in the MCF-7 was 118.0 μg/mL after 48 h.
Culturing myxomycete plasmodia is challenging, but it is possible with the right medium components (e.g., carbon source) selectively used for each species. For example, agar is more suitable for cultivation of Phy. oblonga, but glucose is a better carbon source for Ph. polycephalum.
The slime track and EPS samples from Ph. polycephalum and Phy. oblonga were found to consist of glucose, galactose and rhamnose. Among these, rhamnose was the major monomer of the EPS from Phy. oblonga and the slime tracks from both species, but EPS from Ph. polycephalum contained mainly glucose. This difference may be because of the use of different carbon sources or it could be simply just because of the unique nature of each species. However, since monomer composition is one of the major factors other than molecular weight, structure of the polymeric backbone and degree of branching which decide the biological activities of microbial EPSs. Thus, when one tries to enhance the amount of EPS production by altering medium composition and cultivation condition, the effect those conditions on EPS compositions and subsequently EPS activities should be taken in consideration along with the amount of EPS.
The slime tracks from both two species showed no antimicrobial activity, low anticancer activity, and moderate antioxidant activity. These results support the theory that function of the slime tracks of myxomycetes relates more to marking the area which has been exploited for food resources as the plasmodia migrate from one area to another.
On the other hand, EPS samples from the two species displayed significant inhibitory activities against C. albicans and S. aureus, and both of them had anticancer activities against MCF-7 and HepG2. More importantly, EPS from Phy. oblonga was found to have significantly higher inhibitory activities. The IC50 values of this sample on MCF-7 and HepG2 were 1.22 mg/mL and 1.11 mg/mL, respectively. The differences in biological activities of the slime track and the EPS which is still in contact with the plasmodia suggest that they probably have different functions for the particular species of myxomycetes. EPS purification should be considered in future works to enhance the biological activities.
Myxomycetes are a unique group of microorganisms which could be a potential source of bioactive compounds.
The strain of Ph. polycephalum used in the present study was obtained as a sclerotium from the Carolina Biological Supply Company (Burlington, North Carolina, USA). The Phy. oblonga plasmodium was generated from a fruiting bodies collected from a moist chamber culture prepared from forest floor litter.
Nutrient agar was used for the plasmodial culture of Ph. polycephalum (1.0 L of the nutrient agar contained 100 mL of a basal salt solution, 5.0 g of glucose, 2.5 g of yeast extract, 20.0 g of agar, and 900 mL of distilled water adjusted to pH 5.5). The basal salt solution contained 29.78 g of citric acid, 33.10 g of K2HPO4, 2.50 g of NaCl, 1.00 g of MgSO4.7H2O, 0.50 g of CaCl2.2H2O, and 1000 mL distilled water [23, 33].
Water agar was used for the Ph. polycephalum sclerotium and spore germination and plasmodial culture of Phy. oblonga (1.0 L of water agar consisted of 15 g of agar and 1000 mL of water).
Pathogenic microorganisms, including Bacillus cereus VTCCB 1005, Escherichia coli JM 109, Samonella typhi ATCC 19430, Staphylococcus aureus ATCC 43300 and Candida albicans ATCC 141 were used. The bacteria were grown on LB agar medium (1.0 L of LB agar containing 10.0 gof NaCl; 5.0 g of yeast extract; 10.0 g of peptone; 20.0 g of agar and 1000 mL of distilled water adjusted to pH 7.0) and the fungus was grown on Saboroud agar medium (1.0 L of Saboroud agar containing 40.0 g of glucose; 10.0 g of peptone; 20.0 g of agar and 1000 mL of distilled water adjusted to pH 5.5).
Cancer cell lines (breast carcinoma MCF-7 and liver carcinoma HepG2 cells) and fibroblast cells were grown in DMEM 10% FBS medium and mantained at 37 ° C in a 5% CO2 incubator.
Plasmodial culture
Spore gernination, sclerotium activation and inoculum preparations of Ph. polycephalum and Phy. oblonga were carried out following Tran et al. [34, 35]. For plasmodial cultures and EPS production, a small piece of agar a containing actively growing plasmodium covered oatmeal flakes was transferred to a plate containing water agar (for Phy. oblonga) and nutrient agar (for Ph. polycephalum). The plasmodial cultures were incubated in the dark at 25 °C for 5 days, after which the slime tracks and plasmodia were collected.
Isolation of slime tracks and EPSs from the plasmodial cultures
Slime tracks were simply scraped off the surfaces of the plasmodial cultures. For EPS isolation, fresh plasmodia were carefully collected in 10 mL of sterile distilled water without discrupting the plasmodium to avoid extracting the intracellular components. The sample was gentely vortexed and centrifuged at 9000 rpm, 4 °C for 25 min [5], the supernatant was transferred into another tube; chilled ethanol was added in which the ratio of ethanol to the sample was 3:1 (v/v). The tube was mixed well and set at 4 °C. The following day, the mixture was centrifuged at the conditions as described above, and the pellet was collected as EPS. Both EPS and slime track samples were dried at 60 °C, and this material served as dry crude EPS. The crude EPS was then dissolved in 10% (w/v) trichloroacetic acid to remove proteins [5]. The supernatant was precipitated with chilled ethanol and centrifuged at the conditions described above. The pellet, referred to as partially purified EPS, was dried at 60 °C and stored at 4 °C. Partially purified EPS was used for activity assessment and structural analysis.
The total carbohydrate and protein content of the slime track and crude EPS samples were analyzed by using the phenol sulphuric acid method [36] and the Bradford method [37], respectively. The sulfate group content was analyzed with the barium chloride gelatin method [38].
Monosaccharide composition analysis by TLC
Ten mg of partially purified EPSs was hydrolyzed in 1.0 mL 3 M triflouroacetic acid (100 °C, 8 h). After hydrolysis, TFA in the sample was removed by decompression evaporation. The hydrolyzed EPSs were re-dissolved in ultra-pure water. The supernatant was obtained by centrifugation at 13000 rpm for 20 min.
The hydrolyses were applied to silica gel plates using a developing solvent of butanol: acetone:pyridine: H2O [10:10:5:5 (v/v/v/v)]. Galactose, glucose and rhamnose were used as the standards. After TLC plate development, carbohydrate was visualized by spraying TLC plates with 1% aniline:1% diphenylamine:85% H3PO4 [5:5:1 (v/v/v)] and heating at 100 °C for 5 min to reveal the colored spots [39].
Quantification of monomers by GC
Samples were prepared according to the Kakasy method [40] with some modifications. The hydrolyses were dissolved in pyridine containing 2.5% hydroxylamine hydrochloride; after inositol (as an internal reference) was added to the solution, it was allowed to react at 80 °C for 30 min and cooled down to room temperature. Hexamethyldisilazane (HMDS) and TFA were then added, the mixture was allowed to react for further 30 min at 45 °C and cooled down again. One mL of silylate derivative was subjected to a DB-1 column (30 m × 0.35 μm × 0.25 μm) of GC (Aligent 6890 N) fitted with a flame ionization detector (FID). The operating conditions were as follow: the N2 carrier gas rate was 1.0 mL/min; injection and detector temperatures were 280 °C and 300 °C, respectively; the column temperature was started at 60 °C for 1 min, then increased to 210 °C at the rate of 20 °C/min and maintained there for 5 min, and finally increased to 300 °C at the rate of 100 °C/min and maintained there for 10 min. Standard sugars (galactose, glucose, lactose, rhamnose and sucrose) with inositol as the internal standard were prepared and subjected to GC analysis separately in the same way.
Well diffusion method
Antimicrobial activity of the EPS and slime track samples was determined using the agar well diffusion method [4]. A volume of 100 mL of the cell suspension of the pathogenic culture (108 CFU/mL) was spread on the surface of a LB/Sabouraud dextrose agar plate using a sterile cotton swab. An amount of 100μL of the sample (5 mg/mL) was introduced into each well (8 mm in diameter) in the plate. The positive antibacterial control used was erythromycin (1.0 mg/mL) and the antifungal control was ketoconazole (1.0 mg/mL). Sterile distilled water was used as the negative control. The plates were incubated for 37 °C in 8 h. Antimicrobial activity was determined by measuring the diameter of the clear inhibition zone around each well.
Minimum inhibitory concentration (MIC)
MIC is the lowest concentration of an agent that inhibits the visible growth of a microorganism after overnight incubation [41]. MIC was carried out on the microorganisms that showed sensitivity to the samples and was done using the broth dilution method. MIC values were determined according Sen and Batra [42] with some modification. EPS/slime track samples were prepared with the concentration range of 0 to 20,480 μg/mL. One mL of sterile culture medium was placed in a sterile test tube containing 100 μl of microorganism suspension (105 CFU/mL). Then, 1.0 ml of the EPS extract with a certain concentration was added to the mixture and incubated at 37 °C for 24 h. After that, turbidity of the mixture was measured by using a spectrophotometer at a wavelength of 600, OD600 value less than 0.01 was recorded as the MIC. Minimum bactericidal concentration (MBC) and minimum fungicidal concentration (MFC) were defined as the lowest concentration of extract which showed no evidence of microbial growth on nutrient agar plates. MBC and MFC were investigated to confirm the MIC results.
The radical scavenging activity of slime track/EPS samples was measured with the use of the DPPH assay described by Monaki et al. [43] with some modifications. In brief, 80 μL of the sample with a certain concentration ranging from 0 to 6.0 mg/mL was added to 120 μl of 0.02 mg/mL DPPH prepared in a methanol solution. The mixture was mixed gently and incubated at room teperature for 30 min in the dark. Then, the absorbance was measured at 517 nm and the inhibition was calculated using the following formula
$$ \mathrm{Scavenging}\ \mathrm{rate}\left(\%\right)=\left[\left({\mathrm{A}}_0\hbox{-} {\mathrm{A}}_1\right)/{\mathrm{A}}_0\right]\mathrm{x}100 $$
where A1 was absorbance of the sample and A0 was the absorbance of the control [44]. The antioxidant ability of the sample was expressed as an IC50 value, which was defined as the concentration of sample that inhibits the formation of the DPPH radical by 50%. An equal amount of methanol was added to the negative control, and ascorbic acid was used as the positive control.
Anticancer activity
The cytotoxicity of isolated EPS was determined using a sulforhodamine-B (SRB) assay [45]. The cancer cell lines (105 cells/mL) were seeded in a 96-well microtiter plate and cultivated under standard conditions (5% CO2 at 37° C). Stock solution of the EPS samples were prepared in distilled water and serially diluted with sterile medium to obtain the desired concentrations. One hundred μL of the sample was then added to each well and incubated for another 48 h for cell attachment. Cells were fixed by gently layering of cold 50% TCA and incubated at 4 °C for 1 h. The plate was then washed five times with distilled water and air-dried for 12 h at room temperature to avoid cell monolayer detachment. Cells were stained al least 15 min with 0.2% SRB dissolved in 1.0% acetic acid and subsequently washed 5 times with 1.0% acetic acid to remove unbound proteins. The plate was air-dried. A tris-base solution was added to the wells to solubilize the dye. The plates were shaken gently for 10 min on a mechanical shaker. Distilled water and camptothecin (0.01 μg/mL) were used as negative and positive controls, respectively. A blank contained culture medium without cells. The optical density (OD) of the plate wells was recorded using a microplate reader at 560 nm. Growth inhibition was calculated as.
$$ \%I=\left(1-\frac{\mathrm{A}}{\mathrm{B}}\right)\ x\ 100\% $$
where A and B represent the absorbance of the test sample and the control [45].
All experiments were done in triplicate and all data are expressed as mean ± standard eviation.
CFU:
Colony forming unit
DPPH:
2,2-diphenyl-1-picrylhydrazyl
Exopolysaccharides
GC:
IC50:
Half inhibitory concentration
MBC:
MFC:
Minimum fungicidal concentration
MIC:
Minimum inhibitory concentration
TCA:
Trichloroacetic acid
TFA:
Triflouroacetic acid
TLC:
Nwodo UU, Green E, Okoh AI. Bacterial exopolysaccharides: functionality and prospects. Int J Mol Sci. 2012;13:14002–15.
Singha TK. Microbial extracellular polymeric substances: production, isolation and applications. IOSR J Pharm. 2012;2:276–81.
Orsod M, Joseph M, Huyop F. Characterization of exopolysaccharides produced by Bacillus cereus and Brachybacterium sp. isolated from Asian Sea bass (Lates calcarifer). Malays. J Microbiol. 2012;8:170–4.
Mahendran S, Saravanan S, Vijayabaskar P, Anandapandian KTK, Shankar T. Antibacterial potential of microbial exopolysaccharide from Ganoderma lucidium and Lysinibacillus fusiformis. Int J Recent Sci Res. 2013;4:501–5.
Osama H, El S, El Kader A, El-Sayed M, Salem HM, Manal GM, Asker MS, Saher SM. Isolation, characterization and biological activities of exopolysaccharide produced by Bacillus marinus. Der Pharma Chem. 2015;7:200–8.
Thetsrimuang C, Khammuang S, Chiablaem K, Srisomsap C, Sarnthima R. Antioxidant properties and cytotoxicity of crude polysaccharides from Lentinus polychrous Lév. Food Chem. 2011;128:634–9.
Sun X, Hao L, Ma H, Li T, Zheng L, Ma Z, Zhai G, Wang L, Gao S, Liu X, Jia M, Jia M. Extraction and in vitro antioxidant activity of exopolysaccharide by Pleurotus eryngii SI-02. Braz J Microbiol. 2013;44:1081–8.
Sharma SK. Optimized extraction and antioxidant activities of polysaccharides from two entomogenous fungi. J Bioanal Biomed. 2015;7:180–7.
Wang K, Li W, Rui X, Chen X, Jiang M, Dong M. Structural characterization and bioactivity of released exopolysaccharides from Lactobacillus plantarum 70810. Int J Biol Macromol. 2014;67:71–8.
Li H, Liu X, Xu Y, Wang X, Zhu H. Structure and antitumor activity of the extracellular polysaccharides from Aspergillus aculeatus via apoptosis and cell cycle arrest. Glycoconj J. 2016;33:975–84.
Stephenson SL, Stempen H. Myxomycetes: a handbook of slime molds. Oregon: Timber Press; 1994.
Dembitsky VM, Řezanka T, Spížek J, Hanuš LO. Secondary metabolites of slime molds. Phytochemistry. 2005;66(7):747–69.
Misono Y, Ishibashi M, Ito A. Bahiensol, a new glycerolipid from a cultured myxomycete Didymium bahiense var. bahiense. Chem Pharm Bull. 2003;51(5):612–3.
Herrera NA, Rojas C, Franco-Molano AE, Stephenson SL, Echeverri F. Physarella oblonga centered bioassays for testing the biological activity of myxomycetes. Mycosphere. 2011;2(6):637–44.
Murakami-Murofushi K, Uchiyama A, Fujiwara Y, Kobayashi T, Kobayashi S, Mukai M, Murofushi H, Tigyi G. Biological functions of a novel lipid mediator, cyclic phosphatidic acid. Biochim Biophys Acta Mol Cell Biol Lipids. 2002;1582(1):1–7.
Hosoya T, Yamamoto Y, Uehara Y, Hayashi M, Komiyama K, Ishibashi M. New cytotoxic bisindole alkaloids with protein tyrosine kinase inhibitory activity from a myxomycete Lycogala epidendrum. Bioorg Med Chem Lett. 2005;15(11):2776–80.
Kaniwa K, Ohtsuki T, Yamamoto Y, Ishibashi M, Kehokorins A–C. Novel cytotoxic dibenzofurans isolated from the myxomycete Trichia favoginea Var. persimilis. Tetrahedron Lett. 2006;47(10):1505–8.
Clark J, Haskins EF. Myxomycete plasmodial biology: a review. Mycosphere. 2015;6:643–58.
McCormick JJ, Blomquis JC, Rusch HP. Isolation and characterization of an extracellular polysaccharide from Physarum polycephalum. J Bacteriol. 1970;104:1110–8.
Simon HL, Henney HR. Chemical composition of slime from three species of myxomycetes. FEBS Lett. 1970;7:80–2.
Sperl TG. Isolation and characterization of the Physarum polycephalum extracellular polysaccharides. Food Biotechnol. 1990;4(2):663–8.
Asgari M, Henney HR Jr. Inhibition of growth and cell wall morphogenesis of Bacillus subtilis by extracellular slime produced by Physarum flavicomum. Cytobios. 1977;20:163–77.
Henney JHR, Henney MR. Nutritional requirements for the growth in pure culture of the myxomycete Physarum rigidum and related species. Microbiology. 1968;53(3):333–9.
Farr D, Amster H, Horisberger M. Composition and partial structure of the extracellular polysaccharide of Physarum polycephalum. Carbohydr Res. 1972;24:207–9.
Gamar-Nourani L, Blondeau K, Simonet JM. Influence of culture conditions on exopolysaccharide production by Lactobacillus rhamnosus strain C83. J Appl Microbiol. 1998;85:664–72.
Degeest B, Vaningelgem F, Vuyst LD. Microbial physiology, fermentation kinetics, and process engineering of heteropolysaccharide production by lactic acid bacteria. Int Dairy J. 2011;11:747–57.
Li S, Huang R, Shah NP, Tao X, Xiong Y, Wei H. Antioxidant and antibacterial activities of exopolysaccharides from Bifidobacterium bifidum WBIN03 and Lactobacillus plantarum R315. J Dairy Sci. 2014;97:7334–43.
Kiran GS, Priyadharshini S, Anitha K, Gnanamani E, Selvin J. Characterization of an exopolysaccharide from probiont Enterobacter faecalis MSI12 and its effect on the disruption of Candida albicans biofilm. RSC Adv. 2015;5:71573–85.
Olugbami JO, Gbadegesin MA, Odunola OA. In vitro free radical scavenging and antioxidant properties of ethanol extract of Terminalia glaucescens. Pharmacognosy Res. 2015;7:49–56.
Vamanu E. Biological activities of the polysaccharides produced in submerged culture of two edible Pleurotus ostreatus mushrooms. Biomed Res Int. 2012;2012(565974):8.
He P, Geng L, Mao D, Production XC. Characterization and antioxidant activity of exopolysaccharides from submerged culture of Morchella crassipes. Bioprocess Biosyst Eng. 2012;35:1325–32.
Huang QL, Siu KC, Wang WQ, Cheung YC, Fractionation WJY. Characterization and antioxidant activity of exopolysaccharides from fermentation broth of a Cordyceps sinensis fungus. Process Biochem. 2013;48:380–6.
Tran HTM, Stephenson SL, Chen Z, Pollock ED, Goggin FL. Evaluating the potential use of myxomycetes as a source of lipids for biodiesel production. Bioresour Technol. 2012;123:386–9.
Tran H, Stephenson S, Pollock E. Evaluation of Physarum polycephalum plasmodial growth and lipid production using rice bran as a carbon source. BMC Biotechnol. 2015;15:67.
Dubois M, Gilles KA, Hamilton JK, Rebers PA, Smith F. Colorimetric method for determination of sugars and related substances. Anal Chem. 1956;28:350–6.
Bradford MM. A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem. 1976;72:248–54.
Dodgson KS, Price RG. A note on the determination of the ester sulphate content of sulphated polysaccharides. Biochem J. 1962;84:106–10.
Kumar CSC, Chandraju S, Mythil R, Ahmad T, Gowda NM. Extraction of sugars from black gram peels by reversed-phase liquid chromatography systems and identification by TLC and mass analysis. Adv Anal Chem. 2012;2:32–6.
Kakasy A, Füzfai Z, Kursinszki L, Molnár-Perl I, Lemberkovics É. Analysis of non-volatile constituents in Dracocephalum species by HPLC and GC-MS. Chromatographia. 2006;63(13):17–22.
Wiegand I, Hilper K, Hancock RE. Agar and broth dilution methods to determine the minimal inhibitory concentration (MIC) of antimicrobial substances. Nat Protoc. 2008;3:163–75.
Sen A, Batra A. Evaluation of antimicrobial activity of different solvent extracts of medicinal plant: Melia azedarach L. Int J Curr Pharm Res. 2012;4:67–73.
Osińska-Jaroszuk M, Jaszek M, Mizerska-Dudk M, Błachowicz A, Rejczak TP, Janusz G, Wydrych J, Polak J, Jarosz-Wilkołazka A, Kandefer-Szerszeń M. Exopolysaccharide from Ganoderma applanatum as a promising bioactive compound with cytostatic and antibacterial properties. Biomed Res Int. 2014;2014(743812):10.
Ma YP, Mao DB, Geng LJ, Zhang WY, Wang Z, Xu CP. Production optimization, molecular characterization and biological activities of exopolysaccharides from Xylaria nigripes. Chem Biochem Eng Q. 2013;27:177–84.
Akindele AJ, Wani ZA, Sharma S, Mahajan G, Satt NK, Adeyemi OO, Mondhe DM, Saxena AK. Vitro and in vivo anticancer activity of root extracts of Sansevieria liberica Gerome and Labroy (Agavaceae). Evid Based Complement Alternat Med. 2015;2015:560404.
Vichai V, Kirtikara K, Sulforhodamine B. Colorimetric assay for cytotoxicity screening. Nat Protoc. 2006;1:1112–6.
We would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED for funding this work.
This research was funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 106-NN.04–2015.16.
The datasets used and/or analysed during the current study available from the corresponding author on reasonable request.
School of Biotechnology, International University, VNU-HCM, Block 6, LinhTrung Ward, Thu Duc District, Ho Chi Minh City, 70000, Vietnam
Tuyen T.M Huynh & Hanh T.M Tran
Institute of Chemical Technology, Vietnam Academy of Science and Technology, 01-Mac Dinh Chi Street, District 1, Ho Chi Minh City, 70000, Vietnam
Trung V. Phung
Department of Biological Sciences, University of Arkansas, Fayetteville, AR, 72701, USA
Steven L. Stephenson
Tuyen T.M Huynh
Hanh T.M Tran
TH carried out the experiments and data analysis, TP handled the GC analysis, HT helped with the culture techniques and the manuscript preparation, and SS edited the manuscript prior to submission. All the authors read and approved the manuscript for submission.
Correspondence to Hanh T.M Tran.
Competing interest
The authors declare that they have no competing interests with respect to any aspect of this manuscript or the project reported in the manuscript.
Huynh, T.T., Phung, T.V., Stephenson, S.L. et al. Biological activities and chemical compositions of slime tracks and crude exopolysaccharides isolated from plasmodia of Physarum polycephalum and Physarella oblonga . BMC Biotechnol 17, 76 (2017). https://doi.org/10.1186/s12896-017-0398-6
HepG2
Mcf-7
Monomer composition
Slime molds | CommonCrawl |
Trace: • uncertainty_principle
advanced_notions:uncertainty_principle
We can generate a wave in a long rope by shaking it rhythmically up and down.
Now, if someone asks us: "Where precisely is the wave?" we wouldn't have a good answer since the wave is spread out. In contrast, if we get asked: "What's the wavelength of the wave?" we could easily answer this question: "It's around 6cm".
We can also generate a different kind of wave in a rope by jerking it only once.
This way we get a narrow bump that travels down the line. Now, we could easily answer the question: "Where precisely is the wave?" but we would have a hard time answer the question "What's the wavelength of the wave?" since the wave isn't periodic and it is completely unclear how we could assign a wavelength to it.
Similarly, we can generate any type of wave between these two edge cases. However, there is always a tradeoff. The more precise the position of the wave is, the less precise its wavelength becomes and vice versa.
This is true for any wave phenomena and since in quantum mechanics we describe particle using waves, it also applies here. In quantum mechanics, the wavelength is in a direct relationship to its momentum. The larger the momentum, the smaller the wavelength of the wave that describes the particle. A spread in wavelength, therefore, corresponds to a spread in momentum. As a result, we can derive an uncertainty relation that tells us:
The more precisely we determine the location of a particle, the less precisely we can determine its momentum and vice versa. The thing is that a localized wave bump can be thought of as a superposition of dozens of other waves with well-defined wave-lengths1):
In this sense, such a localized bump does not have one specific wavelength but is a superposition of many.
Heisenberg sometimes explained the uncertainty principle as a problem of making measurements. His most well-known thought experiment involved photographing an electron. To take the picture, a scientist might bounce a light particle off the electron's surface. That would reveal its position, but it would also impart energy to the electron, causing it to move. Learning about the electron's position would create uncertainty in its velocity; and the act of measurement would produce the uncertainty needed to satisfy the principle.
https://www.scientificamerican.com/article/common-interpretation-of-heisenbergs-uncertainty-principle-is-proven-false/
Bohr, for his part, explained uncertainty by pointing out that answering certain questions necessitates not answering others. To measure position, we need a stationary measuring object, like a fixed photographic plate. This plate defines a fixed frame of reference. To measure velocity, by contrast, we need an apparatus that allows for some recoil, and hence moveable parts. This experiment requires a movable frame. Testing one therefore means not testing the other. https://opinionator.blogs.nytimes.com/2013/07/21/nothing-to-see-here-demoting-the-uncertainty-principle/
Uncertainty principle? It's not about quantum. (Video) by 3Blue1Brown is a perfect explanation of the uncertainty principle.
Whenever we measure an observable in quantum mechanics, we get a precise answer. However, if repeat our measurement on equally prepared systems, we do not always get exactly the same result. Instead, the results are spread around some central value.
While we can prepare our systems such that a repeated measurement always yields almost exactly the same value, there is a price we have to pay for that: the measurements of some other observable will be wildly scattered.
The most famous example is the position and momentum uncertainty: $$ \sigma_x \sigma_p \geq \hbar/2,$$ where $\hbar$ denotes the reduced Planck constant and the $\sigma_x$ means the standard variation if we perform multiple measurements of the position $x$ for equally prepared particles. Analogously, $\sigma_p$ denotes the standard variation if we measure the momentum $p$.
Hence, if we try to know the position $x$ very accurately, which means $\sigma_x \ll \hbar/2$, then our knowledge about the momentum becomes much worse: $\sigma_p \gg \hbar/2 $. This follows directly from the inequality $ \sigma_x \sigma_p \geq \hbar/2.$
We have already noted that a wave with a single sinusoidal or complex exponential component extends over all space and time, and that, if we wish to limit its extent, further sinusoidal components spanning a range of frequencies must, as in Section 13.3, be superposed. As with the Gaussian wavepacket of Sec- tions 14.4.1 and 16.5, the tighter the wavepacket is to be confined, the greater the range of frequencies needed. This reciprocal dependence of the frequency range upon the spatial or temporal extent is known as the bandwidth theorem, and is a crucial concept for the analysis of wavepackets. Chapter 17 in "Introduction to the Physics of Waves" by Tim Freegarde
http://math.ucr.edu/home/baez/uncertainty.html
The generalized uncertainty principle reads
\begin{equation} \sigma_A \sigma_B \geq \big | \frac{1}{2i} \langle [A,B] \rangle \big|^2 . \end{equation}
The certainty principle (review) by D. A. Arbatsky
We have also argued that the rationale behind the Heisenberg indeterminacy principle (in the extreme cases in which one of the variables is sharply determined) can be understood in terms of the compatibility condition between the non-trivial identity of a state and the properties that can be consistently attributed to it. Indeed, an observable (such as q) that is not invariant under the automorphisms of a state (such as |jp〉) cannot define an "objective" property of the latter. Hence, the expectation value function will have a non-zero dispersion. We have argued that this dispersion measures the extent to which the transformations generated by q transform the state into a physically different state, i.e. the extent to which the non-rigidity of the state cannot "endure" the transformations generated by q. Klein-Weyl's program and the ontology of gauge and quantum systems by Gabriel Catren
Physics students are still taught this measurement-disturbance version of the uncertainty principle in introductory classes, but it turns out that it's not always true. Aephraim Steinberg of the University of Toronto in Canada and his team have performed measurements on photons (particles of light) and showed that the act of measuring can introduce less uncertainty than is required by Heisenberg's principle. The total uncertainty of what can be known about the photon's properties, however, remains above Heisenberg's limit.
Contrary to what is often believed the Heisenberg inequalities (62) and the Robertson-Schrödinger inequalities (64) are not statements about the accuracy of our measurements; their derivation assumes perfect instruments (see the discussion in Peres,20 p. 93). Their meaning is that if the same preparation procedure is repeated a large number of times on an ensemble of systems and is followed by either a measurement of xj or a measurement of pj, then the results obtained will have standard deviations Dxj and Dpj. In addition, these measurements need not be uncorrelated; this is expressed by the statistical covariances Dðxj; pjÞ appearing in the inequalities (64).
http://webzoom.freewebs.com/cvdegosson/SymplecticEgg_AJP.pdf
An uncertainty relation such as (4.54) is not a statement about the accuracy of our measuring instruments. On the contrary, its derivation assumes the existence of perfect instruments (the experimental errors due to common laboratory hardware are usually much larger than these quantum uncertainties). The only correct interpretation of (4.54) is the following: If the same preparation procedure is repeated many times, and is followed either by a measurement of x, or by a measurement of p, the various results obtained for x and for p have standard deviations, ∆ x and ∆ p, whose product cannot be less than h/ 2. There never is any question here that a measurement of x "disturbs" the value of p and vice-versa, as sometimes claimed. These measurements are indeed incompatible, but they are performed on different particles (all of which were identically prepared) and therefore these measurements cannot disturb each other in any way. The uncertainty relation (4.54), or more generally (4.40), only reflects the intrinsic randomness of the outcomes of quantum tests.
page 93 in Quantum Theory: Concepts and Methods by Peres
According to the uncertainty principle, it is impossible to know several pairs of variables at the same time with arbitrary accuracy.
In some sense, it completely encapsulates what is different about quantum mechanics compared to classical mechanics.
A philosopher once said 'It is necessary for the very existence of science that the same conditions always produce the same results'. Well, they don't!
- Richard Feynman
Is there a time-energy uncertainty relation?
No! See section 3 in Quantum mechanics: Myths and facts by H. Nikolic:
[T]here exists also an explicit counterexample that demonstrates that it is possible in principle to measure energy with arbitrary accuracy during an arbitrarily short time-interval [9].
where Ref. [9] is Y. Aharonov and D. Bohm, "Time in quantum theory and the uncertainty relation for time and energy," Phys. Rev. 122, 1649-1658 (1961)
Is there a classical uncertainty relation?
Yes, it's known as the "bandwidth theorem" of wave mechanics. See, for example page 236ff in Georgi's Physics of Waves, where he also discusses the relationship to the quantum mechanical uncertainty principle.
A simple example is bandwidth in AM radio transmissions. A typical commercial AM station broadcasts in a band of frequency about 5000 cycles/s (5 kc) on either side of the carrier wave frequency. Thus $$∆ω = 2π∆\nu ≈ 3 \times 10^4 s^{-1} \tag{10.71}$$ and they cannot send signals that separate times less than a few $\times 10^{-5}$ seconds apart. This is good enough for talk and acceptable for some music.
A famous example of (10.62) comes from quantum mechanics. There is a completely analogous relation between the spatial spread of a wave packet, ∆x, and the spread of k values required to produce it, ∆k: $$∆x · ∆k ≥ 1/2.$$
For more information, see e.g. Leon Cohen, "Time-Frequency Distributions-A Review", Proc. IEEE 77, 941 (1989)
Can the uncertainty principle be proved mathematically?
See https://physics.stackexchange.com/questions/24116/heisenberg-uncertainty-principle-scientific-proof/24186#24186
What's the origin of the uncertainty?
Quantum mechanics uses the generators of the corresponding symmetry as measurement operators. For instance, this has the consequence that a measurement of momentum is equivalent to the action of the translation generator. (Recall that invariance under translations leads us to conservation of momentum.) The translation generator moves our system a little bit and therefore the location is changed. Physics from Symmetry by J. Schwichtenberg
This is exactly the idea behind the Fourier transform.
advanced_notions/uncertainty_principle.txt · Last modified: 2018/07/26 17:56 by 77.177.199.249 | CommonCrawl |
# Setting up a PostgreSQL database
To set up a PostgreSQL database, you'll need to install the PostgreSQL server on your machine. You can download it from the official website (https://www.postgresql.org/download/) and follow the installation instructions for your operating system.
Once the server is installed, you can create a new database using the `createdb` command. For example, to create a database named "mydb", you would run:
```
createdb mydb
```
To connect to a database, you'll need to use a client like `psql` or a graphical tool like pgAdmin. To connect to the "mydb" database using `psql`, you would run:
```
psql mydb
```
To manage user access, you can use the `createuser` command. For example, to create a new user named "john" with a password, you would run:
```
createuser john --pwprompt
```
You can also grant and revoke privileges to users using the `GRANT` and `REVOKE` SQL commands. For example, to grant the "john" user all privileges on the "mydb" database, you would run:
```sql
GRANT ALL PRIVILEGES ON DATABASE mydb TO john;
```
## Exercise
Instructions:
1. Install PostgreSQL on your machine.
2. Create a new database named "mydb".
3. Connect to the "mydb" database using `psql`.
4. Create a new user named "john" with a password.
5. Grant the "john" user all privileges on the "mydb" database.
### Solution
1. Install PostgreSQL on your machine by downloading it from the official website (https://www.postgresql.org/download/) and following the installation instructions for your operating system.
2. Create a new database named "mydb" by running the following command in your terminal or command prompt:
```
createdb mydb
```
3. Connect to the "mydb" database using `psql` by running the following command:
```
psql mydb
```
4. Create a new user named "john" with a password by running the following command:
```
createuser john --pwprompt
```
5. Grant the "john" user all privileges on the "mydb" database by running the following SQL command:
```sql
GRANT ALL PRIVILEGES ON DATABASE mydb TO john;
```
# Basic SQL queries and data manipulation
To query data from a PostgreSQL database, you can use the `SELECT` statement. For example, to select all records from the "employees" table, you would run:
```sql
SELECT * FROM employees;
```
To insert a new record into a table, you can use the `INSERT INTO` statement. For example, to insert a new employee with the name "John Doe" and the position "Software Engineer", you would run:
```sql
INSERT INTO employees (name, position) VALUES ('John Doe', 'Software Engineer');
```
To update an existing record in a table, you can use the `UPDATE` statement. For example, to update the position of the employee with the ID 1 to "Senior Software Engineer", you would run:
```sql
UPDATE employees SET position = 'Senior Software Engineer' WHERE id = 1;
```
To delete a record from a table, you can use the `DELETE FROM` statement. For example, to delete the employee with the ID 1, you would run:
```sql
DELETE FROM employees WHERE id = 1;
```
## Exercise
Instructions:
1. Query all records from the "employees" table.
2. Insert a new employee with the name "Jane Smith" and the position "Product Manager".
3. Update the position of the employee with the ID 2 to "Product Manager".
4. Delete the employee with the ID 2.
### Solution
1. Query all records from the "employees" table:
```sql
SELECT * FROM employees;
```
2. Insert a new employee with the name "Jane Smith" and the position "Product Manager":
```sql
INSERT INTO employees (name, position) VALUES ('Jane Smith', 'Product Manager');
```
3. Update the position of the employee with the ID 2 to "Product Manager":
```sql
UPDATE employees SET position = 'Product Manager' WHERE id = 2;
```
4. Delete the employee with the ID 2:
```sql
DELETE FROM employees WHERE id = 2;
```
# Creating and managing tables
To create a new table in a PostgreSQL database, you can use the `CREATE TABLE` statement. For example, to create a table named "employees" with columns for "id", "name", and "position", you would run:
```sql
CREATE TABLE employees (
id SERIAL PRIMARY KEY,
name VARCHAR(255),
position VARCHAR(255)
);
```
To add a new column to an existing table, you can use the `ALTER TABLE` statement. For example, to add a "salary" column to the "employees" table, you would run:
```sql
ALTER TABLE employees ADD COLUMN salary NUMERIC;
```
To modify the data type of a column in a table, you can use the `ALTER TABLE` statement. For example, to change the "name" column to a "TEXT" data type, you would run:
```sql
ALTER TABLE employees ALTER COLUMN name TYPE TEXT;
```
To delete a table from a PostgreSQL database, you can use the `DROP TABLE` statement. For example, to delete the "employees" table, you would run:
```sql
DROP TABLE employees;
```
## Exercise
Instructions:
1. Create a new table named "employees" with columns for "id", "name", and "position".
2. Add a "salary" column to the "employees" table.
3. Modify the "name" column to a "TEXT" data type.
4. Delete the "employees" table.
### Solution
1. Create a new table named "employees" with columns for "id", "name", and "position":
```sql
CREATE TABLE employees (
id SERIAL PRIMARY KEY,
name VARCHAR(255),
position VARCHAR(255)
);
```
2. Add a "salary" column to the "employees" table:
```sql
ALTER TABLE employees ADD COLUMN salary NUMERIC;
```
3. Modify the "name" column to a "TEXT" data type:
```sql
ALTER TABLE employees ALTER COLUMN name TYPE TEXT;
```
4. Delete the "employees" table:
```sql
DROP TABLE employees;
```
# Indexes for efficient data retrieval
To create an index in a PostgreSQL database, you can use the `CREATE INDEX` statement. For example, to create an index named "employees_name_idx" on the "name" column of the "employees" table, you would run:
```sql
CREATE INDEX employees_name_idx ON employees (name);
```
PostgreSQL supports various types of indexes, including B-tree indexes, hash indexes, GiST indexes, SP-GiST indexes, GIN indexes, and BRIN indexes. Each type of index has different use cases and performance characteristics.
To use an index to improve query performance, you can include the `USING INDEX` clause in your SQL query. For example, to retrieve all employees with the name "John Doe" using the "employees_name_idx" index, you would run:
```sql
SELECT * FROM employees WHERE name = 'John Doe' USING INDEX employees_name_idx;
```
## Exercise
Instructions:
1. Create an index named "employees_name_idx" on the "name" column of the "employees" table.
2. Use the "employees_name_idx" index to retrieve all employees with the name "John Doe".
### Solution
1. Create an index named "employees_name_idx" on the "name" column of the "employees" table:
```sql
CREATE INDEX employees_name_idx ON employees (name);
```
2. Use the "employees_name_idx" index to retrieve all employees with the name "John Doe":
```sql
SELECT * FROM employees WHERE name = 'John Doe' USING INDEX employees_name_idx;
```
# Advanced SQL queries with joins, subqueries, and window functions
To perform a join between two tables in a PostgreSQL database, you can use the `JOIN` clause in your SQL query. For example, to retrieve all employees and their corresponding departments, you would run:
```sql
SELECT employees.name, departments.name
FROM employees
JOIN departments ON employees.department_id = departments.id;
```
To use a subquery in a PostgreSQL database, you can include a subquery within your SQL query. For example, to retrieve the average salary of all employees, you would run:
```sql
SELECT AVG(salary)
FROM (SELECT salary FROM employees) AS subquery;
```
To use window functions in a PostgreSQL database, you can use the `OVER` clause in your SQL query. For example, to retrieve the rank of each employee based on their salary, you would run:
```sql
SELECT name, salary, RANK() OVER (ORDER BY salary DESC)
FROM employees;
```
## Exercise
Instructions:
1. Perform a join between the "employees" and "departments" tables to retrieve all employees and their corresponding departments.
2. Use a subquery to retrieve the average salary of all employees.
3. Use a window function to retrieve the rank of each employee based on their salary.
### Solution
1. Perform a join between the "employees" and "departments" tables:
```sql
SELECT employees.name, departments.name
FROM employees
JOIN departments ON employees.department_id = departments.id;
```
2. Use a subquery to retrieve the average salary of all employees:
```sql
SELECT AVG(salary)
FROM (SELECT salary FROM employees) AS subquery;
```
3. Use a window function to retrieve the rank of each employee based on their salary:
```sql
SELECT name, salary, RANK() OVER (ORDER BY salary DESC)
FROM employees;
```
# Stored procedures for encapsulating and reusing code
To create a stored procedure in a PostgreSQL database, you can use the `CREATE PROCEDURE` statement. For example, to create a stored procedure named "calculate_salary" that takes two parameters, "base_salary" and "bonus", and returns the calculated salary, you would run:
```sql
CREATE PROCEDURE calculate_salary(base_salary NUMERIC, bonus NUMERIC)
LANGUAGE plpgsql
AS $$
BEGIN
RETURN base_salary + (base_salary * (bonus / 100));
END;
$$;
```
To call a stored procedure in a PostgreSQL database, you can use the `CALL` statement. For example, to call the "calculate_salary" stored procedure with the parameters 50000 and 10, you would run:
```sql
SELECT calculate_salary(50000, 10);
```
## Exercise
Instructions:
1. Create a stored procedure named "calculate_salary" that takes two parameters, "base_salary" and "bonus", and returns the calculated salary.
2. Call the "calculate_salary" stored procedure with the parameters 50000 and 10.
### Solution
1. Create a stored procedure named "calculate_salary":
```sql
CREATE PROCEDURE calculate_salary(base_salary NUMERIC, bonus NUMERIC)
LANGUAGE plpgsql
AS $$
BEGIN
RETURN base_salary + (base_salary * (bonus / 100));
END;
$$;
```
2. Call the "calculate_salary" stored procedure with the parameters 50000 and 10:
```sql
SELECT calculate_salary(50000, 10);
```
# Transactions and concurrency control
To start a transaction in a PostgreSQL database, you can use the `BEGIN` statement. For example, to start a new transaction, you would run:
```sql
BEGIN;
```
To commit a transaction in a PostgreSQL database, you can use the `COMMIT` statement. For example, to commit the current transaction, you would run:
```sql
COMMIT;
```
To rollback a transaction in a PostgreSQL database, you can use the `ROLLBACK` statement. For example, to rollback the current transaction, you would run:
```sql
ROLLBACK;
```
PostgreSQL provides various locking mechanisms, such as row-level locking, page-level locking, and table-level locking. These mechanisms help prevent concurrent access issues and ensure data consistency.
To use isolation levels in a PostgreSQL database, you can set the transaction isolation level using the `SET TRANSACTION` statement. For example, to set the transaction isolation level to "SERIALIZABLE", you would run:
```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```
## Exercise
Instructions:
1. Start a new transaction.
2. Commit the current transaction.
3. Rollback the current transaction.
4. Set the transaction isolation level to "SERIALIZABLE".
### Solution
1. Start a new transaction:
```sql
BEGIN;
```
2. Commit the current transaction:
```sql
COMMIT;
```
3. Rollback the current transaction:
```sql
ROLLBACK;
```
4. Set the transaction isolation level to "SERIALIZABLE":
```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```
# Triggers for automating tasks and enforcing data integrity
To create a trigger in a PostgreSQL database, you can use the `CREATE TRIGGER` statement. For example, to create a trigger named "employee_salary_check" that checks if the salary of an employee is greater than 100000 before inserting a new record into the "employees" table, you would run:
```sql
CREATE FUNCTION employee_salary_check() RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
IF NEW.salary > 100000 THEN
RAISE EXCEPTION 'Salary cannot be greater than 100000.';
END IF;
RETURN NEW;
END;
$$;
CREATE TRIGGER employee_salary_check
BEFORE INSERT OR UPDATE ON employees
FOR EACH ROW
EXECUTE FUNCTION employee_salary_check();
```
## Exercise
Instructions:
1. Create a trigger named "employee_salary_check" that checks if the salary of an employee is greater than 100000 before inserting a new record into the "employees" table.
2. Insert a new employee with a salary of 150000 into the "employees" table.
### Solution
1. Create a trigger named "employee_salary_check":
```sql
CREATE FUNCTION employee_salary_check() RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
IF NEW.salary > 100000 THEN
RAISE EXCEPTION 'Salary cannot be greater than 100000.';
END IF;
RETURN NEW;
END;
$$;
CREATE TRIGGER employee_salary_check
BEFORE INSERT OR UPDATE ON employees
FOR EACH ROW
EXECUTE FUNCTION employee_salary_check();
```
2. Insert a new employee with a salary of 150000 into the "employees" table:
```sql
INSERT INTO employees (name, position, salary) VALUES ('John Doe', 'Software Engineer', 150000);
```
# Views for creating virtual tables based on complex queries
To create a view in a PostgreSQL database, you can use the `CREATE VIEW` statement. For example, to create a view named "employees_with_departments" that retrieves all employees and their corresponding departments, you would run:
```sql
CREATE VIEW employees_with_departments AS
SELECT employees.name, departments.name
FROM employees
JOIN departments ON employees.department_id = departments.id;
```
To query a view in a PostgreSQL database, you can use the `SELECT` statement. For example, to retrieve all employees and their corresponding departments from the "employees_with_departments" view, you would run:
```sql
SELECT * FROM employees_with_departments;
```
## Exercise
Instructions:
1. Create a view named "employees_with_departments" that retrieves all employees and their corresponding departments.
2. Query the "employees_with_departments" view to retrieve all employees and their corresponding departments.
### Solution
1. Create a view named "employees_with_departments":
```sql
CREATE VIEW employees_with_departments AS
SELECT employees.name, departments.name
FROM employees
JOIN departments ON employees.department_id = departments.id;
```
2. Query the "employees_with_departments" view:
```sql
SELECT * FROM employees_with_departments;
```
# Data backup, recovery, and replication
To create a backup of a PostgreSQL database using the `pg_dump` utility, you can run the following command:
```
pg_dump mydb > mydb_backup.sql
```
To restore a backup of a PostgreSQL database using the `pg_restore` utility, you can run the following command:
```
pg_restore -d mydb mydb_backup.sql
```
To set up data replication in a PostgreSQL database, you can use the `pg_basebackup` utility to create a base backup of the primary server, and the `WAL` (Write-Ahead Log) to replicate changes from the primary server to the standby server.
## Exercise
Instructions:
1. Create a backup of the "mydb" database using the `pg_dump` utility.
2. Restore the backup to a new database named "mydb_restored" using the `pg_restore` utility.
3. Set up data replication between a primary server and a standby server.
### Solution
1. Create a backup of the "mydb" database:
```
pg_dump mydb > mydb_backup.sql
```
2. Restore the backup to a new database named "mydb_restored":
```
pg_restore -d mydb_restored mydb_backup.sql
```
3. Set up data replication between a primary server and a standby server:
- On the primary server, configure the `wal_level` and `max_wal_senders` settings in the `postgresql.conf` file.
- On the standby server, configure the `primary_conninfo` and `recovery_target_timeline` settings in the `recovery.conf` file.
- On the standby server, run the `pg_basebackup` command to create a base backup from the primary server.
- On the standby server, start the PostgreSQL server in standby mode using the `pg_ctl` command.
# Optimizing database performance
To analyze query performance in a PostgreSQL database, you can use the `EXPLAIN` statement. For example, to analyze the performance of a query that retrieves all employees, you would run:
```sql
EXPLAIN SELECT * FROM employees;
```
To use indexes to improve query performance in a PostgreSQL database, you can create indexes on the columns used in the query. For example, to create an index on the "name" column of the "employees" table, you would run:
```sql
CREATE INDEX employees_name_idx ON employees (name);
```
To configure server settings to optimize database performance in a PostgreSQL database, you can modify the settings in the `postgresql.conf` file. For example, to increase the shared buffer cache size, you would add the following line to the `postgresql.conf` file:
```
shared_buffers = 256MB
```
## Exercise
Instructions:
1. Analyze the performance of a query that retrieves all employees.
2. Create an index on the "name" column of the "employees" table.
3. Configure server settings to optimize database performance.
### Solution
1. Analyze the performance of a query that retrieves all employees:
```sql
EXPLAIN SELECT * FROM employees;
```
2. Create an index on the "name" column of the "employees" table:
```sql
CREATE INDEX employees_name_idx ON employees (name);
```
3. Configure server settings to optimize database performance:
- Open the `postgresql.conf` file.
- Add the following | Textbooks |
\begin{document}
\begin{frontmatter}
\title{Bayesian sparse graphical models for classification with application to protein expression data} \runtitle{Bayesian sparse graphical models}
\begin{aug} \author[A]{\fnms{Veerabhadran} \snm{Baladandayuthapani}\corref{}\ead[label=e1]{[email protected]}\thanksref{T1,T3,m1}}, \author[A]{\fnms{Rajesh} \snm{Talluri}\ead[label=e2]{[email protected]}\thanksref{T3,m1}}, \author[B]{\fnms{Yuan} \snm{Ji}\ead[label=e3]{[email protected]}\thanksref{T2,m2}}, \author[C]{\fnms{Kevin R.} \snm{Coombes}\ead[label=e4]{[email protected]}\thanksref{m3}}, \author[D]{\fnms{Yiling} \snm{Lu}\ead[label=e5]{[email protected]}\thanksref{m1}}, \author[E]{\fnms{Bryan T.} \snm{Hennessy}\ead[label=e6]{[email protected]}\thanksref{m4,T4}}, \author[F]{\fnms{Michael A.} \snm{Davies}\ead[label=e7]{[email protected]}\thanksref{m1}} \and \author[G]{\fnms{Bani K.} \snm{Mallick}\ead[label=e8]{[email protected]}\thanksref{m5}}\vspace*{6pt} \runauthor{V. Baladandayuthapani et al.} \affiliation{The University of Texas M.D. Anderson Cancer Center\thanksmark{m1}, NorthShore University HealthSystem and University of Chicago\thanksmark{m2},\break The Ohio State University\thanksmark{m3}, Beaumont Hospital\thanksmark{m4} and\break Texas A\&M University\thanksmark{m5}}\vspace*{6pt} \address[A]{V. Baladandayuthapani\\ R. Talluri\\ Department of Biostatistics\\ The University of Texas\\ \quad M.D. Anderson Cancer Center\\ Houston, Texas 77030\\ USA\\ \printead{e1}\\ \phantom{E-mail:\ }\printead*{e2}} \address[B]{Y. Ji\\ NorthShore University HealthSystem\\ 1001 University Place\\ Evanston, Illinois 60201\\ USA\\ \printead{e3}} \address[C]{K. R. Coombes\\ Department of Biomedical Informatics\\ The Ohio State University\\ \quad Wexner Medical Center\\ Columbus, Ohio 77030\\ USA\\ \printead{e4}} \address[D]{Y. Lu\\ Department of Systems Biology\\ The University of Texas\\ \quad M.D. Anderson Cancer Center\hspace*{24pt}\\ Houston, Texas 77030\\ USA\\ \printead{e5}\\} \address[E]{B. T. Hennessy\\ Beaumont Hospital\\ Dublin\\ Ireland\\ \printead{e6}} \address[F]{M. A. Davies\\ Department of Melanoma\\ \quad Medical Oncology\\ The University of Texas\\ \quad M.D. Anderson Cancer Center\hspace*{24pt}\\ Houston, Texas 77030\\ USA\\ \printead{e7}} \address[G]{B. K. Mallick\\ Department of Statistics\\ Texas A\&M University\\ College Station, Texas 77843\\ USA\\ \printead{e8}} \end{aug}
\thankstext{T1}{Supported in part by NIH Grant R01 CA160736 and the Cancer Center Support Grant (CCSG) (P30 CA016672).} \thankstext{T2}{Supported by NIH R01 CA132897.} \thankstext{T3}{Equal contributors.} \thankstext{T4}{Supported by TRA (translational research award-TRA-2010-8) from the Health Research Board Ireland (HRB) and Science Foundation Ireland (SFI).}
\received{\smonth{2} \syear{2013}} \revised{\smonth{10} \syear{2013}}
\begin{abstract} Reverse-phase protein array (RPPA) analysis is a powerful, relatively new platform that allows for high-throughput, quantitative analysis of protein networks. One of the challenges that currently limit the potential of this technology is the lack of methods that allow for accurate data modeling and identification of related networks and samples. Such models may improve the accuracy of biological sample classification based on patterns of protein network activation and provide insight into the distinct biological relationships underlying different types of cancer. Motivated by RPPA data, we propose a Bayesian sparse graphical modeling approach that uses selection priors on the conditional relationships in the presence of class information. The novelty of our Bayesian model lies in the ability to draw information from the network data as well as from the associated categorical outcome in a unified hierarchical model for classification. In addition, our method allows for intuitive integration of {a priori} network information directly in the model and allows for posterior inference on the network topologies both within and between classes. Applying our methodology to an RPPA data set generated from panels of human breast cancer and ovarian cancer cell lines, we demonstrate that the model is able to distinguish the different cancer cell types more accurately than several existing models and to identify differential regulation of components of a critical signaling network (the PI3K-AKT pathway) between these two types of cancer. This approach represents a powerful new tool that can be used to improve our understanding of protein networks in cancer. \end{abstract}
\begin{keyword} \kwd{Bayesian methods} \kwd{protein signaling pathways} \kwd{graphical models} \kwd{mixture models} \end{keyword} \end{frontmatter}
\mbox{}
\section{Introduction}\label{sec1}
\subsection{Protein signaling pathways in cancer}\label{sec1.1} The treatment of cancer is rapidly evolving due to an improved understanding of the signaling pathways that are activated in tumors. Global profiling of DNA mutations, chromosomal copy number changes, DNA methylations and gene expression have greatly improved our appreciation of the heterogeneity of cancer [\citet {nishi03,blower07,gaur07,shanka07,ehrich08}]. However, the characterization of protein signaling networks has proven to be much more challenging. Several reasons underscore the critical importance of overcoming this challenge: first, changes in cellular DNA and RNA both ultimately result in changes in protein expression and/or function, thus, protein networks represent the summation of changes that happen at the DNA and RNA levels. Second, research has demonstrated that many of the most common oncogenic genetic changes activate proteins in kinase signaling pathways. Numerous studies of protein networks and expression analysis have shown promising results. Due to the hyperactivation of kinase signaling pathways, numerous kinase inhibitors have been used in clinical trials, frequently with dramatic clinical activity. Inhibitors that target protein signaling pathways have been approved by the U.S. Food and Drug Administration for a variety of cancer types, including chronic myelogenous leukemia, breast cancer, colon cancer, renal cell carcinoma and gastrointestinal stromal tumors [as reviewed in \citet{davies06}].
Protein networks need to be assessed directly, as DNA or RNA analyses often do not accurately reflect or predict the activation status of protein networks. Many proteins are regulated by post-translational modifications, such as phosphorylation or cleavage events, that are not detected by the analysis of DNA or RNA. Several studies have also demonstrated marked discordance between mRNA and protein expression levels, particularly for genes in kinase signaling and cell cycle regulation pathways [\citet{varam05,shanka07}]. It has been demonstrated recently, in both cancer cell lines and tumors, that different genetic mutations in the same signaling pathway can result in significant differences in the quantitative activation levels of downstream pathway effectors [\citet {stemke08,davies09,vasudevan09,park10}]. Although these observations support the suggestion that direct measurements are essential to measure protein network activation, a number of studies have demonstrated that signaling pathways are frequently regulated by complex feed-forward and feedback regulatory loops, as well as cross-talk between different pathways [\citet {mirz09,zhang09,halaban10}]. Thus, developing an accurate understanding of the regulation of protein signaling networks will be optimized by approaches that: (1) assess multiple pathways simultaneously for different tumor types and/or conditions, and (2) allow for the use of rigorous statistical approaches to identify differential functional networks.
\subsection{Reverse-phase protein lysate arrays}\label{sec1.2}
As explained, there is a strong rationale for methods that will directly assess the activation status of protein \mbox{signaling} networks in cancer. Traditional protein assays include immunohistochemistry (IHC), Western blotting, enzyme-linked immunosorbent assay (ELISA) and mass spectroscopy. Although IHC is a very powerful technique for the detection of protein expression and location, it is critically limited in network analyses by its non- to semi-quantitative nature. Western blotting can also provide important \mbox{information}, but due to its requirement for relatively large amounts of protein, it is difficult to use when comprehensively assessing protein networks, and also is semi-quantitative in nature. The ELISA method provides quantitative analysis, but is similarly limited by requirements of relatively high amounts of specimen and by the high cost of analyzing large pools of specimens. Mass spectroscopy is a powerful, quantitative approach, but its utility is mainly limited by the cost and time required to analyze individual samples, which limits the ability to run large sample sets that are needed to appropriately assess characteristics of disease heterogeneity and protein networks. Reverse-phase protein array (RPPA) analysis is a relatively new technology that allows for quantitative, high-throughput, time- and cost-efficient analysis of {protein networks using small amounts of biological} material [\citet{paweletz01}; \citet{tibes06}].
\begin{figure}
\caption{An example of a reverse-phase protein array (RPPA) slide. \textup{(A)}~Each slide is comprised of 4 rows (\textup{A--D}) of 12 columns (1--12) grids of 11X11 spots. \textup{(B)}~Each grid has 22 individual samples and 11 controls. Each row of the grid consists of~2 individual samples (each with 5 serial 2-fold dilutions) and one control spot. Reproduced with permission from \protect\citet{tabchy2011}.}
\label{figrppaslide}
\end{figure}
\subsubsection*{RPPA data collection} {We provide a brief overview of the RPPA experiment and data collection.} In order to perform RPPA, proteins are isolated from the biological specimens such as cell lines, tumors or serum using standard laboratory-based methods. The protein concentrations are then determined for the samples and, subsequently, serial 2-fold dilutions prepared from each sample are then arrayed on a glass slide. Each slide is then probed with an antibody that recognizes a specific protein epitope that reflects the activation status of the protein. A~visible signal is then generated through the use of a signal amplification system and staining. The signal reflects the relative amount of that epitope in each spot on the slide, as shown in Figure~\ref{figrppaslide}. The arrays are then scanned and the resulting images are analyzed with an imaging software specifically designed for the quantification of RPPA analysis (MicroVigene, VigeneTech Inc., Carlisle, MA). The relative signal intensities of the dilution series for each sample on the array are used to calculate the relative {protein concentrations [\citet{neeley09,zhang09}].} Background correction is used to separate the signal from the noise by subtracting the extracted background intensity from the foreground intensity for each individual spot. Relative protein amount is calculated using a joint estimation method that utilizes {the logistic model of \citet {tabus06}.} This method overcomes quenching at high levels and background noise at low levels. An R package, SuperCurve, developed to use with this joint estimation method is available at \url{http://bioinformatics.mdanderson.org/Software/OOMPA}. As with most high-throughput technologies, the normalization of the resulting intensities is conducted before any downstream analysis in order to adjust for sources of systematic variation not attributable to biological variation. Technical differences in protein loading for each sample are determined by first dividing the results for each protein measured by the average value among all the specimens, and then by determining the average value for each sample across all of the measured proteins. This relative loading factor is then used to normalize the raw data for each sample, to correct for any differences in protein loading between specimens. We refer the reader to \citet {paweletz01} and \citet{hennessy2010} for more biological and technical details concerning RPPAs.
Biological researchers typically choose specific targeted pathways containing 50--200 proteins, usually assayed using the same number of arrays, with each array hybridized against one protein. Because of the reverse design (as compared to conventional gene expression microarrays), RPPAs allow much larger sample sizes than the traditional microarrays, thus allowing \textit{detailed} and \textit{integrated} analyses of protein signaling networks with higher statistical power. Furthermore, this makes it possible to use RPPAs to measure protein expression for multiple tumor classes and/or cell conditions. The scientific aims we address using RPPA data in this paper are threefold: to infer differential networks between tumor classes/subtypes; to utilize {a priori} information in inferring protein network topology within tumor classes/subtypes; and, finally, to utilize network information in designing optimal classifiers for tumor classification. We believe this will improve our understanding of the regulation of protein signaling networks in cancer. Understanding the differences in protein networks between various cancer types and subtypes may allow for improved therapeutic strategies for each specific type of tumor. Such information may also be relevant when determining the origin of a tumor, which is clinically important in cases with indeterminate histologic analysis, particularly for patients who have more than one type of cancer.
\subsection{Graphical models for network analysis}\label{sec1.3}
A convenient and coherent statistical representation of protein networks is accorded by graphical models [\citet{lauritzen96}]. By ``protein network'' we mean any graph with proteins as nodes, where the edges between proteins may code for various biological information. For example, an edge between two proteins may represent the fact that their products interact physically (protein--protein interaction network), the presence of an interaction such as a synthetic-lethal or suppressor interaction [\citet{ryan05}], or the fact that these proteins code for enzymes that catalyze successive chemical reactions in a pathway [\citet{philippe03}].
Our focus is on undirected graphical models and on Gaussian graphical models (GGM) in particular [\citet{whittaker90}]. These models provide representations of the conditional independence structure of the multivariate distribution---to develop and infer protein networks. In such models, the nodes represent the variables (proteins) and the edges represent pairwise dependencies, with the edge set defining the global conditional independence structure of the distribution. We develop an adaptive modeling approach for the covariance structure of high-dimensional distributions with a focus on sparse structures, which are particularly relevant in our setting in which the number of variables/proteins ($p$) can exceed the number of observations ($n$).
GGMs have been under intense methodological development over the past few years in both frequentist [\citet{meinshausen06,chaudhuri07,yuan2007,friedman08,bickel08}] and Bayesian settings [\citet{giudici1999,roverato02,carvalho09}]. {\citet{wong2003} proposed a reversible jump MCMC-based Bayesian model for covariance selection. In high-dimensional settings, \citet{dobra03} used regression analysis to find directed acyclic graphs and converted them to undirected (sparse) graphs to explore the underlying network structure, and \citet{rod2011} proposed a new approach for sparse covariance estimation in heterogeneous samples.} However, most of the approaches we have cited focused on inferring the conditional independence structure of the graph and did not consider classification, which is one of the foci of our article. \citet {rapaport07} used spectral decomposition to detect the underlying network structure and classify genetic data using support vector machines (SVM). More recently, \citet{monni10} proposed a graph-based regression approach incorporating pathway information as a prior for classification procedures, however, their method does not detect differential networks based on available data. \citet {zhu2009} proposed network-based classification for microarray data using support vector machines. This was extended to network-based sparse Bayesian classifiers by \citet{suarez2011}, but these approaches do not estimate the network and also do not take into account the differences in network structure between the two classes. Another recent method is that of \citet{Fan2013}, who propose a two-stage approach wherein they first select features and then subsequently use the retained features and Fisher's LDA for classification using only one covariance matrix for both the classes.
In this article, we propose a constructive method for sparse graphical models using selection priors on the conditional relationships in the presence of class information. Our method has several advantages over classical approaches. First, we incorporate (integrate) the uncertainty of the parameters in deriving the optimal rule via Bayesian model mixing. Second, our network model provides an adaptively regularized estimate of the covariance matrix and hence is capable of handling $n < p$ situations. More importantly, our model uses this information in deriving the optimal classification boundary. The novelty of our Bayesian model lies in the ability to draw information from the network data from all the classes as well as from the associated categorical outcomes in a unified hierarchical model for classification. Through this process, it offers the advantages of sparse Bayesian modeling of GGM, as well as the simplicity of a Bayesian classification model. In addition, with available online databases containing tens of thousands of reactions and interactions, there is a pressing need for methods integrating {a priori} pathway knowledge in the proteomic data analysis models. We integrate prior information directly in the model in an intuitive way such that the presence of an edge can be specified by providing the probability of an edge being present in the correlation matrix. Our method is fully Bayesian and allows for posterior inference on the network topologies both within and between classes. After fitting the Bayesian model, we obtain the posterior probabilities of the edge inclusion, which leads to false discovery rate (FDR)-based calls on significant edges.
The structure of our paper is as follows. In Section~\ref{sec2} we outline our Bayesian graph-based model for classification of RPPA data. Section~\ref{sec3} focuses on Bayesian FDR-based determination of significant networks. Section~\ref{sec4} presents the results of our case study using an RPPA experiment. We end with a discussion and conclusion in Section~\ref{sec5}. All technical details and additional analysis results are presented in the supplementary material [\citet{suppl}].
\section{Probability model}\label{sec2} Our data construct for modeling is as follows. We observe a tuple: $(Z_i,\mathbf{Y}_i), i= 1,\ldots,n$, where $Z_i$ is a categorical outcome denoting the type or subtype of cancer (binary or multi category) and $\mathbf{Y}_i = (Y_i^{(1)},\ldots,Y_i^{(p)})$ is a $p$-dimensional vector of proteins assayed for the $i$th sample/patient/array. We detail the model here for binary classification (when $Z_{i}$ is a binary variable), noting that generalization to multi-class classification can be achieved in an analogous manner. We factorize the joint distribution (likelihood) of the data $p(\mathbf {Y_{i}},Z_{i})$, $ \forall i$ in the following manner
\[
p(\mathbf{Y_{i}},Z_{i})=p(\mathbf{Y_{i}}|Z_{i})p(Z_{i}), \]
where the first component models the joint distribution of the $p$ proteins given the class variable $Z_i$ and the second component models the marginal distribution of the class variables. We model the first component as a mixture of the multivariate normal distributions as
\[
p(\mathbf{Y_{i}}|Z_{i},\bolds\mu,\bolds\Omega) \sim Z_{i} N\bigl(\bolds\mu^{(1)},{\bolds \Sigma}^{(1)}\bigr)+(1-Z_{i})N \bigl(\bolds\mu^{(2)},{\bolds\Sigma}^{(2)}\bigr), \]
where ${\bolds\mu}^{(\bullet)}$ and ${\bolds\Sigma}^{(\bullet)}$ are the corresponding means and covariances for the two classes. To specify the marginal component, we note that in the classification framework only a fraction of $Z$'s, say $Z^{u}$, will be unobserved (specifically in the case of prediction, as shown in Section~\ref{sec2.2}) and they will be further modeled as
\[
p\bigl(Z^{u}|h\bigr)\sim\operatorname{Bernoulli}(h), \]
where we assign a Beta prior on probability $h$ as $h \sim\operatorname {Beta}(\eta,\zeta)$. Note that this prior can be generalized to be class-specific by allowing $h$ to depend on the class $k$ by changing the corresponding hyperparameters $\eta_k,\zeta_k$.
Our main constructs of interest in this framework are $({\bolds\mu }^{(k)},{\bolds\Sigma}^{(k)})$, $ k=1,2$ for each of the classes, where the latter provides a dependence structure between the proteins, which we model in a GGM framework. The key idea behind GGMs is rather to model the precision matrix $\bolds{\Omega^{(k)}} = \bolds{\Sigma}^{(k)^{-1}} $, which dictates the network structure between the variables. In this framework of particular interest is the identification of zero entries in the precision matrix---a zero entry at the $ij$th element of $\Omega$ indicates conditional independence between the two random variables $\mathbf{Y}_{i}$~and~$\mathbf{Y}_{j}$, given all other variables. This is often referred to as the covariance selection problem in GGMs [\citet{dempster72,cox96}]. In the section below we provide a constructive method for sparse estimation (identification of many zeros) of the precision matrix in high-dimensional settings, but also allow for borrowing strength between classes to estimate the class-specific precision matrices for conducting classification.
\subsection{Parameterization of the precision matrix}\label{sec2.1}
Given the number of variables~$p$, the size of the precision matrix ($p\times p$) is potentially of high dimension. Instead of specifying a global (joint) distribution on the precision matrix, we explore local dependencies by breaking it down into components. For some applications, it is desirable to work directly with standard deviations and correlations [\citet{barnard2000,liechty03}] that do not correspond to any type of parameterization (e.g., Cholesky, etc.). {This parameterization has a practical motivation because most biologists think in terms of correlations between the proteins, thus easing prior elicitation, as we show below.}
To this end, we parameterize the precision matrix (for each class $k$, suppressing the superscript for ease of notation) as $\bolds\Omega= \mathbf {S} \times\mathbf{C} \times\mathbf{S}$, where $\mathbf{S}$ is a diagonal matrix with nonzero diagonal elements that contains the inverse of the partial standard deviations and $\mathbf{C}$ is a matrix that contains partial correlation coefficients. Note that the correlation matrix $ \mathbf{C}$ satisfies the properties of a correlation matrix, that is, the partial correlation coefficients ($\rho_{ij}$) between variables $i,j$ share a one-to-one correspondence to the elements $C_{ij}$ as
\[ \rho_{ij} = \frac{-\Omega_{ij}}{(\Omega_{ii}\Omega_{jj})^{{1}/{2}}} = -C_{ij}. \]
Due to this correspondence, sparse estimation of $\bolds{\Omega}$ directly implies the identification of zeros in the elements of $\mathbf{C}$. Thus, we model $\mathbf{C}$ as a convolution,
\[ \mathbf{C} = \mathbf{A}\odot\mathbf{R}, \]
where $\odot$ is the Hadamaard operator indicating element-wise multiplication between the two (stochastic) matrices: a \textit{selection} matrix $\mathbf{A}$ and the corresponding \textit{correlation} matrix $\mathbf {R}$ with the following properties:
\begin{itemize}
\item Both $\mathbf{A}$ and $\mathbf{R}$ are symmetric.
\item Both $\mathbf{A}$ and $\mathbf{R}$ have ones as their diagonal elements.
\item The off-diagonal elements of $\mathbf{A}$ are either 0 or 1 and the off-diagonal elements of $\mathbf{R}$ lie in the range $[-1,1]$.
\item Both $\mathbf{A}$ and $\mathbf{R}$ \textit{need not} be positive definite, but the convolution $\mathbf{C}$ \textit{has to be} positive definite. \end{itemize}
In essence, $\mathbf{A}$ is a binary selection matrix that selects which of the elements in $\mathbf{R}$ are zero or nonzero. In other words, $\mathbf{A}$ performs variable selection on the elements of the matrix $ \mathbf{R}$ by shrinking the nonrequired variables (edges) exactly to zero and thus inducing sparsity in the estimation of the resulting precision matrix governing the GGM. We discuss hereafter the estimation and prior specifications for each of these matrices.
\subsubsection*{Prior construction} $\mathbf{R}$ is a matrix with all of its off-diagonal elements in the range $[-1,1]$, therefore, we assign an independent uniform prior over $[-1,1]$ for all $R_{ij}$, $i<j$. Correspondingly, since the off-diagonal elements of $\mathbf {A}$ are binary (0~or~1), we assign an independent Bernoulli prior with probability $q_{ij}$ for the element $A_{ij}$, $i<j$. Note that this element-wise prior specification on $\mathbf{A}$ and $\mathbf{R}$ does not ensure that the $\mathbf{C}$ $({=} \mathbf{A}\odot\mathbf{R})$ is positive definite---hence a valid graph. Thus, a key ingredient of our modeling scheme is that we need an additional constraint: $\mathbf{C}\in\mathbb{C}_{p}$ where $ \mathbb{C}_{p}$ is the space of all proper correlation matrices of dimension $p$, such that the joint convolved prior on $\mathbf{A}$ and $\mathbf{R}$ can be written as
\[
\mathbf{A},\mathbf{R}|\mathbf{q} \sim\prod_{i<j} \bigl\{ \operatorname {Uniform}_{R_{ij}}[-1,1] \operatorname{Bernoulli}_{A_{ij}}(q_{ij}) \bigr\}I(\bolds{\mathbf{A}\odot\mathbf {R}}\in\mathbb{C}_{p}), \]
where $I(\bullet)$, the indicator function, ensures that the correlation matrix is positive definite and introduces dependence among the elements of the matrices $\mathbf{R},\mathbf{A}$, and $q_{ij}$ is the probability of the $ij$th element being selected as 1.
We ensure the positive-definiteness constraint in our posterior sampling sche\-mes. Specifically, we perform MCMC sampling in such a way that the constraint $\mathbf{C}\in\mathbb{C}_{p}$ is satisfied---to search over the possible space of valid correlation matrices. To implement the constraint, we draw $R_{ij}, A_{ij}$, sequentially conditioned on all other elements of $\mathbf{R}$ and $\mathbf{A}$ such that the realized value of $C_{ij}$ ensures $\mathbf{C}$ is positive definite given all other parameter values. Briefly, we follow the method of \citet {barnard2000} to find the range $[u_{ij},v_{ij}]$ on the individual elements of $R$ that will guarantee the positive definiteness of $\mathbf {C}$. The resulting form of the conditional prior on the off-diagonal elements $R_{ij}$ can be written as
\[
R_{ij}|a_{ij},A_{-ij},R_{-ij} \sim \operatorname{Uniform}(u_{ij},v_{ij})I(-1<R_{ij}<1),\qquad i\neq j, i<j, \]
where $R_{-ij}$ contains all other off-diagonal elements of $\mathbf{R}$ except the $ij$th element and $A_{-ij}$ contains all elements of $\mathbf {A}$ except the $ij$th element. The limits of the Uniform distribution $u_{ij}$ and $v_{ij}$ are chosen such that $\mathbf{C} = \mathbf{A}\odot\mathbf {R}$ is positive definite and (conditionally) $u_{ij}$ and $v_{ij}$ are functions of $R_{-ij}$ and $A_{-ij}$ (see Appendix A in the supplementary material [\citet{suppl}] for the detailed proof).
In this construction, the parameter probability $q_{ij}$ controls the degree of sparsity in the GGM in an adaptive manner by element-wise selection of the entries of the correlation matrix. We assign a beta hyperprior for the probabilities $q_{ij}$ as
\[ q_{ij} \sim \operatorname{Beta}(a_{ij},b_{ij}),\qquad i \neq j, \]
where the hyperparameters $a_{ij},b_{ij}$ can be set to induce prior information on the graph structure (see Section~\ref{sec2.3}). To complete the hierarchical specification, we choose an (exchangeable) inverse-gamma prior on the inverse of the partial standard deviations $S$, which is a diagonal matrix containing entries $S_i={\Omega}_{ii}^{{1}/{2}}$ as $S_i\sim IG(g,h)$, $i = 1,2,\ldots,p$.
\subsubsection*{Borrowing strength between classes} Note that in the above construction all the parameters are class-specific, that is, are different for each class $k$, and thus model fitting and estimation can be done for each class separately.
But the main advantage of Bayesian methodology lies in borrowing strength between the classes for both estimation of the graphical structure and subsequent prediction/classification. This can be accomplished by having a variable that introduces dependence between the classes linking the selection matrix $\mathbf{A}$. We introduce a latent variable $\lambda_{ij}$ defined as
\[ \lambda_{ij} = \cases{ 1, &\quad if ${A}_{ij}^{1} \neq{A}_{ij}^{2}$, \vspace*{3pt}\cr 0, &\quad if ${A}_{ij}^{1} = {A}_{ij}^{2}$,} \]
where $\mathbf{A}^{1}$ and $\mathbf{A}^{2}$ are the class-specific selection matrices. The binary variables $\lambda_{ij}$'s imply the presence or absence of the same edge in the graphical model of both classes. In other words, $\lambda_{ij}=1$ signifies a \textit{differential} edge (i.e., the relation between the covariates $ i,j$ is significant in only one class but not the other), whereas $\lambda_{ij}=0$ signifies a \textit{conserved} edge (i.e., the relation between the covariates $ i,j$ is significant in both classes). Thus, the $\lambda$'s serve a dual purpose in our model setup. They not only introduce dependence between the classes, since they are shared between both classes, but also have a distinct interpretation in terms of differential/conserved patterns of dependence between the graphs for the classes. This information is vital for understanding the biological processes and inferring conclusions from the analysis, as we show in Section~\ref{sec4}.
Since the $\lambda_{ij}$'s are binary random variables, we propose a Bernoulli prior on $\lambda_{ij}$ as
\[ \lambda_{ij}\sim\operatorname{Bernoulli}(\pi_{ij}),\qquad i<j, \]
where the parameter $\pi_{ij}$ is the probability that the relation between the $i$th and $j$th variables is different. We further assign a beta hyperprior for the probabilities $\pi_{ij}$ as
\[ \pi_{ij} \sim \operatorname{Beta}(e_{ij},f_{ij}),\qquad i\neq j. \]
To complete the prior specification on the graphical model, we propose a normal prior on the means $(\bolds\mu^{(1)},\bolds\mu^{(2)})$ as
\[ \bolds\mu^{(k)}\sim N \bigl(\bolds\mu_0^{(k)},{ \mathbf{B}_0^{-1}}^{(k)} \bigr),\qquad k=1,2. \]
\subsection{Prediction}\label{sec2.2}
In this section we lay out our graph-based prediction (classification) scheme. Suppose the class variables $\mathbf{Z}$ (of size $n \times1 $) are partitioned into a vector of training samples $\mathbf{Z}^{t}$ (of size $n_t \times1 $) and a vector of (unknown) test/validation cases $\mathbf {Z}^{u}$ (of size $n_u \times1 $) to be predicted. The corresponding observed variables are also partitioned as $[\mathbf{Y}^{t};\mathbf{Y}^{u}]$. Denote the observed data by $\mathcal{D} = \{\mathbf{Y}^t,\mathbf{Z}^t,\mathbf
{Y}^u\}$. In Bayesian prediction, for a new sample with protein expression information $\mathbf{Y}^{u}$, we have to obtain the posterior predictive probability that its class variable $\mathbf{Z}^{u}$, given all observed data $\mathcal{D}$, is $p(\mathbf{Z}^{u}|\mathcal{D})$.
To estimate these probabilities, we treat $\mathbf{Z}^{u}\equiv\{Z_o^u\dvtx o=1,\ldots,n_u\}$ as a parameter in the model and extend the MCMC analysis to sample these values at each iteration. Specifically, we draw $\mathbf{Z}^{u}$ from the corresponding conditional posterior distribution in each MCMC iteration (see Appendix B in the supplementary material [\citet{suppl}] for the full conditional distribution). The way our model is specified, the posterior distribution of $\mathbf{Z}^{u}$ is analyzed conditional not only on all the data from both classes $\mathcal{D}$, but also on the parameters that are shared between the classes. Thus, the predictions are obtained in a single MCMC fitting procedure along with all other parameters, thereby accounting for all sources of variation. We note that the limitation of this method is that training and test splits of the data must be contemplated prior to analysis (as is usually done) and/or analysis fully repeated if new predictions are required.
The complete hierarchical formulation of our graph-based binary classification model can be succinctly summarized as shown hereafter. In addition, the directed acyclic graph (Figure~6 in the supplementary material [\citet{suppl}]) shows a graphical representation of our model where the circles indicate parameters and the squares observed random variables:
\begin{eqnarray*} \mathbf{Y} &=& \bigl[\mathbf{Y}^{t},\mathbf{Y}^{u}\bigr] \sim \mathbf{Z} N \bigl(\bolds\mu^{(1)},{\bolds\Omega ^{-1}}^{(1)}\bigr)+(1- \mathbf{Z})N\bigl(\bolds\mu^{(2)},{\bolds\Omega^{-1}}^{(2)}\bigr), \\ \mathbf{Z} &=& \bigl[\mathbf{Z}^{t}, \mathbf{Z}^{u}\bigr], \\ {Z}_o^{u}&\sim& \operatorname{Bernoulli}(h_o), \\ h_o&\sim&\operatorname{Beta}(\eta,\zeta), \\ \bolds\mu^{(k)}&\sim& N\bigl(\bolds\mu_0^{(k)},{ \mathbf{B}_0^{-1}}^{(k)}\bigr), \\ \bolds{\Omega}^{(k)}&=&\mathbf{S}^{(k)}\bigl(\mathbf{A}^{(k)} \odot\mathbf{R}^{(k)}\bigr)\mathbf {S}^{(k)}, \\
\mathbf{A}^{(k)},\bolds\lambda,\mathbf{R}^{(k)}|\mathbf{q}^{(k)},\bolds \pi&\sim&\prod_{i<j}\operatorname{Uniform}(u_{ij},v_{ij}) \operatorname {Bernoulli}\bigl(q_{ij}^{(k)}\bigr) \\ &&\hspace*{13pt}{}\times \operatorname{Bernoulli}(\pi_{ij}) I\bigl(\mathbf {C}^{(k)}\in \mathbb{C}_{p}\bigr), \\ q_{ij}^{(k)}&\sim& \operatorname{Beta}\bigl( \alpha_{ij}^{(k)},\beta _{ij}^{(k)}\bigr), \\ \pi_{ij} &\sim& \operatorname{Beta}(e_{ij},f_{ij}),\qquad i\neq j, \\ S_i^{(k)}&\sim&IG(g,h),
\end{eqnarray*}
where $k = 1,2$ corresponds to the two classes, $i,j=1,\ldots,p$, and $o=1,\ldots,n_u$ corresponds to the size of the test/validation sample. The full conditional distributions for MCMC sampling of the model parameters and random variables are provided in Appendix B in the supplementary material [\citet{suppl}].
\subsection{Incorporating prior pathway information and hyperparameter settings}\label{sec2.3}
As we mentioned before, there exists a huge amount of literature (prior knowledge) describing the functional behaviors of proteins, as characterized in metabolic, signaling and other regulation pathways. We formally incorporate this {a priori} knowledge in our model through the hyperparameter settings on the prior specification of $q_{ij}$, the probability that the edge between protein $(i,j)$ will be selected. In particular, we impose an informative prior on $\pi(q_{ij}) \sim \operatorname{Beta}(a_{ij},b_{ij})$ and set the hyperparameters $a_{ij}$ and $b_{ij}$ such that the distribution has a higher mean to reflect our prior knowledge of the presence of an edge. For example, one could set the following:
\begin{itemize}
\item prior on $q_{ij}$ as $\operatorname{Beta}(2,10)$ with mean 0.17, if there is biological evidence that the edge does not play an important role in the pathway;
\item prior on $q_{ij}$ as $\operatorname{Beta}(10,2)$ with mean 0.83, if there is biological evidence that the edge plays an important role in the pathway;
\item prior on $q_{ij}$ as $\operatorname{Beta}(2,2)$ with mean 0.5, if no prior information is available. \end{itemize}
The prior information incorporated in the $q_{ij}$'s from online databases is assumed to represent normal conditions only. Information on relations between proteins that is affected by an intervention and/or mutation can be elicited by expert opinion (e.g., from a biologist). Information on the edges of graphs that is perturbed by a mutation can be incorporated formally through our prior on $\pi_{ij}$, the probability that controls the differential/conserved edge between two different conditions. We specify informative priors in a manner analogous to that of $q_{ij}$ (as shown above) in cases where such information exists by setting $e_{ij},f_{ij}$ similarly to $a_{ij}$ and $b_{ij}$. Finally, for the hyperparameters of the variance components, we obtain a vague inverse gamma prior by setting $(g,h)=1$ and set the hyperparameters for the beta prior on $h_o$ to be diffuse using $(\eta,\zeta)=2$.
\section{FDR-based determination of significant networks}\label{sec3} Once we apply the MCMC methods, we are left with posterior samples of the model parameters that we can use to perform Bayesian inference. Our objective is twofold: to detect the ``best'' network/pathway based on the significance of the edges and also to detect differential networks between treatment classes. Given $p$ proteins, our network consists of $p(p-1)/2$ unique edges, which could be large even for a moderate number of proteins. Therefore, we need a mechanism that will control for these large-scale comparisons, discover edges that are significant and also detect differential edges between classes. We accomplish this in a statistically coherent manner using false discovery rate (FDR)-based thresholding to find significant networks and also to differentiate networks across samples.
The MCMC samples explore the distribution of possible network configurations suggested by the data, with each configuration leading to a different topology of the network based on the model parameters. Some edges that are strongly supported by the data may appear in most of the MCMC samples, whereas others with less evidence may appear less often. There are different ways to summarize this information in the samples. One could choose the most likely (posterior mode) network configuration and conduct conditional inference on this particular network topology. The benefit of this approach would be the yielding of a single set of defined edges, but the drawback is that the most likely configuration may still appear only in a very small proportion of MCMC samples. Alternatively, one could use all of the MCMC samples and, applying Bayesian model averaging (BMA) [\citet{hoeting99}], mix the inference over the various configurations visited by the sampler. This approach better accounts for the uncertainty in the data, leads to estimators of the precision matrix with the smallest mean squared error and should lead to better predictive performance in class predictions [\citet{raftery97}]. We will use this Bayesian model averaging approach.
From our MCMC iterations, suppose we have $M$ posterior samples of the corresponding parameter set $\{ A^{(m)}_{ij}, m=1,\ldots,M\}$, for which the selection indicator of the $ij$th edge is in the model. Suppose further that the model averaged set of posterior probabilities is set $\mathcal{P}$, the ${ij}$th element\vspace*{1pt} of which $\mathcal{P}_{ij}= M^{-1}\sum_m A^{(m)}_{ij}$ and is a $p\times p$-dimensional matrix. Note that $1-\mathcal{P}_{ij}$ can be considered Bayesian \mbox{$q$-}values, or estimates of the local false discovery rate [\citet{storey03,newton04}], as they measure the probability of a false positive if the $ij$th edge is called a ``discovery'' or is significant. Given a desired global FDR bound $\alpha\in(0,1)$, we can determine a threshold $\phi_{\alpha}$ with which to flag a set of edges $\mathcal{X}_\phi=\{(i,j)\dvtx \mathcal{P}_{ij} \geq\phi_{\alpha}\}$ as significant edges.
The significance threshold $\phi_{\alpha}$ can be determined based on classical Bayesian utility considerations such as those described in \citet{muller04} and based on the elicited relative costs of false-positive and false-negative errors or can be set to control the average Bayesian FDR, as in \citet{morris08}. The latter is the process we follow here. For example, suppose we are interested in finding the value $\phi_\alpha$ that controls the overall average FDR at some level $\alpha$, meaning that we expect that only $100\alpha\%$ of the edges that are declared significant are in fact false positives. Let $\operatorname{vec}(\mathcal{P}) = [\mathcal{P}_{t}; t=1,\ldots,p(p-1)/2]$ be the vectorized version of the set $\mathcal{P}$ containing the unique posterior probabilities of the edges, stacked columnwise. We first sort $\mathcal{P}_t$ in descending order to yield $\mathcal {P}_{(t)},t=1,\ldots,p(p-1)/2$. Then $\phi_\alpha=\mathcal{P}_{(\xi)}$, where $\xi=\max\{j^*\dvtx j^{*-1}\sum_{j=1}^{j^*} \mathcal{P}_{(t)} \le \alpha\}$. The set of regions $\mathcal{X}_{\phi_\alpha}$ then can be claimed to be significant edges based on an average Bayesian FDR of $\alpha$.
This FDR-based thresholding procedure can be extended to find differential networks between different populations (tumor classes/subtypes), for example, to identify edges that are significantly different between tumor types. To this end, we use the corresponding parameter set $\{ \lambda^{(m)}_{ij}, m=1,\ldots,M\}$, which is the selection indicator of the differential edge between the $ij$th covariates in the model. The model averaged set of posterior probabilities is set $\mathcal{P}^d$, the ${ij}$th element of which $\mathcal{P}^d_{ij}= M^{-1}\sum_m \lambda^{(m)}_{ij}$. We use this same procedure to arrive at a set of differential edges $\mathcal{X}_\phi=\{ (i,j)\dvtx \mathcal{P}^{d}_{ij} \geq\phi_{\alpha}\}$ with $\phi_{\alpha}$ chosen to control the Bayesian\vspace*{-2pt} FDR at level~$\alpha$. We use a similar procedure on the parameter set $\{1-\lambda^{(m)}_{ij}, m=1,\ldots,M\} $, to arrive at a set of common edges $\mathcal{X}_\phi=\{(i,j)\dvtx \mathcal {P}^{c}_{ij} \geq\phi_{\alpha}\}$ with $\phi_{\alpha}$ chosen to control the Bayesian FDR at level $\alpha$.
\section{Data analysis}\label{sec4}
\subsection{Classification of breast and ovarian cancer cell lines}\label{sec4.1}
Breast and ovarian cancer are two of the leading causes of cancer-related deaths in women [\citet{jemal09}]. Both of these diseases are frequently affected by mutations in kinase signaling cascades, particularly those involving components of the PI3K-AKT pathway [\citet{mills03,hennessy08,yuan08,bast09}]. The PI3K-AKT pathway is one of the most important signaling networks in carcinogenesis [\citet{vivanco02}] and is affected more than any other signaling pathway by activating mutations in cancer tissues [\citet{yuan08}]. Aggressive drug development efforts have targeted this critical oncogenic pathway, and inhibitors of multiple different components of the PI3K-AKT pathway have been developed and are in various stages of preclinical and clinical testing [\citet {hennessy05,courtney10}].
We applied our methodology to identify differences in the regulation of the PI3K-AKT signaling network in breast and ovarian cancers. For this analysis, we used expression data of $p=50$ protein markers in signaling pathways from an RPPA analysis of human breast ($n_1=51$) and ovarian ($n_2=31$) cancer cell lines grown under normal tissue culture conditions [\citet{stemke08}]. We used the known connections in the PI3K-AKT pathway suggested by previous studies and expert opinion as {a priori} information in our model, as stated in Section~\ref{sec2.3}.
\begin{sidewaysfigure}
\includegraphics{722f02a.eps}
\centering{\footnotesize{(a) Breast network}}\vspace*{6pt}
\includegraphics{722f02b.eps}
\centering{\footnotesize{(b) Ovary network}} \caption{Significant edges for the proteins in the PI3K-AKT kinase pathway for breast \textup{(a)} and ovarian cancer cell lines \textup{(b)} computed using a Bayesian FDR of 0.10. The red (green) lines between the proteins indicate a negative (positive) correlation between the proteins. The thickness of the edges corresponds to the strength of the associations, with stronger associations having greater thickness.}\label{brnet} \end{sidewaysfigure}
The significant networks based on a Bayesian FDR cutoff of $\alpha = 0.1$ for breast and ovarian cancer samples are shown in Figure~\ref{brnet}(a)~and~(b), respectively. The red edges indicate a negative association (regulation) and the green edges indicate a positive interaction between the proteins. The edges are represented by lines of varying degrees of thickness based on the strength of the association (correlation), with higher weights having thicker edges and lower weights having thinner edges. In order to identify biological similarities and differences between the breast and ovarian cancer cell lines, we compared the results of our network analyses of the two cancer types. Plotted in Figure~\ref{fig3}(a) are the conserved (common) edges between the two cancer types. The differential network between the two cancer types, controlling for a Bayesian FDR cutoff of $\alpha = 0.1$, is shown in Figure~\ref{fig3}(b).
\begin{figure}
\caption{Conserved and differential networks for the proteins in the PI3K-AKT kinase pathway between breast and ovarian cancer cell lines computed using a Bayesian FDR set to 0.10. In the conserved network (top panel), the red (green) lines between the proteins indicate a negative (positive) correlation between the proteins. In the differential network (bottom row), the blue lines between the proteins indicate a relationship that was significant in the ovarian cancer cell lines but not in the breast cancer cell lines; the orange lines between the proteins indicate a relationship in the breast cancer cell lines but not in the ovarian cancer cell lines. The thickness of the edges corresponds to the strength of the associations, with stronger associations having greater thickness.}
\label{fig3}
\end{figure}
A number of protein--protein relationships demonstrated significant similarity between the two cancer types. For example, both breast cancer and ovarian cancer cell lines exhibited a marked negative association between the \mbox{levels} of PTEN and phosphorylated AKT (Akt.pT308). This relationship was expected due to the critical regulation of 3-phopshatidylinositols by the lipid phosphatase activity of PTEN, and has previously been demonstrated as a significant interaction in multiple tumor types [\citeauthor{davies98} (\citeyear{davies98,davies99,davies09}), \citet{stemke08,vasudevan09,park10}]. Although this concordance was expected, our analysis also identified a large network of \mbox{differential} protein interactions among the breast and ovarian cancer cell lines [Figure~\ref{fig3}(b)]. In this figure, the edges in blue indicate relationships between proteins that were present in the ovarian cancer cell lines but not in the breast cancer cell lines using our FDR cutoff, and the orange edges indicate relationships present in the breast cancer cell lines but not in the ovarian cancer cell lines. In addition, the thickness of the edges corresponds to the strength of the association. Notable differential connections in this analysis include the association of phosphorylated AKT (Akt.pS473) with BCL-2 (Bcl2) and phosphorylated MAPK (MAPK.pT202.Y204) in breast cancer. Both of these, BCL-2(Bcl2) and phosphorylated (activated) MAPK (MAPK.pT202.Y204), may contribute to tumor proliferation and survival, and are therapeutic targets with available inhibitors. The \mbox{association} of different proteins with the expression of the estrogen receptor, phosphorylated PDK1 (PDK1.pS241) and MAPK (MAPK.pT202.Y204) in breast cancer and phosphorylated AMPK (AMPK.pT172) in ovarian cancer, may also have therapeutic implications, as the estrogen-receptor blockade is a treatment used in both advanced breast and ovarian cancer.
We used this network information to build a classifier to distinguish between breast cancer and ovarian cancer samples as explained in Section~\ref{sec2}. We assessed the performance of the classifiers using cross-validation techniques. {In particular, we generated 100 random selections of training and test data sets with 66\% and 33\% splits of the data, respectively. We fit our Bayesian graph-based classifier (BGBC) and compared our method to six other methods: the network-based support vector machine (SVM) [\citet{zhu2009}], $K$-nearest neighbor (KNN), linear discriminant analysis (LDA), diagonal linear discriminant analysis (DLDA), diagonal quadratic discriminant analysis (DQDA) and naive Bayes classifier (NBC) [\citet{john1995}] methods. We implemented the network-based SVM using the R package ``pathclass.'' The network structure was specified to be the common network for the two classes obtained from the BGBC algorithm, as this method does not explicitly estimate the network. All other input parameters were set at the default settings for the network-based SVM function. We implemented all the other methods using the corresponding MATLAB functions.}
\begin{table} \tabcolsep=0pt \caption{Misclassification error rates for different classifiers for ovarian and breast cancer data sets. The methods compared here are SVM (network-based support vector machine), LDA (linear discriminant analysis), KNN ($K$-nearest neighbor), DQDA (diagonal quadratic discriminant analysis), DLDA (diagonal linear discriminant analysis), NBC (naive Bayes classifier) and BGBC (Bayesian graph-based classifier), which is the method proposed in this paper with and without incorporating prior information, denoted by BGBC (prior) and BGBC (w/o prior), respectively. The mean and the standard deviation are values of the misclassification percentage over 100 random splits of the data}\label{tabl3}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcccccccc@{}} \hline & \textbf{SVM} & \textbf{KNN} & \textbf{LDA} & \textbf{DLDA} & \textbf{DQDA} & \textbf{NBC} & \textbf{BGBC} & \textbf{BGBC w/o prior} \\ \hline Mean & 8.03 & 15.14 & 25.48 & 12.07 & 13.74 & 13.37 & 6.59 & 10.88 \\ SD & 5.44 & \phantom{0}6.82 & 10.63& \phantom{0}5.829 & 6.70 & \phantom{0}6.96 & 4.06 & \phantom{0}6.31\\ \hline \end{tabular*} \end{table}
The average misclassification errors (along with standard errors) across all splits for all the methods on the test set are shown in Table~\ref{tabl3}. The BGBC method had much lower misclassification rates compared to the other methods (the other methods ignore the underlying network structure of the proteins). We believe that this improved precision is due to the fact that the mean expression profiles of the breast and ovarian cancer cell lines are very similar so there is not enough information in the mean to classify the two cases. Hence, means-based classifiers, especially KNN and LDA (both of which use identity and diagonal covariances), underperform as compared to our method. The results of the DQDA method could be a bit closer to those of the BGBC method, but the former method ignores the cross-connections, that is, network information, and hence results in a higher misclassification rate. The QDA could not be performed because the estimation of different covariance matrices for different classes is an ill-posed problem for $n<p$. { We also tested the performance of BGBC using prior information and without using prior information in estimating the networks. The last two columns of Table~\ref{tabl3} show that incorporating prior information improves our classification performance. Furthermore, the inclusion of prior information leads to sparser networks (as shown in Figure~7 in the supplementary material [\citet{suppl}]), as the prior information provides information about important and unimportant relationships, which aids our classification model. }
We further note that nonlinear (quadratic) boundaries are obtained by using network information (since we model the covariance matrix), whereas linear boundaries are obtained by ignoring the network information (linear/diagonal \mbox{discriminant} based approaches). The classification boundary (Figure~8 in the supplementary material [\citet{suppl}]) exemplifies our intuition and approach. We have a $p (=50)$-dimensional quadratic classification boundary based on the GGM. In order to visualize this, we projected the boundary and the data onto two randomly selected dimensions/covariates. Two of those projections are shown in the figure, which confirm our intuition of how nonlinear boundary is more effective than a linear boundary in classification.
\subsection{Effects of tissue culture conditions on network topology}\label{sec4.2}
Cell lines derived from tumors are a powerful research tool, as they allow for detailed characterization and functional testing. Genetic studies support the concept that cell lines generally mirror the changes that are detected in tumors, particularly at the DNA and RNA levels [\citet{neve06}]. However, the activation status of proteins can be impacted by the use of different environmental conditions in the culturing of cells. A key scientific question in the analysis of protein networks in cancer cell lines is the variability of network topologies due to differing tissue culture conditions. In order to assess if different network connectivity is observed under varying culture conditions, we used three different tissue culture conditions to grow the 31 ovarian cancer cell lines used in the previous analysis.
For condition ``A,'' the cells were grown in tissue culture media that was supplemented with growth factors in the form of fetal calf serum (5\% of the total volume), which is a standard condition for the culturing of cancer cells. For condition ``B,'' the cells were harvested after being cultured in the absence of growth factors (serum) for 24 hours. For condition ``C,'' cells were grown in the absence of growth factors for 24 hours, then they were stimulated acutely (20 minutes) with growth factors (5\% fetal calf serum). Proteins were harvested from each cell line for each tissue culture condition. The experimental procedure used for the isolation and RPPA analysis of proteins from the cancer cells growing under normal, serum-replete tissue culture conditions has been described previously [\citet{davies09,park10}]. Protein isolation, processing and RPPA analysis were performed using the same methodology for all three conditions.
\begin{figure}
\caption{Conserved and differential networks for the proteins in the PI3K-AKT kinase pathway between ovarian cancer cell lines grown in three different tissue culture conditions: A, B and C (see main text) computed using a Bayesian FDR set to 0.10. In the conserved network [\textup{(a)--(c)}], the red (green) lines between the proteins indicate a negative (positive) correlation between the proteins. In the differential network [\textup{(d)--(f)}], the blue lines between the proteins indicate a relationship that was significant in the ovarian cancer cell lines but not in the breast cancer cell lines; the orange lines between the proteins indicate a relationship in the breast cancer cell lines but not in the ovarian cancer cell lines. The thickness of the edges corresponds to the strength of the associations, with stronger associations having greater thickness.}
\label{fig4}
\end{figure}
The RPPA data for each condition were then analyzed for protein--protein interactions using the GGM method. The topology maps for the ovarian cancer cells for the A, B and C tissue culture conditions are shown in Figure~12(a), (b) and~(c) (provided in the supplementary material [\citet{suppl}]), respectively. We then performed comparisons of the results based on each of the three conditions in order to identify protein topology networks that were similar and different between each of the tissue culture conditions. As conditions A (media replete with growth factor) and B (media starved of growth factor) both represented steady-state tissue culture conditions, we initially compared these protein networks using a Bayesian FDR of 10\%. The networks that are shared between the two conditions are shown in Figure~\ref{fig4}(a); the differential associations are presented in Figure~\ref{fig4}(d). We detected 21 significant protein interactions that were common for conditions A and B, and 4 interactions that were different. Thus, the overwhelming majority of protein--protein associations that were observed were maintained regardless of the presence or absence of growth factors (serum) in the tissue culture media. We then compared the significant relationships identified for condition B (media starved of growth factor) versus condition C (media starved, then acutely stimulated with growth factor). This comparison showed increased discordance of results, as we detected 20 associations that were common for conditions B and C [Figure~\ref{fig4}(b)], but 11 associations that differed significantly [Figure~\ref{fig4}(e)]. Similarly, the comparison of networks between the A and C conditions identified 22 shared protein interactions [Figure~\ref{fig4}(c)] and 12 differential interactions [Figure~\ref{fig4}(f)]. Of the differential interactions noted for the comparisons of conditions B versus C and A versus C, only 2 were observed in both comparisons (c-KIT and P38; VEGFR2 and MAPK.pT202.Y204). Neither of these 2 relationships was among the differential protein interactions in the analysis of condition A versus condition B. Of the 4 relationships that differed in the comparison of condition A versus condition B, 3 of the relationships were also identified as differing significantly when comparing condition B versus condition C (eIF4E and P38.pT180.Y182; c-Kit and PARP.cleaved; PARP.cleaved and ER.alpha), and the fourth differed significantly for the comparison of condition A versus condition C (AMPK.pT172 and eIF4E). This analysis suggests that protein--protein relationships are largely maintained under steady-state tissue culture conditions. However, these interactions may differ significantly in the setting of acute growth factor stimulation. We have included the explicit comparisons of our inferred networks with the prior PI3K-AKT pathway in Figures~13--16 in the supplementary material [\citet{suppl}]. The posterior means of the covariance matrices corresponding to the networks are also now plotted as heat maps in Figures~17--20 in the supplementary material [\citet{suppl}]. The exact posterior mean estimates are also provided as excel files downloadable from the corresponding authors' website at \url{http://odin.mdacc.tmc.edu/~vbaladan/Veera_Home_Page/Software_files/Covariance_Matrices.xlsx}.
\section{Discussion and conclusions}\label{sec5} We present methodology to model sparse graphical models in the presence of class variables in high-dimensional settings, with a particular focus on protein signaling networks. Our methods allow for borrowing strength between classes to assess differential and common networks between the classes of cancer/tumor conditions. In addition, our method allows for the effective use of prior information about signaling pathways that is already available to us from various sources to help in decoding the complex protein networks. Improved understanding of the differential networks can be crucial for biologists when designing their experiments, allowing them to concentrate on the most important factors that distinguish tumor types. Such information may also help to narrow the drug targets for specific types of cancer. Knowledge of the common networks can be used to develop a drug for two different types of cancer that targets proteins that are active in both types. Data on the differential edges may be used as a good screening analysis, allowing researchers to eliminate unimportant proteins and concentrate on effective proteins when designing advanced patient-based translational experiments.
In this article we focused on undirected graphical models and not on directed (casual) networks. Directed graphical models, such as Bayesian networks and directed acyclic graphs (DAGs), have explicit causal modeling goals that require further modeling assumptions. In our formulation, we provide a natural and useful technical step in the identification of high posterior probability undirected graphical models, assuming a random sampling paradigm. In addition, our models infer network topologies that assume a steady-state network. Some of the protein networks may be dependent on causal relations between the nodes, which would require us to model data over time to infer the complete dynamics of the network. We leave this task for future consideration.
With regard to computation time, our MCMC chains are fairly fast for high-dimensional data sets such as those we considered, with a 5000-iteration run taking about 15 minutes. The source code, in MATLAB (The Mathworks, Inc., Natick, MA), takes advantage of several matrix optimizations available in that language environment. The computationally-involved step is the imposition of a positive definiteness on the correlation matrix. Optimizations to the code have been made by porting some functions into C. The software is available by contacting the first author.
Our main motivation for this work was to provide a constructive framework to conduct classification using sparse graphical methods that incorporate prior information. We assume parametric structures (likelihood/priors) throughout for ease of interpretation and computation, and our results indicate that this performs reasonably well on both real and simulated data sets. Extending to nonparametric settings would be an excellent avenue of future research that we would wish to undertake.
\section*{Acknowledgments} The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.
\begin{supplement}[id=suppA] \stitle{Supplement to ``Bayesian sparse graphical models for classification with application to protein expression data''} \slink[doi]{10.1214/14-AOAS722SUPP}
\sdatatype{.pdf} \sfilename{aoas722\_supp.pdf} \sdescription{The supplementary material includes Appendix~A: Positive definiteness constraint, Appendix B: Full conditional distributions and Appendix C: Simulations.} \end{supplement}
\begin{thebibliography}{69}
\bibitem[\protect\citeauthoryear{Baladandayuthapani et al.}{2014}]{suppl}
\begin{bmisc}[auto] \bauthor{\bsnm{Baladandayuthapani},~\bfnm{Veerabhadran}\binits{V.}}, \bauthor{\bsnm{Talluri},~\bfnm{Rajesh}\binits{R.}}, \bauthor{\bsnm{Ji},~\bfnm{Yuan}\binits{Y.}}, \bauthor{\bsnm{Coombes},~\bfnm{Kevin R.}\binits{K.~R.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}}, \bauthor{\bsnm{Hennessy},~\bfnm{Bryan T.}\binits{B.~T.}}, \bauthor{\bsnm{Davies},~\bfnm{Michael A.}\binits{M.~A.}} \AND \bauthor{\bsnm{Mallick},~\bfnm{Bani K.}\binits{B.~K.}} (\byear{2014}). \bhowpublished{Supplement to ``Bayesian sparse graphical models for classification with application to protein expression data''. DOI:\doiurl{10.1214/14-AOAS722SUPP}.} \end{bmisc}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Barnard, McCulloch and Meng}{2000}]{barnard2000}
\begin{barticle}[mr] \bauthor{\bsnm{Barnard},~\bfnm{John}\binits{J.}}, \bauthor{\bsnm{McCulloch},~\bfnm{Robert}\binits{R.}} \AND \bauthor{\bsnm{Meng},~\bfnm{Xiao-Li}\binits{X.-L.}} (\byear{2000}). \btitle{Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage}. \bjournal{Statist. Sinica} \bvolume{10} \bpages{1281--1311}. \bid{issn={1017-0405}, mr={1804544}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Bast Jr., Hennessy and Mills}{2009}]{bast09}
\begin{barticle}[author] \bauthor{\bsnm{Bast} \bfnm{C.~R.}\binits{C.~R.} \bsuffix{Jr.}}, \bauthor{\bsnm{Hennessy},~\bfnm{B.}\binits{B.}} \AND \bauthor{\bsnm{Mills},~\bfnm{G.~B.}\binits{G.~B.}} (\byear{2009}). \btitle{The biology of ovarian cancer: New opportunities for translation}. \bjournal{Nat. Rev. Cancer} \bvolume{9} \bpages{415--428}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Bickel and Levina}{2008}]{bickel08} \begin{barticle}[mr] \bauthor{\bsnm{Bickel},~\bfnm{Peter~J.}\binits{P.~J.}} \AND \bauthor{\bsnm{Levina},~\bfnm{Elizaveta}\binits{E.}} (\byear{2008}). \btitle{Regularized estimation of large covariance matrices}. \bjournal{Ann. Statist.} \bvolume{36} \bpages{199--227}. \bid{doi={10.1214/009053607000000758}, issn={0090-5364}, mr={2387969}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Blower et~al.}{2007}]{blower07}
\begin{barticle}[pbm] \bauthor{\bsnm{Blower},~\bfnm{Paul~E.}\binits{P.~E.}}, \bauthor{\bsnm{Verducci},~\bfnm{Joseph~S.}\binits{J.~S.}}, \bauthor{\bsnm{Lin},~\bfnm{Shili}\binits{S.}}, \bauthor{\bsnm{Zhou},~\bfnm{Jin}\binits{J.}}, \bauthor{\bsnm{Chung},~\bfnm{Ji-Hyun}\binits{J.-H.}}, \bauthor{\bsnm{Dai},~\bfnm{Zunyan}\binits{Z.}}, \bauthor{\bsnm{Liu},~\bfnm{Chang-Gong}\binits{C.-G.}}, \bauthor{\bsnm{Reinhold},~\bfnm{William}\binits{W.}}, \bauthor{\bsnm{Lorenzi},~\bfnm{Philip~L.}\binits{P.~L.}}, \bauthor{\bsnm{Kaldjian},~\bfnm{Eric~P.}\binits{E.~P.}}, \bauthor{\bsnm{Croce},~\bfnm{Carlo~M.}\binits{C.~M.}}, \bauthor{\bsnm{Weinstein},~\bfnm{John~N.}\binits{J.~N.}} \AND \bauthor{\bsnm{Sadee},~\bfnm{Wolfgang}\binits{W.}} (\byear{2007}). \btitle{MicroRNA expression profiles for the NCI-60 cancer cell panel}. \bjournal{Mol. Cancer Ther.} \bvolume{6} \bpages{1483--1491}. \bid{doi={10.1158/1535-7163.MCT-07-0009}, issn={1535-7163}, pii={1535-7163.MCT-07-0009}, pmid={17483436}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Carvalho and Scott}{2009}]{carvalho09}
\begin{barticle}[mr] \bauthor{\bsnm{Carvalho},~\bfnm{C.~M.}\binits{C.~M.}} \AND \bauthor{\bsnm{Scott},~\bfnm{J.~G.}\binits{J.~G.}} (\byear{2009}). \btitle{Objective {B}ayesian model selection in {G}aussian graphical models}. \bjournal{Biometrika} \bvolume{96} \bpages{497--512}. \bid{doi={10.1093/biomet/asp017}, issn={0006-3444}, mr={2538753}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Chaudhuri, Drton and~Richardson}{2007}]{chaudhuri07}
\begin{barticle}[mr] \bauthor{\bsnm{Chaudhuri},~\bfnm{Sanjay}\binits{S.}}, \bauthor{\bsnm{Drton},~\bfnm{Mathias}\binits{M.}} \AND \bauthor{\bsnm{Richardson},~\bfnm{Thomas~S.}\binits{T.~S.}} (\byear{2007}). \btitle{Estimation of a covariance matrix with zeros}. \bjournal{Biometrika} \bvolume{94} \bpages{199--216}. \bid{doi={10.1093/biomet/asm007}, issn={0006-3444}, mr={2307904}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Courtney, Corcoran and Engelman}{2010}]{courtney10}
\begin{barticle}[author] \bauthor{\bsnm{Courtney},~\bfnm{Kevin~D.}\binits{K.~D.}}, \bauthor{\bsnm{Corcoran},~\bfnm{Ryan~B.}\binits{R.~B.}} \AND \bauthor{\bsnm{Engelman},~\bfnm{Jeffrey~A.}\binits{J.~A.}} (\byear{2010}). \btitle{The PI3K pathway as drug target in human cancer}. \bjournal{J.~Clin. Oncol.} \bvolume{28} \bpages{1075--1083}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Cox and Wermuth}{2002}]{cox96}
\begin{barticle}[mr] \bauthor{\bsnm{Cox},~\bfnm{D.~R.}\binits{D.~R.}} \AND \bauthor{\bsnm{Wermuth},~\bfnm{Nanny}\binits{N.}} (\byear{2002}). \btitle{On some models for multivariate binary variables parallel in complexity with the multivariate {G}aussian distribution}. \bjournal{Biometrika} \bvolume{89} \bpages{462--469}. \bid{doi={10.1093/biomet/89.2.462}, issn={0006-3444}, mr={1913973}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Davies, Hennessy and Mills}{2006}]{davies06}
\begin{barticle}[pbm] \bauthor{\bsnm{Davies},~\bfnm{Michael}\binits{M.}}, \bauthor{\bsnm{Hennessy},~\bfnm{Bryan}\binits{B.}} \AND \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} (\byear{2006}). \btitle{Point mutations of protein kinases and individualised cancer therapy}. \bjournal{Expert Opin. Pharmacother.} \bvolume{7} \bpages{2243--2261}. \bid{doi={10.1517/14656566.7.16.2243}, issn={1744-7666}, pmid={17059381}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Davies et~al.}{1998}]{davies98}
\begin{barticle}[pbm] \bauthor{\bsnm{Davies},~\bfnm{M.~A.}\binits{M.~A.}}, \bauthor{\bsnm{Lu},~\bfnm{Y.}\binits{Y.}}, \bauthor{\bsnm{Sano},~\bfnm{T.}\binits{T.}}, \bauthor{\bsnm{Fang},~\bfnm{X.}\binits{X.}}, \bauthor{\bsnm{Tang},~\bfnm{P.}\binits{P.}}, \bauthor{\bsnm{LaPushin},~\bfnm{R.}\binits{R.}}, \bauthor{\bsnm{Koul},~\bfnm{D.}\binits{D.}}, \bauthor{\bsnm{Bookstein},~\bfnm{R.}\binits{R.}}, \bauthor{\bsnm{Stokoe},~\bfnm{D.}\binits{D.}}, \bauthor{\bsnm{Yung},~\bfnm{W.~K.}\binits{W.~K.}}, \bauthor{\bsnm{Mills},~\bfnm{G.~B.}\binits{G.~B.}} \AND \bauthor{\bsnm{Steck},~\bfnm{P.~A.}\binits{P.~A.}} (\byear{1998}). \btitle{Adenoviral transgene expression of MMAC/PTEN in human glioma cells inhibits Akt activation and induces anoikis}. \bjournal{Cancer Res.} \bvolume{58} \bpages{5285--5290}. \bid{issn={0008-5472}, pmid={9850049}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Davies et~al.}{1999}]{davies99}
\begin{barticle}[pbm] \bauthor{\bsnm{Davies},~\bfnm{M.~A.}\binits{M.~A.}}, \bauthor{\bsnm{Koul},~\bfnm{D.}\binits{D.}}, \bauthor{\bsnm{Dhesi},~\bfnm{H.}\binits{H.}}, \bauthor{\bsnm{Berman},~\bfnm{R.}\binits{R.}}, \bauthor{\bsnm{McDonnell},~\bfnm{T.~J.}\binits{T.~J.}}, \bauthor{\bsnm{McConkey},~\bfnm{D.}\binits{D.}}, \bauthor{\bsnm{Yung},~\bfnm{W.~K.}\binits{W.~K.}} \AND \bauthor{\bsnm{Steck},~\bfnm{P.~A.}\binits{P.~A.}} (\byear{1999}). \btitle{Regulation of Akt/PKB activity, cellular growth, and apoptosis in prostate carcinoma cells by MMAC/PTEN}. \bjournal{Cancer Res.} \bvolume{59} \bpages{2551--2556}. \bid{issn={0008-5472}, pmid={10363971}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Davies et~al.}{2009}]{davies09}
\begin{barticle}[pbm] \bauthor{\bsnm{Davies},~\bfnm{Michael~A.}\binits{M.~A.}}, \bauthor{\bsnm{Stemke-Hale},~\bfnm{Katherine}\binits{K.}}, \bauthor{\bsnm{Lin},~\bfnm{E.}\binits{E.}}, \bauthor{\bsnm{Tellez},~\bfnm{Carmen}\binits{C.}}, \bauthor{\bsnm{Deng},~\bfnm{Wanleng}\binits{W.}}, \bauthor{\bsnm{Gopal},~\bfnm{Yennu~N.}\binits{Y.~N.}}, \bauthor{\bsnm{Woodman},~\bfnm{Scott~E.}\binits{S.~E.}}, \bauthor{\bsnm{Calderone},~\bfnm{Tiffany~C.}\binits{T.~C.}}, \bauthor{\bsnm{Ju},~\bfnm{Zhenlin}\binits{Z.}}, \bauthor{\bsnm{Lazar},~\bfnm{Alexander~J.}\binits{A.~J.}}, \bauthor{\bsnm{Prieto},~\bfnm{Victor~G.}\binits{V.~G.}}, \bauthor{\bsnm{Aldape},~\bfnm{Kenneth}\binits{K.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} \AND \bauthor{\bsnm{Gershenwald},~\bfnm{Jeffrey~E.}\binits{J.~E.}} (\byear{2009}). \btitle{Integrated molecular and clinical analysis of AKT activation in metastatic melanoma}. \bjournal{Clin. Cancer Res.} \bvolume{15} \bpages{7538--7546}. \bid{doi={10.1158/1078-0432.CCR-09-1985}, issn={1078-0432}, mid={NIHMS150147}, pii={1078-0432.CCR-09-1985}, pmcid={2805170}, pmid={19996208}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Dempster}{1972}]{dempster72}
\begin{barticle}[author] \bauthor{\bsnm{Dempster},~\bfnm{A.~P.}\binits{A.~P.}} (\byear{1972}). \btitle{Covariance Selection}. \bjournal{Biometrics} \bvolume{28} \bpages{157--175}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Dobra et~al.}{2004}]{dobra03}
\begin{barticle}[mr] \bauthor{\bsnm{Dobra},~\bfnm{Adrian}\binits{A.}}, \bauthor{\bsnm{Hans},~\bfnm{Chris}\binits{C.}}, \bauthor{\bsnm{Jones},~\bfnm{Beatrix}\binits{B.}}, \bauthor{\bsnm{Nevins},~\bfnm{Joseph~R.}\binits{J.~R.}}, \bauthor{\bsnm{Yao},~\bfnm{Guang}\binits{G.}} \AND \bauthor{\bsnm{West},~\bfnm{Mike}\binits{M.}} (\byear{2004}). \btitle{Sparse graphical models for exploring gene expression data}. \bjournal{J. Multivariate Anal.} \bvolume{90} \bpages{196--212}. \bid{doi={10.1016/j.jmva.2004.02.009}, issn={0047-259X}, mr={2064941}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Ehrich et~al.}{2008}]{ehrich08}
\begin{barticle}[author] \bauthor{\bsnm{Ehrich},~\bfnm{Mathias}\binits{M.}}, \bauthor{\bsnm{Turner},~\bfnm{Julia}\binits{J.}}, \bauthor{\bsnm{Gibbs},~\bfnm{Peter}\binits{P.}}, \bauthor{\bsnm{Lipton},~\bfnm{Lara}\binits{L.}}, \bauthor{\bsnm{Giovanneti},~\bfnm{Mara}\binits{M.}}, \bauthor{\bsnm{Cantor},~\bfnm{Charles}\binits{C.}} \AND \bauthor{\bsnm{van~den Boom},~\bfnm{Dirk}\binits{D.}} (\byear{2008}). \btitle{Cytosine methylation profiling of cancer cell lines}. \bjournal{Proc. Natl. Acad. Sci. USA} \bvolume{105} \bpages{4844--4849}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Fan, Jin and Yao}{2013}]{Fan2013}
\begin{barticle}[mr] \bauthor{\bsnm{Fan},~\bfnm{Yingying}\binits{Y.}}, \bauthor{\bsnm{Jin},~\bfnm{Jiashun}\binits{J.}} \AND \bauthor{\bsnm{Yao},~\bfnm{Zhigang}\binits{Z.}} (\byear{2013}). \btitle{Optimal classification in sparse {G}aussian graphic model}. \bjournal{Ann. Statist.} \bvolume{41} \bpages{2537--2571}. \bid{doi={10.1214/13-AOS1163}, issn={0090-5364}, mr={3161437}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Friedman, Hastie and Tibshirani}{2008}]{friedman08}
\begin{barticle}[pbm] \bauthor{\bsnm{Friedman},~\bfnm{Jerome}\binits{J.}}, \bauthor{\bsnm{Hastie},~\bfnm{Trevor}\binits{T.}} \AND \bauthor{\bsnm{Tibshirani},~\bfnm{Robert}\binits{R.}} (\byear{2008}). \btitle{Sparse inverse covariance estimation with the graphical lasso}. \bjournal{Biostatistics} \bvolume{9} \bpages{432--441}. \bid{doi={10.1093/biostatistics/kxm045}, issn={1468-4357}, mid={NIHMS248717}, pii={kxm045}, pmcid={3019769}, pmid={18079126}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Gaur et~al.}{2007}]{gaur07}
\begin{barticle}[pbm] \bauthor{\bsnm{Gaur},~\bfnm{Arti}\binits{A.}}, \bauthor{\bsnm{Jewell},~\bfnm{David~A.}\binits{D.~A.}}, \bauthor{\bsnm{Liang},~\bfnm{Yu}\binits{Y.}}, \bauthor{\bsnm{Ridzon},~\bfnm{Dana}\binits{D.}}, \bauthor{\bsnm{Moore},~\bfnm{Jason~H.}\binits{J.~H.}}, \bauthor{\bsnm{Chen},~\bfnm{Caifu}\binits{C.}}, \bauthor{\bsnm{Ambros},~\bfnm{Victor~R.}\binits{V.~R.}} \AND \bauthor{\bsnm{Israel},~\bfnm{Mark~A.}\binits{M.~A.}} (\byear{2007}). \btitle{Characterization of microRNA expression levels and their biological correlates in human cancer cell lines}. \bjournal{Cancer Res.} \bvolume{67} \bpages{2456--2468}. \bid{doi={10.1158/0008-5472.CAN-06-2698}, issn={0008-5472}, pii={67/6/2456}, pmid={17363563}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Giudici and Green}{1999}]{giudici1999}
\begin{barticle}[mr] \bauthor{\bsnm{Giudici},~\bfnm{Paolo}\binits{P.}} \AND \bauthor{\bsnm{Green},~\bfnm{Peter~J.}\binits{P.~J.}} (\byear{1999}). \btitle{Decomposable graphical {G}aussian model determination}. \bjournal{Biometrika} \bvolume{86} \bpages{785--801}. \bid{doi={10.1093/biomet/86.4.785}, issn={0006-3444}, mr={1741977}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Halaban et~al.}{2010}]{halaban10}
\begin{barticle}[author] \bauthor{\bsnm{Halaban},~\bfnm{R.}\binits{R.}}, \bauthor{\bsnm{Zhang},~\bfnm{W.}\binits{W.}}, \bauthor{\bsnm{Bacchiocchi},~\bfnm{A.}\binits{A.}}, \bauthor{\bsnm{Cheng},~\bfnm{E.}\binits{E.}}, \bauthor{\bsnm{Parisi},~\bfnm{F.}\binits{F.}}, \bauthor{\bsnm{Ariyan},~\bfnm{S.}\binits{S.}}, \bauthor{\bsnm{Krauthammer},~\bfnm{M.}\binits{M.}}, \bauthor{\bsnm{McCusker},~\bfnm{J.~P.}\binits{J.~P.}}, \bauthor{\bsnm{Kluger},~\bfnm{Y.}\binits{Y.}} \AND \bauthor{\bsnm{Sznol},~\bfnm{M.}\binits{M.}} (\byear{2010}). \btitle{PLX4032, a~selective BRAF V600E kinase inhibitor, activates the ERK pathway and enhances cell migration and proliferation of BRAF WT melanoma cells}. \bjournal{Pigment Cell \& Melanoma Research} \bvolume{23} \bpages{190--200}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Hennessy et~al.}{2005}]{hennessy05}
\begin{barticle}[pbm] \bauthor{\bsnm{Hennessy},~\bfnm{Bryan~T.}\binits{B.~T.}}, \bauthor{\bsnm{Smith},~\bfnm{Debra~L.}\binits{D.~L.}}, \bauthor{\bsnm{Ram},~\bfnm{Prahlad~T.}\binits{P.~T.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}} \AND \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} (\byear{2005}). \btitle{Exploiting the PI3K/AKT pathway for cancer drug discovery}. \bjournal{Nat. Rev., Drug Discov.} \bvolume{4} \bpages{988--1004}. \bid{doi={10.1038/nrd1902}, issn={1474-1776}, pii={nrd1902}, pmid={16341064}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Hennessy et~al.}{2008}]{hennessy08}
\begin{barticle}[pbm] \bauthor{\bsnm{Hennessy},~\bfnm{Bryan~T.}\binits{B.~T.}}, \bauthor{\bsnm{Murph},~\bfnm{Mandi}\binits{M.}}, \bauthor{\bsnm{Nanjundan},~\bfnm{Meera}\binits{M.}}, \bauthor{\bsnm{Carey},~\bfnm{Mark}\binits{M.}}, \bauthor{\bsnm{Auersperg},~\bfnm{Nelly}\binits{N.}}, \bauthor{\bsnm{Almeida},~\bfnm{Jonas}\binits{J.}}, \bauthor{\bsnm{Coombes},~\bfnm{Kevin~R.}\binits{K.~R.}}, \bauthor{\bsnm{Liu},~\bfnm{Jinsong}\binits{J.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}}, \bauthor{\bsnm{Gray},~\bfnm{Joe~W.}\binits{J.~W.}} \AND \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} (\byear{2008}). \btitle{Ovarian cancer: Linking genomics to new target discovery and molecular markers--the way ahead}. \bjournal{Adv. Exp. Med. Biol.} \bvolume{617} \bpages{23--40}. \bid{doi={10.1007/978-0-387-69080-3_3}, issn={0065-2598}, mid={NIHMS185818}, pmcid={2844243}, pmid={18497028}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Hennessy et~al.}{2010}]{hennessy2010}
\begin{barticle}[author] \bauthor{\bsnm{Hennessy},~\bfnm{B.~T.}\binits{B.~T.}}, \bauthor{\bsnm{Lu},~\bfnm{Y.}\binits{Y.}}, \bauthor{\bsnm{Gonzalez-Angulo},~\bfnm{A.~M.}\binits{A.~M.}}, \bauthor{\bsnm{Carey},~\bfnm{M.~S.}\binits{M.~S.}}, \bauthor{\bsnm{Myhre},~\bfnm{S.}\binits{S.}}, \bauthor{\bsnm{Ju},~\bfnm{Z.}\binits{Z.}}, \bauthor{\bsnm{Davies},~\bfnm{M.~A.}\binits{M.~A.}}, \bauthor{\bsnm{Liu},~\bfnm{W.}\binits{W.}}, \bauthor{\bsnm{Coombes},~\bfnm{K.}\binits{K.}}, \bauthor{\bsnm{Meric-Bernstam},~\bfnm{F.}\binits{F.}} \betal{et~al.} (\byear{2010}). \btitle{A technical assessment of the utility of reverse phase protein arrays for the study of the functional proteome in nonmicrodissected human breast cancers}. \bjournal{Clinical Proteomics} \bvolume{6} \bpages{129--151}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Hoeting et~al.}{1999}]{hoeting99}
\begin{barticle}[mr] \bauthor{\bsnm{Hoeting},~\bfnm{Jennifer~A.}\binits{J.~A.}}, \bauthor{\bsnm{Madigan},~\bfnm{David}\binits{D.}}, \bauthor{\bsnm{Raftery},~\bfnm{Adrian~E.}\binits{A.~E.}} \AND \bauthor{\bsnm{Volinsky},~\bfnm{Chris~T.}\binits{C.~T.}} (\byear{1999}). \btitle{Bayesian model averaging: A tutorial}. \bjournal{Statist. Sci.} \bvolume{14} \bpages{382--401}.
\bid{doi={10.1214/ss/1009212519}, issn={0883-4237}, mr={1765176}} \bptnote{check related} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Jemal et~al.}{2009}]{jemal09}
\begin{barticle}[author] \bauthor{\bsnm{Jemal},~\bfnm{Ahmedin}\binits{A.}}, \bauthor{\bsnm{Siegel},~\bfnm{Rebecca}\binits{R.}}, \bauthor{\bsnm{Ward},~\bfnm{Elizabeth}\binits{E.}}, \bauthor{\bsnm{Hao},~\bfnm{Yongping}\binits{Y.}}, \bauthor{\bsnm{Xu},~\bfnm{Jiaquan}\binits{J.}} \AND \bauthor{\bsnm{Thun},~\bfnm{Michael~J.}\binits{M.~J.}} (\byear{2009}). \btitle{Cancer statistics, 2009}. \bjournal{CA Cancer J. Clin.} \bvolume{59} \bpages{225--249}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{John and Langley}{1995}]{john1995}
\begin{binproceedings}[author] \bauthor{\bsnm{John},~\bfnm{George~H.}\binits{G.~H.}} \AND \bauthor{\bsnm{Langley},~\bfnm{Pat}\binits{P.}} (\byear{1995}). \btitle{Estimating continuous distributions in Bayesian classifiers}. In \bbooktitle{Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence} \bpages{338--345}. \bpublisher{Morgan Kaufmann}, \blocation{San Francisco, CA}. \end{binproceedings}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Kelley and Ideker}{2005}]{ryan05}
\begin{barticle}[pbm] \bauthor{\bsnm{Kelley},~\bfnm{Ryan}\binits{R.}} \AND \bauthor{\bsnm{Ideker},~\bfnm{Trey}\binits{T.}} (\byear{2005}). \btitle{Systematic interpretation of genetic interactions using protein networks}. \bjournal{Nat. Biotechnol.} \bvolume{23} \bpages{561--566}. \bid{doi={10.1038/nbt1096}, issn={1087-0156}, mid={NIHMS166565}, pii={nbt1096}, pmcid={2814446}, pmid={15877074}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Lauritzen}{1996}]{lauritzen96}
\begin{bbook}[author] \bauthor{\bsnm{Lauritzen},~\bfnm{S.~L.}\binits{S.~L.}} (\byear{1996}). \btitle{Graphical Models}. \bpublisher{Claredon}, \blocation{Oxford}. \end{bbook}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Liechty, Liechty and M{\" u}ller}{2004}]{liechty03}
\begin{barticle}[mr] \bauthor{\bsnm{Liechty},~\bfnm{John~C.}\binits{J.~C.}}, \bauthor{\bsnm{Liechty},~\bfnm{Merrill~W.}\binits{M.~W.}} \AND \bauthor{\bsnm{M{\"u}ller},~\bfnm{Peter}\binits{P.}} (\byear{2004}). \btitle{Bayesian correlation estimation}. \bjournal{Biometrika} \bvolume{91} \bpages{1--14}. \bid{doi={10.1093/biomet/91.1.1}, issn={0006-3444}, mr={2050456}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Meinshausen and B{\" u}hlmann}{2006}]{meinshausen06}
\begin{barticle}[mr] \bauthor{\bsnm{Meinshausen},~\bfnm{Nicolai}\binits{N.}} \AND \bauthor{\bsnm{B{\"u}hlmann},~\bfnm{Peter}\binits{P.}} (\byear{2006}). \btitle{High-dimensional graphs and variable selection with the lasso}. \bjournal{Ann. Statist.} \bvolume{34} \bpages{1436--1462}. \bid{doi={10.1214/009053606000000281}, issn={0090-5364}, mr={2278363}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Miguel Hern{\'a}ndez-Lobato, Hern{\' a}ndez-Lobato and Su{\'a}rez}{2011}]{suarez2011}
\begin{barticle}[author] \bauthor{\bsnm{Miguel Hern{\'a}ndez-Lobato},~\bfnm{Jose}\binits{J.}}, \bauthor{\bsnm{Hern{\'a}ndez-Lobato},~\bfnm{Daniel}\binits{D.}} \AND \bauthor{\bsnm{Su{\'a}rez},~\bfnm{Alberto}\binits{A.}} (\byear{2011}). \btitle{Network-based sparse Bayesian classification}. \bjournal{Pattern Recogn.} \bvolume{44} \bpages{886--900}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Mills et~al.}{2003}]{mills03}
\begin{barticle}[pbm] \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}}, \bauthor{\bsnm{Kohn},~\bfnm{Elise}\binits{E.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}}, \bauthor{\bsnm{Eder},~\bfnm{Astrid}\binits{A.}}, \bauthor{\bsnm{Fang},~\bfnm{Xianjun}\binits{X.}}, \bauthor{\bsnm{Wang},~\bfnm{Hongwei}\binits{H.}}, \bauthor{\bsnm{Bast},~\bfnm{Robert~C.}\binits{R.~C.}}, \bauthor{\bsnm{Gray},~\bfnm{Joe}\binits{J.}}, \bauthor{\bsnm{Jaffe},~\bfnm{Robert}\binits{R.}} \AND \bauthor{\bsnm{Hortobagyi},~\bfnm{Gabriel}\binits{G.}} (\byear{2003}). \btitle{Linking molecular diagnostics to molecular therapeutics: Targeting the PI3K pathway in breast cancer}. \bjournal{Semin. Oncol.} \bvolume{30} \bpages{93--104}. \bid{issn={0093-7754}, pii={S0093775403004445}, pmid={14613030}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Mirzoeva et~al.}{2009}]{mirz09}
\begin{barticle}[author] \bauthor{\bsnm{Mirzoeva},~\bfnm{Olga~K.}\binits{O.~K.}}, \bauthor{\bsnm{Das},~\bfnm{Debopriya}\binits{D.}}, \bauthor{\bsnm{Heiser},~\bfnm{Laura~M.}\binits{L.~M.}}, \bauthor{\bsnm{Bhattacharya},~\bfnm{Sanchita}\binits{S.}}, \bauthor{\bsnm{Siwak},~\bfnm{Doris}\binits{D.}}, \bauthor{\bsnm{Gendelman},~\bfnm{Rina}\binits{R.}}, \bauthor{\bsnm{Bayani},~\bfnm{Nora}\binits{N.}}, \bauthor{\bsnm{Wang},~\bfnm{Nicholas~J.}\binits{N.~J.}}, \bauthor{\bsnm{Neve},~\bfnm{Richard~M.}\binits{R.~M.}}, \bauthor{\bsnm{Guan},~\bfnm{Yinghui}\binits{Y.}}, \bauthor{\bsnm{Hu},~\bfnm{Zhi}\binits{Z.}}, \bauthor{\bsnm{Knight},~\bfnm{Zachary}\binits{Z.}}, \bauthor{\bsnm{Feiler},~\bfnm{Heidi~S.}\binits{H.~S.}}, \bauthor{\bsnm{Gascard},~\bfnm{Philippe}\binits{P.}}, \bauthor{\bsnm{Parvin},~\bfnm{Bahram}\binits{B.}}, \bauthor{\bsnm{Spellman},~\bfnm{Paul~T.}\binits{P.~T.}}, \bauthor{\bsnm{Shokat},~\bfnm{Kevan~M.}\binits{K.~M.}}, \bauthor{\bsnm{Wyrobek},~\bfnm{Andrew~J.}\binits{A.~J.}}, \bauthor{\bsnm{Bissell},~\bfnm{Mina~J.}\binits{M.~J.}}, \bauthor{\bsnm{McCormick},~\bfnm{Frank}\binits{F.}}, \bauthor{\bsnm{Kuo},~\bfnm{Wen-Lin}\binits{W.-L.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}}, \bauthor{\bsnm{Gray},~\bfnm{Joe~W.}\binits{J.~W.}} \AND \bauthor{\bsnm{Korn},~\bfnm{W.~Michael}\binits{W.~M.}} (\byear{2009}). \btitle{Basal subtype and MAPK/ERK kinase (MEK)-phosphoinositide 3-kinase feedback signaling determine susceptibility of breast cancer cells to MEK inhibition}. \bjournal{Cancer Res.} \bvolume{69} \bpages{565--572}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Monni and Li}{2010}]{monni10}
\begin{barticle}[author] \bauthor{\bsnm{Monni},~\bfnm{Stefano}\binits{S.}} \AND \bauthor{\bsnm{Li},~\bfnm{Hongzhe}\binits{H.}} (\byear{2010}). \btitle{Bayesian methods for network-structured genomics data}. \bjournal{UPenn Biostatistics Working Papers} \bvolume{34}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Morris et~al.}{2008}]{morris08}
\begin{barticle}[mr] \bauthor{\bsnm{Morris},~\bfnm{Jeffrey~S.}\binits{J.~S.}}, \bauthor{\bsnm{Brown},~\bfnm{Philip~J.}\binits{P.~J.}}, \bauthor{\bsnm{Herrick},~\bfnm{Richard~C.}\binits{R.~C.}}, \bauthor{\bsnm{Baggerly},~\bfnm{Keith~A.}\binits{K.~A.}} \AND \bauthor{\bsnm{Coombes},~\bfnm{Kevin~R.}\binits{K.~R.}} (\byear{2008}). \btitle{Bayesian analysis of mass spectrometry proteomic data using wavelet-based functional mixed models}. \bjournal{Biometrics} \bvolume{64} \bpages{479--489}. \bid{doi={10.1111/j.1541-0420.2007.00895.x}, issn={0006-341X}, mr={2432418}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{M{\"u}ller et~al.}{2004}]{muller04}
\begin{barticle}[mr] \bauthor{\bsnm{M{\"u}ller},~\bfnm{Peter}\binits{P.}}, \bauthor{\bsnm{Parmigiani},~\bfnm{Giovanni}\binits{G.}}, \bauthor{\bsnm{Robert},~\bfnm{Christian}\binits{C.}} \AND \bauthor{\bsnm{Rousseau},~\bfnm{Judith}\binits{J.}} (\byear{2004}). \btitle{Optimal sample size for multiple testing: The case of gene expression microarrays}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{99} \bpages{990--1001}. \bid{doi={10.1198/016214504000001646}, issn={0162-1459}, mr={2109489}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Neeley et~al.}{2009}]{neeley09}
\begin{barticle}[pbm] \bauthor{\bsnm{Neeley},~\bfnm{E.~Shannon}\binits{E.~S.}}, \bauthor{\bsnm{Kornblau},~\bfnm{Steven~M.}\binits{S.~M.}}, \bauthor{\bsnm{Coombes},~\bfnm{Kevin~R.}\binits{K.~R.}} \AND \bauthor{\bsnm{Baggerly},~\bfnm{Keith~A.}\binits{K.~A.}} (\byear{2009}). \btitle{Variable slope normalization of reverse phase protein arrays}. \bjournal{Bioinformatics} \bvolume{25} \bpages{1384--1389}. \bid{doi={10.1093/bioinformatics/btp174}, issn={1367-4811}, pii={btp174}, pmcid={3968550}, pmid={19336447}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Neve et~al.}{2006}]{neve06}
\begin{barticle}[author] \bauthor{\bsnm{Neve},~\bfnm{R.~M.}\binits{R.~M.}}, \bauthor{\bsnm{Chin},~\bfnm{K.}\binits{K.}}, \bauthor{\bsnm{Fridlyand},~\bfnm{J.}\binits{J.}}, \bauthor{\bsnm{Yeh},~\bfnm{J.}\binits{J.}}, \bauthor{\bsnm{Baehner},~\bfnm{F.~L.}\binits{F.~L.}}, \bauthor{\bsnm{Fevr},~\bfnm{T.}\binits{T.}}, \bauthor{\bsnm{Clark},~\bfnm{L.}\binits{L.}}, \bauthor{\bsnm{Bayani},~\bfnm{N.}\binits{N.}}, \bauthor{\bsnm{Coppe},~\bfnm{J.~P.}\binits{J.~P.}}, \bauthor{\bsnm{Tong},~\bfnm{F.}\binits{F.}}, \bauthor{\bsnm{Speed},~\bfnm{T.}\binits{T.}}, \bauthor{\bsnm{Spellman},~\bfnm{P.~T.}\binits{P.~T.}}, \bauthor{\bsnm{DeVries},~\bfnm{S.}\binits{S.}}, \bauthor{\bsnm{Lapuk},~\bfnm{A.}\binits{A.}}, \bauthor{\bsnm{Wang},~\bfnm{N.~J.}\binits{N.~J.}}, \bauthor{\bsnm{Kuo},~\bfnm{W.~L.}\binits{W.~L.}}, \bauthor{\bsnm{Stilwell},~\bfnm{J.~L.}\binits{J.~L.}}, \bauthor{\bsnm{Pinkel},~\bfnm{D.}\binits{D.}}, \bauthor{\bsnm{Albertson},~\bfnm{D.~G.}\binits{D.~G.}}, \bauthor{\bsnm{Waldman},~\bfnm{F.~M.}\binits{F.~M.}}, \bauthor{\bsnm{McCormick},~\bfnm{F.}\binits{F.}}, \bauthor{\bsnm{Dickson},~\bfnm{R.~B.}\binits{R.~B.}}, \bauthor{\bsnm{Johnson},~\bfnm{M.~D.}\binits{M.~D.}}, \bauthor{\bsnm{Lippman},~\bfnm{M.}\binits{M.}}, \bauthor{\bsnm{Ethier},~\bfnm{S.}\binits{S.}}, \bauthor{\bsnm{Gazdar},~\bfnm{A.}\binits{A.}} \AND \bauthor{\bsnm{Gray},~\bfnm{J.~W.}\binits{J.~W.}} (\byear{2006}). \btitle{A collection of breast cancer cell lines for the study of functionally distinct cancer subtypes}. \bjournal{Cancer Cell} \bvolume{10} \bpages{515--527}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Newton et~al.}{2004}]{newton04}
\begin{barticle}[pbm] \bauthor{\bsnm{Newton},~\bfnm{Michael~A.}\binits{M.~A.}}, \bauthor{\bsnm{Noueiry},~\bfnm{Amine}\binits{A.}}, \bauthor{\bsnm{Sarkar},~\bfnm{Deepayan}\binits{D.}} \AND \bauthor{\bsnm{Ahlquist},~\bfnm{Paul}\binits{P.}} (\byear{2004}). \btitle{Detecting differential gene expression with a semiparametric hierarchical mixture method}. \bjournal{Biostatistics} \bvolume{5} \bpages{155--176}. \bid{doi={10.1093/biostatistics/5.2.155}, issn={1465-4644}, pii={5/2/155}, pmid={15054023}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Nishizuka et~al.}{2003}]{nishi03}
\begin{barticle}[author] \bauthor{\bsnm{Nishizuka},~\bfnm{Satoshi}\binits{S.}}, \bauthor{\bsnm{Chen},~\bfnm{Sing-Tsung}\binits{S.-T.}}, \bauthor{\bsnm{Gwadry},~\bfnm{Fuad~G.}\binits{F.~G.}}, \bauthor{\bsnm{Alexander},~\bfnm{Jes}\binits{J.}}, \bauthor{\bsnm{Major},~\bfnm{Sylvia~M.}\binits{S.~M.}}, \bauthor{\bsnm{Scherf},~\bfnm{Uwe}\binits{U.}}, \bauthor{\bsnm{Reinhold},~\bfnm{William~C.}\binits{W.~C.}}, \bauthor{\bsnm{Waltham},~\bfnm{Mark}\binits{M.}}, \bauthor{\bsnm{Charboneau},~\bfnm{Lu}\binits{L.}}, \bauthor{\bsnm{Young},~\bfnm{Lynn}\binits{L.}}, \bauthor{\bsnm{Bussey},~\bfnm{Kimberly~J.}\binits{K.~J.}}, \bauthor{\bsnm{Kim},~\bfnm{Sohyoung}\binits{S.}}, \bauthor{\bsnm{Lababidi},~\bfnm{Samir}\binits{S.}}, \bauthor{\bsnm{Lee},~\bfnm{Jae~K.}\binits{J.~K.}}, \bauthor{\bsnm{Pittaluga},~\bfnm{Stefania}\binits{S.}}, \bauthor{\bsnm{Scudiero},~\bfnm{Dominic~A.}\binits{D.~A.}}, \bauthor{\bsnm{Sausville},~\bfnm{Edward~A.}\binits{E.~A.}}, \bauthor{\bsnm{Munson},~\bfnm{Peter~J.}\binits{P.~J.}}, \bauthor{\bsnm{Petricoin},~\bfnm{Emmanuel~F.~III}\binits{E.~F.~I.}}, \bauthor{\bsnm{Liotta},~\bfnm{Lance~A.}\binits{L.~A.}}, \bauthor{\bsnm{Hewitt},~\bfnm{Stephen~M.}\binits{S.~M.}}, \bauthor{\bsnm{Raffeld},~\bfnm{Mark}\binits{M.}} \AND \bauthor{\bsnm{Weinstein},~\bfnm{John~N.}\binits{J.~N.}} (\byear{2003}). \btitle{Diagnostic markers that distinguish colon and ovarian adenocarcinomas: Identification by genomic, proteomic, and tissue array profiling}. \bjournal{Cancer Res.} \bvolume{63} \bpages{5243--5250}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Park et~al.}{2010}]{park10}
\begin{barticle}[author] \bauthor{\bsnm{Park},~\bfnm{Eun~Sung}\binits{E.~S.}}, \bauthor{\bsnm{Rabinovsky},~\bfnm{Rosalia}\binits{R.}}, \bauthor{\bsnm{Carey},~\bfnm{Mark}\binits{M.}}, \bauthor{\bsnm{Hennessy},~\bfnm{Bryan~T.}\binits{B.~T.}}, \bauthor{\bsnm{Agarwal},~\bfnm{Roshan}\binits{R.}}, \bauthor{\bsnm{Liu},~\bfnm{Wenbin}\binits{W.}}, \bauthor{\bsnm{Ju},~\bfnm{Zhenlin}\binits{Z.}}, \bauthor{\bsnm{Deng},~\bfnm{Wanleng}\binits{W.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}}, \bauthor{\bsnm{Woo},~\bfnm{Hyun~Goo}\binits{H.~G.}}, \bauthor{\bsnm{Kim},~\bfnm{Sang-Bae}\binits{S.-B.}}, \bauthor{\bsnm{Cheong},~\bfnm{Jae-Ho}\binits{J.-H.}}, \bauthor{\bsnm{Garraway},~\bfnm{Levi~A.}\binits{L.~A.}}, \bauthor{\bsnm{Weinstein},~\bfnm{John~N.}\binits{J.~N.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}}, \bauthor{\bsnm{Lee},~\bfnm{Ju-Seog}\binits{J.-S.}} \AND \bauthor{\bsnm{Davies},~\bfnm{Michael~A.}\binits{M.~A.}} (\byear{2010}). \btitle{Integrative analysis of proteomic signatures, mutations, and drug responsiveness in the NCI 60 cancer cell line set}. \bjournal{Mol. Cancer Ther.} \bvolume{9} \bpages{257--267}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Paweletz et~al.}{{2001}}]{paweletz01}
\begin{barticle}[author] \bauthor{\bsnm{Paweletz},~\bfnm{C.~P.}\binits{C.~P.}}, \bauthor{\bsnm{Charboneau},~\bfnm{L.}\binits{L.}}, \bauthor{\bsnm{Bichsel},~\bfnm{V.~E.}\binits{V.~E.}}, \bauthor{\bsnm{Simone},~\bfnm{N.~L.}\binits{N.~L.}}, \bauthor{\bsnm{Chen},~\bfnm{T.}\binits{T.}}, \bauthor{\bsnm{Gillespie},~\bfnm{J.~W.}\binits{J.~W.}}, \bauthor{\bsnm{Emmert-Buck},~\bfnm{M.~R.}\binits{M.~R.}}, \bauthor{\bsnm{Roth},~\bfnm{M.~J.}\binits{M.~J.}}, \bauthor{\bsnm{Petricoin},~\bfnm{E.~F.}\binits{E.~F.}} \AND \bauthor{\bsnm{Liotta},~\bfnm{L.~A.}\binits{L.~A.}} (\byear{2001}). \btitle{Reverse phase protein microarrays which capture disease progression show activation of pro-survival pathways at the cancer invasion front}. \bjournal{Oncogene} \bvolume{20} \bpages{1981--1989}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Raftery, Madigan and Hoeting}{1997}]{raftery97}
\begin{barticle}[mr] \bauthor{\bsnm{Raftery},~\bfnm{Adrian~E.}\binits{A.~E.}}, \bauthor{\bsnm{Madigan},~\bfnm{David}\binits{D.}} \AND \bauthor{\bsnm{Hoeting},~\bfnm{Jennifer~A.}\binits{J.~A.}} (\byear{1997}). \btitle{Bayesian model averaging for linear regression models}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{92} \bpages{179--191}. \bid{doi={10.2307/2291462}, issn={0162-1459}, mr={1436107}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Rapaport et~al.}{2007}]{rapaport07}
\begin{barticle}[pbm] \bauthor{\bsnm{Rapaport},~\bfnm{Franck}\binits{F.}}, \bauthor{\bsnm{Zinovyev},~\bfnm{Andrei}\binits{A.}}, \bauthor{\bsnm{Dutreix},~\bfnm{Marie}\binits{M.}}, \bauthor{\bsnm{Barillot},~\bfnm{Emmanuel}\binits{E.}} \AND \bauthor{\bsnm{Vert},~\bfnm{Jean-Philippe}\binits{J.-P.}} (\byear{2007}). \btitle{Classification of microarray data using gene networks}. \bjournal{BMC Bioinformatics} \bvolume{8} \bpages{35}. \bid{doi={10.1186/1471-2105-8-35}, issn={1471-2105}, pii={1471-2105-8-35}, pmcid={1797191}, pmid={17270037}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Rodr{\'{\i}}guez, Lenkoski and Dobra}{2011}]{rod2011}
\begin{barticle}[mr] \bauthor{\bsnm{Rodr{\'{\i}}guez},~\bfnm{Abel}\binits{A.}}, \bauthor{\bsnm{Lenkoski},~\bfnm{Alex}\binits{A.}} \AND \bauthor{\bsnm{Dobra},~\bfnm{Adrian}\binits{A.}} (\byear{2011}). \btitle{Sparse covariance estimation in heterogeneous samples}. \bjournal{Electron. J. Stat.} \bvolume{5} \bpages{981--1014}. \bid{doi={10.1214/11-EJS634}, issn={1935-7524}, mr={2836767}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Roverato}{2002}]{roverato02}
\begin{barticle}[mr] \bauthor{\bsnm{Roverato},~\bfnm{Alberto}\binits{A.}} (\byear{2002}). \btitle{Hyper inverse {W}ishart distribution for nondecomposable graphs and its application to {B}ayesian inference for {G}aussian graphical models}. \bjournal{Scand. J. Stat.} \bvolume{29} \bpages{391--411}. \bid{doi={10.1111/1467-9469.00297}, issn={0303-6898}, mr={1925566}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Shankavaram et~al.}{2007}]{shanka07}
\begin{barticle}[author] \bauthor{\bsnm{Shankavaram},~\bfnm{Uma~T.}\binits{U.~T.}}, \bauthor{\bsnm{Reinhold},~\bfnm{William~C.}\binits{W.~C.}}, \bauthor{\bsnm{Nishizuka},~\bfnm{Satoshi}\binits{S.}}, \bauthor{\bsnm{Major},~\bfnm{Sylvia}\binits{S.}}, \bauthor{\bsnm{Morita},~\bfnm{Daisaku}\binits{D.}}, \bauthor{\bsnm{Chary},~\bfnm{Krishna~K.}\binits{K.~K.}}, \bauthor{\bsnm{Reimers},~\bfnm{Mark~A.}\binits{M.~A.}}, \bauthor{\bsnm{Scherf},~\bfnm{Uwe}\binits{U.}}, \bauthor{\bsnm{Kahn},~\bfnm{Ari}\binits{A.}}, \bauthor{\bsnm{Dolginow},~\bfnm{Douglas}\binits{D.}}, \bauthor{\bsnm{Cossman},~\bfnm{Jeffrey}\binits{J.}}, \bauthor{\bsnm{Kaldjian},~\bfnm{Eric~P.}\binits{E.~P.}}, \bauthor{\bsnm{Scudiero},~\bfnm{Dominic~A.}\binits{D.~A.}}, \bauthor{\bsnm{Petricoin},~\bfnm{Emanuel}\binits{E.}}, \bauthor{\bsnm{Liotta},~\bfnm{Lance}\binits{L.}}, \bauthor{\bsnm{Lee},~\bfnm{Jae~K.}\binits{J.~K.}} \AND \bauthor{\bsnm{Weinstein},~\bfnm{John~N.}\binits{J.~N.}} (\byear{2007}). \btitle{Transcript and protein expression profiles of the NCI-60 cancer cell panel: An integromic microarray study}. \bjournal{Mol. Cancer Ther.} \bvolume{6} \bpages{820--832}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Stemke-Hale et~al.}{2008}]{stemke08}
\begin{barticle}[author] \bauthor{\bsnm{Stemke-Hale},~\bfnm{Katherine}\binits{K.}}, \bauthor{\bsnm{Gonzalez-Angulo},~\bfnm{Ana~Maria}\binits{A.~M.}}, \bauthor{\bsnm{Lluch},~\bfnm{Ana}\binits{A.}}, \bauthor{\bsnm{Neve},~\bfnm{Richard~M.}\binits{R.~M.}}, \bauthor{\bsnm{Kuo},~\bfnm{Wen-Lin}\binits{W.-L.}}, \bauthor{\bsnm{Davies},~\bfnm{Michael}\binits{M.}}, \bauthor{\bsnm{Carey},~\bfnm{Mark}\binits{M.}}, \bauthor{\bsnm{Hu},~\bfnm{Zhi}\binits{Z.}}, \bauthor{\bsnm{Guan},~\bfnm{Yinghui}\binits{Y.}}, \bauthor{\bsnm{Sahin},~\bfnm{Aysegul}\binits{A.}}, \bauthor{\bsnm{Symmans},~\bfnm{W.~Fraser}\binits{W.~F.}}, \bauthor{\bsnm{Pusztai},~\bfnm{Lajos}\binits{L.}}, \bauthor{\bsnm{Nolden},~\bfnm{Laura~K.}\binits{L.~K.}}, \bauthor{\bsnm{Horlings},~\bfnm{Hugo}\binits{H.}}, \bauthor{\bsnm{Berns},~\bfnm{Katrien}\binits{K.}}, \bauthor{\bsnm{Hung},~\bfnm{Mien-Chie}\binits{M.-C.}}, \bauthor{\bsnm{van~de Vijver},~\bfnm{Marc~J.}\binits{M.~J.}}, \bauthor{\bsnm{Valero},~\bfnm{Vicente}\binits{V.}}, \bauthor{\bsnm{Gray},~\bfnm{Joe~W.}\binits{J.~W.}}, \bauthor{\bsnm{Bernards},~\bfnm{Rene}\binits{R.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} \AND \bauthor{\bsnm{Hennessy},~\bfnm{Bryan~T.}\binits{B.~T.}} (\byear{2008}). \btitle{An integrative genomic and proteomic analysis of PIK3CA, PTEN, and AKT mutations in breast cancer}. \bjournal{Cancer Res.} \bvolume{68} \bpages{6084--6091}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Storey and Tibshirani}{2003}]{storey03}
\begin{barticle}[mr] \bauthor{\bsnm{Storey},~\bfnm{John~D.}\binits{J.~D.}} \AND \bauthor{\bsnm{Tibshirani},~\bfnm{Robert}\binits{R.}} (\byear{2003}). \btitle{Statistical significance for genomewide studies}. \bjournal{Proc. Natl. Acad. Sci. USA} \bvolume{100} \bpages{9440--9445}. \bid{doi={10.1073/pnas.1530509100}, issn={1091-6490}, mr={1994856}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Tabchy et~al.}{2011}]{tabchy2011}
\begin{barticle}[author] \bauthor{\bsnm{Tabchy},~\bfnm{A.}\binits{A.}}, \bauthor{\bsnm{Hennessy},~\bfnm{B.~T.}\binits{B.~T.}}, \bauthor{\bsnm{Gonzalez-Angulo},~\bfnm{A.~M.}\binits{A.~M.}}, \bauthor{\bsnm{Bernstam},~\bfnm{F.~M.}\binits{F.~M.}}, \bauthor{\bsnm{Lu},~\bfnm{Y.}\binits{Y.}} \AND \bauthor{\bsnm{Mills},~\bfnm{G.~B.}\binits{G.~B.}} (\byear{2011}). \btitle{Quantitative proteomic analysis in breast cancer.} \bjournal{Drugs of Today (Barcelona, Spain: 1998)} \bvolume{47} \bpages{169--182}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Tabus et~al.}{2006}]{tabus06}
\begin{barticle}[author] \bauthor{\bsnm{Tabus},~\bfnm{I.}\binits{I.}}, \bauthor{\bsnm{Hategan},~\bfnm{A.}\binits{A.}}, \bauthor{\bsnm{Mircean},~\bfnm{C.}\binits{C.}}, \bauthor{\bsnm{Rissanen},~\bfnm{J.}\binits{J.}}, \bauthor{\bsnm{Shmulevich},~\bfnm{I.}\binits{I.}} \AND \bauthor{\bsnm{Wei~Zhang Astola},~\bfnm{J.~and}\binits{J.~a.}} (\byear{2006}). \btitle{Nonlinear modeling of protein expressions in protein arrays}. \bjournal{IEEE Trans. Signal Process.} \bvolume{54} \bpages{2394--2407}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Tibes et~al.}{2006}]{tibes06}
\begin{barticle}[pbm] \bauthor{\bsnm{Tibes},~\bfnm{Raoul}\binits{R.}}, \bauthor{\bsnm{Qiu},~\bfnm{Yihua}\binits{Y.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}}, \bauthor{\bsnm{Hennessy},~\bfnm{Bryan}\binits{B.}}, \bauthor{\bsnm{Andreeff},~\bfnm{Michael}\binits{M.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} \AND \bauthor{\bsnm{Kornblau},~\bfnm{Steven~M.}\binits{S.~M.}} (\byear{2006}). \btitle{Reverse phase protein array: Validation of a novel proteomic technology and utility for analysis of primary leukemia specimens and hematopoietic stem cells}. \bjournal{Mol. Cancer Ther.} \bvolume{5} \bpages{2512--2521}. \bid{doi={10.1158/1535-7163.MCT-06-0334}, issn={1535-7163}, pii={5/10/2512}, pmid={17041095}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Varambally et~al.}{2005}]{varam05}
\begin{barticle}[author] \bauthor{\bsnm{Varambally},~\bfnm{Sooryanarayana}\binits{S.}}, \bauthor{\bsnm{Yu},~\bfnm{Jianjun}\binits{J.}}, \bauthor{\bsnm{Laxman},~\bfnm{Bharathi}\binits{B.}}, \bauthor{\bsnm{Rhodes},~\bfnm{Daniel~R.}\binits{D.~R.}}, \bauthor{\bsnm{Mehra},~\bfnm{Rohit}\binits{R.}}, \bauthor{\bsnm{Tomlins},~\bfnm{Scott~A.}\binits{S.~A.}}, \bauthor{\bsnm{Shah},~\bfnm{Rajal~B.}\binits{R.~B.}}, \bauthor{\bsnm{Chandran},~\bfnm{Uma}\binits{U.}}, \bauthor{\bsnm{Monzon},~\bfnm{Federico~A.}\binits{F.~A.}}, \bauthor{\bsnm{Becich},~\bfnm{Michael~J.}\binits{M.~J.}}, \bauthor{\bsnm{Wei},~\bfnm{John~T.}\binits{J.~T.}}, \bauthor{\bsnm{Pienta},~\bfnm{Kenneth~J.}\binits{K.~J.}}, \bauthor{\bsnm{Ghosh},~\bfnm{Debashis}\binits{D.}}, \bauthor{\bsnm{Rubin},~\bfnm{Mark~A.}\binits{M.~A.}} \AND \bauthor{\bsnm{Chinnaiyan},~\bfnm{Arul~M.}\binits{A.~M.}} (\byear{2005}). \btitle{Integrative genomic and proteomic analysis of prostate cancer reveals signatures of metastatic progression}. \bjournal{Cancer Cell} \bvolume{8} \bpages{393--406}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Vasudevan et~al.}{2009}]{vasudevan09}
\begin{barticle}[author] \bauthor{\bsnm{Vasudevan},~\bfnm{Krishna~M.}\binits{K.~M.}}, \bauthor{\bsnm{Barbie},~\bfnm{David~A.}\binits{D.~A.}}, \bauthor{\bsnm{Davies},~\bfnm{Michael~A.}\binits{M.~A.}}, \bauthor{\bsnm{Rabinovsky},~\bfnm{Rosalia}\binits{R.}}, \bauthor{\bsnm{McNear},~\bfnm{Chontelle~J.}\binits{C.~J.}}, \bauthor{\bsnm{Kim},~\bfnm{Jessica~J.}\binits{J.~J.}}, \bauthor{\bsnm{Hennessy},~\bfnm{Bryan~T.}\binits{B.~T.}}, \bauthor{\bsnm{Tseng},~\bfnm{Hsiuyi}\binits{H.}}, \bauthor{\bsnm{Pochanard},~\bfnm{Panisa}\binits{P.}}, \bauthor{\bsnm{Kim},~\bfnm{So~Young}\binits{S.~Y.}}, \bauthor{\bsnm{Dunn},~\bfnm{Ian~F.}\binits{I.~F.}}, \bauthor{\bsnm{Schinzel},~\bfnm{Anna~C.}\binits{A.~C.}}, \bauthor{\bsnm{Sandy},~\bfnm{Peter}\binits{P.}}, \bauthor{\bsnm{Hoersch},~\bfnm{Sebastian}\binits{S.}}, \bauthor{\bsnm{Sheng},~\bfnm{Qing}\binits{Q.}}, \bauthor{\bsnm{Gupta},~\bfnm{Piyush~B.}\binits{P.~B.}}, \bauthor{\bsnm{Boehm},~\bfnm{Jesse~S.}\binits{J.~S.}}, \bauthor{\bsnm{Reiling},~\bfnm{Jan~H.}\binits{J.~H.}}, \bauthor{\bsnm{Silver},~\bfnm{Serena}\binits{S.}}, \bauthor{\bsnm{Lu},~\bfnm{Yiling}\binits{Y.}}, \bauthor{\bsnm{Stemke-Hale},~\bfnm{Katherine}\binits{K.}}, \bauthor{\bsnm{Dutta},~\bfnm{Bhaskar}\binits{B.}}, \bauthor{\bsnm{Joy},~\bfnm{Corwin}\binits{C.}}, \bauthor{\bsnm{Sahin},~\bfnm{Aysegul~A.}\binits{A.~A.}}, \bauthor{\bsnm{Gonzalez-Angulo},~\bfnm{Ana~Maria}\binits{A.~M.}}, \bauthor{\bsnm{Lluch},~\bfnm{Ana}\binits{A.}}, \bauthor{\bsnm{Rameh},~\bfnm{Lucia~E.}\binits{L.~E.}}, \bauthor{\bsnm{Jacks},~\bfnm{Tyler}\binits{T.}}, \bauthor{\bsnm{Root},~\bfnm{David~E.}\binits{D.~E.}}, \bauthor{\bsnm{Lander},~\bfnm{Eric~S.}\binits{E.~S.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}}, \bauthor{\bsnm{Hahn},~\bfnm{William~C.}\binits{W.~C.}}, \bauthor{\bsnm{Sellers},~\bfnm{William~R.}\binits{W.~R.}} \AND \bauthor{\bsnm{Garraway},~\bfnm{Levi~A.}\binits{L.~A.}} (\byear{2009}). \btitle{AKT-independent signaling downstream of oncogenic PIK3CA mutations in human cancer}. \bjournal{Cancer Cell} \bvolume{16} \bpages{21--32}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Vert and Kanehisa}{2003}]{philippe03}
\begin{barticle}[author] \bauthor{\bsnm{Vert},~\bfnm{Jean~Philippe}\binits{J.~P.}} \AND \bauthor{\bsnm{Kanehisa},~\bfnm{Minoru}\binits{M.}} (\byear{2003}). \btitle{Extracting active pathways from gene expression data}. \bjournal{Bioinformatics} \bvolume{19} \bpages{ii238--244}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Vivanco and Sawyers}{2002}]{vivanco02}
\begin{barticle}[author] \bauthor{\bsnm{Vivanco},~\bfnm{Igor}\binits{I.}} \AND \bauthor{\bsnm{Sawyers},~\bfnm{Charles~L.}\binits{C.~L.}} (\byear{2002}). \btitle{The phosphatidylinositol 3-kinase AKT pathway in human cancer.} \bjournal{Nat. Rev. Cancer} \bvolume{2} \bpages{489--501}. \end{barticle}
\bptok{imsref} \endbibitem
\bibitem[\protect\citeauthoryear{Whittaker}{1990}]{whittaker90}
\begin{bbook}[mr] \bauthor{\bsnm{Whittaker},~\bfnm{Joe}\binits{J.}} (\byear{1990}). \btitle{Graphical Models in Applied Multivariate Statistics}.
\bpublisher{Wiley}, \blocation{Chichester}. \bid{mr={1112133}} \end{bbook}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Wong, Carter and Kohn}{2003}]{wong2003}
\begin{barticle}[mr] \bauthor{\bsnm{Wong},~\bfnm{Frederick}\binits{F.}}, \bauthor{\bsnm{Carter},~\bfnm{Christopher~K.}\binits{C.~K.}} \AND \bauthor{\bsnm{Kohn},~\bfnm{Robert}\binits{R.}} (\byear{2003}). \btitle{Efficient estimation of covariance selection models}. \bjournal{Biometrika} \bvolume{90} \bpages{809--830}. \bid{doi={10.1093/biomet/90.4.809}, issn={0006-3444}, mr={2024759}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Yuan and Cantley}{2008}]{yuan08}
\begin{barticle}[pbm] \bauthor{\bsnm{Yuan},~\bfnm{T.~L.}\binits{T.~L.}} \AND \bauthor{\bsnm{Cantley},~\bfnm{L.~C.}\binits{L.~C.}} (\byear{2008}). \btitle{PI3K pathway alterations in cancer: Variations on a theme}. \bjournal{Oncogene} \bvolume{27} \bpages{5497--5510}. \bid{doi={10.1038/onc.2008.245}, issn={1476-5594}, mid={NIHMS386791}, pii={onc2008245}, pmcid={3398461}, pmid={18794884}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Yuan and Lin}{2007}]{yuan2007}
\begin{barticle}[mr] \bauthor{\bsnm{Yuan},~\bfnm{Ming}\binits{M.}} \AND \bauthor{\bsnm{Lin},~\bfnm{Yi}\binits{Y.}} (\byear{2007}). \btitle{Model selection and estimation in the {G}aussian graphical model}. \bjournal{Biometrika} \bvolume{94} \bpages{19--35}. \bid{doi={10.1093/biomet/asm018}, issn={0006-3444}, mr={2367824}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Zhang et~al.}{2009}]{zhang09}
\begin{barticle}[pbm] \bauthor{\bsnm{Zhang},~\bfnm{Li}\binits{L.}}, \bauthor{\bsnm{Wei},~\bfnm{Qingyi}\binits{Q.}}, \bauthor{\bsnm{Mao},~\bfnm{Li}\binits{L.}}, \bauthor{\bsnm{Liu},~\bfnm{Wenbin}\binits{W.}}, \bauthor{\bsnm{Mills},~\bfnm{Gordon~B.}\binits{G.~B.}} \AND \bauthor{\bsnm{Coombes},~\bfnm{Kevin}\binits{K.}} (\byear{2009}). \btitle{Serial dilution curve: A new method for analysis of reverse phase protein array data}. \bjournal{Bioinformatics} \bvolume{25} \bpages{650--654}. \bid{doi={10.1093/bioinformatics/btn663}, issn={1367-4811}, pii={btn663}, pmcid={2647837}, pmid={19176552}} \end{barticle}
\bptok{imsref}
\endbibitem
\bibitem[\protect\citeauthoryear{Zhu, Shen and Pan}{2009}]{zhu2009}
\begin{barticle}[author] \bauthor{\bsnm{Zhu},~\bfnm{Y.}\binits{Y.}}, \bauthor{\bsnm{Shen},~\bfnm{X.}\binits{X.}} \AND \bauthor{\bsnm{Pan},~\bfnm{W.}\binits{W.}} (\byear{2009}). \btitle{Network-based support vector machine for classification of microarray samples}. \bjournal{BMC Bioinformatics} \bvolume{10} \bpages{S21}. \end{barticle}
\bptok{imsref} \endbibitem
\end{thebibliography}
\printaddresses
\end{document} | arXiv |
\begin{document}
\title{Equidistribution of joinings under off-diagonal polynomial flows of nilpotent Lie groups} \author{Tim Austin}
\date{}
\maketitle
\begin{abstract} Let $G$ be a connected nilpotent Lie group. Given probability-preserving $G$-actions $(X_i,\Sigma_i,\mu_i,u_i)$, $i=0,1,\ldots,k$, and also polynomial maps $\varphi_i:\mathbb{R}\longrightarrow G$, $i=1,\ldots,k$, we consider the trajectory of a joining $\lambda$ of the systems $(X_i,\Sigma_i,\mu_i,u_i)$ under the `off-diagonal' flow \[(t,(x_0,x_1,x_2,\ldots,x_k))\mapsto (x_0,u_1^{\varphi_1(t)}x_1,u_2^{\varphi_2(t)}x_2,\ldots,u_k^{\varphi_k(t)}x_k).\] It is proved that any joining $\lambda$ is equidistributed under this flow with respect to some limit joining $\lambda'$. This is deduced from the stronger fact of norm convergence for a system of multiple ergodic averages, related to those arising in Furstenberg's approach to the study of multiple recurrence. It is also shown that the limit joining $\lambda'$ is invariant under the subgroup of $G^{k+1}$ generated by the image of the off-diagonal flow, in addition to the diagonal subgroup. \end{abstract}
\parskip 0pt
\tableofcontents
\parskip 7pt
\parindent 0pt
\section{Introduction}
This paper is set among jointly measurable probability-preserving actions of a connected nilpotent Lie group $G$. We will assume in addition that $G$ is simply connected; it will be clear from the statements of our main results that by ascending to the universal cover this incurs no real loss of generality.
Suppose that $u_i:G\actson (X_i,\Sigma_i,\mu_i)$ for $i=0,1,\ldots,k$ is a tuple of such actions and that $\lambda$ is a joining of them. This means that $\lambda$ is a coupling of the measures $\mu_i$ on the product space $\prod_iX_i$, and that it is invariant under the \textbf{diagonal transformation} \[u_\Delta^g := u_0^g\times u_1^g\times \cdots\times u_k^g\] for every $g \in G$.
Taking the $G$-actions on each coordinate separately, the $u_i$ together define a jointly measurable action $u_\times$ of the whole Cartesian power $G^{k+1}$ on $\prod_i X_i$ according to \[u_\times^{(g_0,g_1,\ldots,g_k)}:= u_0^{g_0}\times u_1^{g_1}\times \cdots\times u_k^{g_k}.\] In these terms $u_\Delta$ may be identified with the restriction of $u_\times$ to the diagonal subgroup \[G^{\Delta (k+1)}:= \{(g,g,\ldots,g):\ g\in G\}\leq G^{k+1}.\]
An arbitrary joining $\lambda$ need not be $u_\times$-invariant. However, the main result of this paper implies that for any one-parameter subgroup $\mathbb{R}\longrightarrow G^{k+1}$, the trajectory of $\lambda$ under the $u_\times$-action of that subgroup must equidistribute with respect to some new joining $\lambda'$ that is also invariant under that subgroup. Moreover, this statement generalizes to averages over the trajectory of any map $\mathbb{R}\longrightarrow G^{k+1}$ that is `polynomial' in the sense that repeated group-valued differencing leads to the trivial map (precise definitions are recalled in Section~\ref{sec:poly}). The full result is the following.
\begin{thm}\label{thm:main} If $(X_i,\Sigma_i,\mu_i,u_i)$, $0 \leq i \leq k$, and $\lambda$ are as above, and $\varphi_i:\mathbb{R}\longrightarrow G$ for $1 \leq i \leq k$ are polynomial maps satisfying $\varphi_i(0) = e$ (the identity of $G$), then the averaged measures \[\lambda_T := \barint_0^T (\mathrm{id}_{X_0} \times u_1^{\varphi_1(t)}\times \cdots\times u_k^{\varphi_k(t)})_\ast\lambda\,\mathrm{d} t\] converge in the coupling topology as $T\longrightarrow\infty$ to some joining $\lambda'$ of the systems $(X_i,\Sigma_i,\mu_i,u_i)$ which is invariant under the restriction of $u_\times$ to the subgroup \[\langle G^{\Delta (k+1)}\cup \{(e,\varphi_1(t),\ldots,\varphi_k(t)):\ t\in\mathbb{R}\}\rangle.\] \end{thm}
Here we have used the standard analyst's notation $\barint_a^b := \frac{1}{b-a}\int_a^b$, and we write $\langle S\rangle$ for the smallest closed subgroup of $G$ containing $S$.
\textbf{Remark}\quad If $t$ is such that $\varphi_i(t) \neq e$ then the individual measures \[(\mathrm{id}_{X_0} \times u_1^{\varphi_1(t)}\times \cdots\times u_k^{\varphi_k(t)})_\ast\lambda\] may \emph{not} be joinings of the original actions. As measures they are still couplings of the $\mu_i$, but the invariance of $\lambda$ under the diagonal subgroup has been replaced with invariance under its conjugate \[(e,\varphi_1(t),\ldots,\varphi_k(t))\cdot G^{\Delta (k+1)}\cdot (e,\varphi_1(t),\ldots,\varphi_k(t))^{-1}.\] Thus a non-trivial part of the conclusion of Theorem~\ref{thm:main} is that the smoothing effect of averaging over $t$ recovers the invariance under $G^{\Delta (k+1)}$ (and likewise under all of these conjugates). \nolinebreak\hspace{\stretch{1}}$\lhd$
Convergence $\lambda_T\longrightarrow \lambda'$ in the coupling topology, as in Theorem~\ref{thm:main}, asserts that \[\int_{X_0\times X_1\times \cdots\times X_k}f_0\otimes f_1\otimes \cdots \otimes f_k\,\mathrm{d}\lambda_T\longrightarrow \int_{X_0\times X_1\times \cdots\times X_k}f_0\otimes f_1\otimes \cdots \otimes f_k\,\mathrm{d}\lambda'\] for any choice of $f_0\in L^\infty(\mu_0)$, $f_1 \in L^\infty(\mu_1)$, \ldots, $f_k \in L^\infty(\mu_k)$. Informally, it is a variant of weak convergence defined against the class of test functions given by tensor products of bounded measurable functions on the individual coordinate-spaces. It is standard that this topology on the convex set of couplings is compact: see, for instance, Theorem 6.2 of Glasner~\cite{Gla03}.
However, we will actually deduce Theorem~\ref{thm:main} from a stronger kind of convergence. For any joining $\lambda$ and any fixed choice of $f_i \in L^\infty(\mu_i)$ for $1 \leq i \leq k$, the map \[f_0\mapsto \int_{X_0\times X_1\times \cdots \times X_k} f_0\otimes f_1\otimes \cdots \otimes f_k\,\mathrm{d}\lambda\] defines a bounded linear functional on $L^2(\mu_0)$, and hence by the self-duality of Hilbert space it specifies a function \[M^\lambda(f_1,\ldots,f_k) \in L^2(\mu_0)\] (an alternative, more concrete description of $M^\lambda$ can be found in Section~\ref{sec:background} below). The joining convergence asserted by Theorem~\ref{thm:main} is equivalent to the weak convergence in $L^2(\mu_0)$ of the averages \[A^\lambda_T(f_1,\ldots,f_k) := \barint_0^T M^\lambda\big(f_1\circ u_1^{\varphi_1(t)},\ldots,f_k\circ u_k^{\varphi_k(t)}\big)\,\mathrm{d} t,\] but in fact the methods we call on below (particularly the van der Corput estimate, Lemma~\ref{lem:vdC}) naturally give more:
\begin{thm}\label{thm:main2} In the setting of Theorem~\ref{thm:main}, the averages $A^\lambda_T(f_1,\ldots,f_k)$ converge in norm in $L^2(\mu_0)$ as $T\longrightarrow\infty$ for any functions $f_i \in L^\infty(\mu_i)$, $1 \leq i \leq k$. \end{thm}
Of course, this does not immediately imply the remainder of Theorem~\ref{thm:main} concerning the extra symmetries of the limit joining. That will require some additional argument.
The problem of pointwise convergence of the averages $A^\lambda_T$ remains open, and the methods of the present paper probably say very little about it. One related special case (for certain discrete-time averages) has been established by Bourgain in~\cite{Bou90}, but I know of no more recent extensions of his work.
\subsection*{Origin and relation to other works}
Theorem~\ref{thm:main} has its origin in the study of multiple recurrence. Furstenberg's original Multiple Recurrence Theorem~\cite{Fur77} asserts that for a single probability-preserving transformation $T\actson (X,\Sigma,\mu)$, if $A \in \Sigma$ has $\mu(A) > 0$ then also \begin{eqnarray}\label{eq:multirec} \liminf_{N\longrightarrow\infty}\frac{1}{N}\sum_{n=1}^N \mu(A\cap T^{-n}A\cap \cdots \cap T^{-(k-1)n}A) > 0\quad\quad\forall k\geq 1. \end{eqnarray} In particular, there must be a time $n \geq 1$ at which \[\mu(A\cap T^{-n}A\cap \cdots \cap T^{-(k-1)n}A) > 0:\] this is `$k$-fold multiple recurrence' for $A$.
Furstenberg studied this phenomenon in order to give a new proof of a deep theorem of Szemer\'edi in additive cominatorics~\cite{Sze75}, which can be deduced quite easily from the Multiple Recurrence Theorem. Following Furstenberg's original paper, many other works have either proved analogous multiple recurrence assertions in more general settings or analysed the `multiple' ergodic averages of the kind appearing in~(\ref{eq:multirec}), in particular to determine whether they converge. We will not attempt to give complete references here, but refer the reader to~\cite{Aus--thesis}, to the paper~\cite{HosKra05} of Host and Kra and to Chapters 10 and 11 of Tao and Vu's book~\cite{TaoVu06} for more details.
Many of these convergence questions can be phrased in terms of convergence of joinings, much in the spirit of Theorem~\ref{thm:main}. In Furstenberg's original setting, if we let $\mu^\Delta$ be the copy of $\mu$ supported on the diagonal in $X^k$, then the above averages may be re-written as \[\int_{X^k} 1_A\otimes 1_A\otimes \cdots\otimes 1_A\,\mathrm{d}\mu_N,\] where \[\mu_N := \frac{1}{N}\sum_{n=1}^N (\mathrm{id}_X\times T\times \cdots\times T^{k-1})^n_\ast \mu^\Delta,\] so in fact the convergence of these scalar averages is almost precisely the assertion that the orbit of the joining $\mu^\Delta$ under the off-diagonal $\mathrm{id}_X\times T\times\cdots\times T^{k-1}$ is equidistributed relative to some limit joining. Convergence here follows from work of Host and Kra~\cite{HosKra05} (see also Ziegler~\cite{Zie07}), and it is worth noting that in this situation the additional invariance of the limit joining under $\mathrm{id}_X\times T\times \cdots\times T^{k-1}$ is obvious from the definition of the $\mu_N$ and the F\o lner property of the intervals $\{1,2,\ldots,N\} \subset \mathbb{Z}$.
On the other hand, that additional invariance can be put at the heart of an alternative proof of convergence, which also applies to the more general question of the convergence of the averaged joinings \[\frac{1}{N}\sum_{n=1}^N (\mathrm{id}_X\times T_1\times \cdots\times T_k)^n_\ast \mu^\Delta\] for a commuting tuple of transformations $T_1,T_2,\ldots,T_k\actson (X,\Sigma,\mu)$: see~\cite{Aus--nonconv,Aus--thesis} (and compare with Tao~\cite{Tao08(nonconv)}, where the first proof of convergence for this higher-rank setting was given using very different methods). This more general setting still exhibits a multiple recurrence phenomenon with striking combinatorial consequences, as shown much earlier by Furstenberg and Katznelson~\cite{FurKat78}. Another aspect of the study of the limit of the above joinings is that a sufficiently detailed understanding of its structure can be used to give an alternative proof of their theorem~\cite{Aus--newmultiSzem}.
Having come this far, it is natural to ask after the behaviour of these averaged joinings if $T_1$, $T_2$, \ldots, $T_k$ do not commute, but generate some more complicated discrete group. In particular, if they generate a nilpotent group, then Leibman has shown that multiple recurrence phenomena still occur~\cite{Lei98} using an extension of Furstenberg and Katznelson's arguments, but that approach does not prove that the associated functional averages converge in $L^2(\mu)$. The question of convergence seems to be closely related to whether the averages \[\frac{1}{N}\sum_{n=1}^N (\mathrm{id}_X\times T^{p_1(n)}\times \cdots\times T_k^{p_k(n)})_\ast \mu^\Delta\] converge for a $\mathbb{Z}^d$-action $T$ and polynomials $p_i:\mathbb{Z}\longrightarrow\mathbb{Z}^d$, at least insofar as some of the standard methods in this area (particularly the van der Corput estimate) run into very similar difficulties in the contexts of these two problems.
These more general convergence questions were posed by Bergelson as Question 9 in~\cite{Ber96}, having previously been popularized by Furstenberg. Several special cases were established in~\cite{FurWei96,BerLei02,HosKra05poly,Lei05(poly),Aus--lindeppleasant2,ChuFraHos09}. On the other hand, the paper~\cite{BerLei04} contains an example in which $k=2$, $\langle T_1,T_2\rangle$ is a two-step solvable group, and convergence fails.
Shortly before the present paper was submitted, Miguel Walsh offered in~\cite{Wal11} a proof of convergence for general nilpotent groups and tuples of polynomial maps, so answering the question of Furstenberg and Bergelson in full generality. His proof is most akin to Tao's convergence proof in~\cite{Tao08(nonconv)}, but clearly involves some non-trivial new ideas as well. It is quite different from the very `structural' approach taken by most ergodic theoretic papers, such as the present one. It seems likely that Walsh's approach can be adapted to prove convergence in our setting (Theorem~\ref{thm:main2}), but it gives much less information on the structure of the resulting factors and joinings (as, for example, in the rest of Theorem~\ref{thm:bigmain}).
Our Theorem~\ref{thm:main2} establishes the analog of the conjecture of Furstenberg and Bergelson (involving both nilpotent groups and polynomial maps) for continuous-time flows. In Subsection~\ref{subs:compare-discrete} we will offer some discussion of the additional difficulties presented by an adaptation of our approach to the discrete-time setting. It would still be of interest to find a successful such adaptation, since it would presumably require uncovering a more detailed description of the relevant factors and joinings, and so would comprise a substantial complement to the approach via Walsh's methods.
We should note also that the case $G = \mathbb{R}^d$ in Theorems~\ref{thm:main} and~\ref{thm:main2} was recently established in~\cite{Aus--ctspolyaves}. However, the methods below diverge quite sharply from that previous paper. That work relied crucially on making a time change $t \mapsto t^\alpha$ in the integral averages under study for some small $\alpha > 0$, in order to convert averages along polynomial orbits into averages along orbits given by a linear map perturbed by some terms that grow at sublinear rates in $t$. That trick leads to a substantial simplification of the necessary induction on families of polynomials (in that paper Bergelson's PET induction is not needed, since something more direct suffices, whereas this induction scheme will appear in the present paper shortly), and so cuts out various other parts of the argument that we use below. However, I do not know how to implement this time-change trick for maps into general nilpotent groups, essentially because various commutators that appear during the proof can give rise to high-degree terms which disrupt the choice of any particular $\alpha$ used to make the leading-order terms linear. It is also my feeling that the argument given below reveals rather more about the relevant structures within probability-preserving $G$-actions that are responsible for the asymptotic behaviour of the averages in Theorem~\ref{thm:main}.
Although it emerges from the study of multiple recurrence, Theorem~\ref{thm:main} fits neatly into the general program of equidistribution. Equidistribution phenomena for sequences in compact spaces, and especially sequences arising from dynamical systems, have been popular subjects of analysis for most of the twentieth century: see, for instance, the classic text~\cite{KuiNie74}. Theorem~\ref{thm:main} can be seen as a close analog of more classical results concerning special classes of compact topological systems: in place of the orbit of an individual point or distinguished subset, we study the orbit of an initially-given joining, and correspondingly vague convergence of measures (that is, tested against continuous functions on a compact space) is replaced by convergence in the coupling topoology.
Of course, equidistribution theorems for topological systems always rely very crucially on the special structure of the system under study. Among arbitrary actions on compact spaces there are plentiful examples for which the set of invariant probabilities is very large and unstructured, and which have many points that do not equidistribute. It is interesting that once a tuple of systems $(X_i,\Sigma_i,\mu_i,u_i)$ with invariant probabilities has been fixed, their joinings exhibit the behaviour of Theorem~\ref{thm:main} without any extra assumptions on those individual systems. Instead, the necessary provisions are that we start with the orbit of some joining, rather than of a single point, and then prove equidistribution in the sense of the coupling topology.
Among the most profound results giving equidistribution for concrete systems are those concerning the orbits of unipotent flows on homogeneous spaces. In this setting the heart of such an analysis is typically a classification of all invariant probability measures on a system, which then restricts the possible vague limits one can obtain from the empirical measures along an orbit of the system so that, ideally, one can prove that the empirical measures have only one possible limit (and so are equidistributed).
To some extent the approach to Theorem~\ref{thm:main} parallels that strategy, in that the additional invariances of the limit joinings are an important tool in the proof, and our arguments do imply some further results on the possible structure of the limit joinings (see the second remark following Proposition~\ref{prop:vdC-appn}).
The full strength of measure classification for probabilities on homogeneous spaces that are invariant and ergodic under the action of a subgroup generated by unipotent elements was finally proved by Ratner in~\cite{Rat91-a,Rat91-b}, building on several important earlier works of herself and others. The monograph~\cite{Mor05} gives a thorough account of this story. Following Ratner's work, Shah proved in~\cite{Sha94} some equidistribution results for trajectories of points in homogeneous spaces under flows given by regular algebraic maps into the acting group. That notion of `polynomial' encompasses ours in many cases, and so his work offers a further point of contact between the two settings.
However, the details of the arguments used below are rather far from those developed by Ratner and her co-workers. For instance, in Shah's paper, he first shows that any vague limit measure for the trajectory of a point under one of his regular algebraic maps must have some invariance under a nontrivial unipotent subgroup. In light of this he can restrict his attention to the possible limit measures that are permitted by Ratner's Measure Classification Theorem, whereupon the extra analysis needed can proceed. By contrast, it is essential in our work that we allow general polynomial maps into $G$ throughout, since our induction would not remain among homomorphisms even if we started there. It would be interesting to know whether an alternative approach to Theorem~\ref{thm:main} can be found which is more in line with those works on homogeneous space dynamics.
\subsection*{First outline of the proof}
Theorems~\ref{thm:main} and~\ref{thm:main2} will be proved by induction on the tuple of polynomial maps $(\varphi_1,\varphi_2,\ldots,\varphi_k)$. The ordering on polynomials that organizes this induction is (a variant of) Bergelson's PET ordering from~\cite{Ber87}, which has become a mainstay of the study of multiple averages involving nilpotent groups or polynomial maps.
To a large extent, the new innovation below is the formulation of an assertion that includes Theorem~\ref{thm:main} and can be closed on itself in this induction. The delicacy of this formulation is largely attributable to the van der Corput estimate (Lemma~\ref{lem:vdC}), which relates the averages involving a given tuple of polynomial maps to another tuple that precedes it in the PET ordering. In the first place, it is this that forces us to prove Theorem~\ref{thm:main2} alongside Theorem~\ref{thm:main}, but it will also required other features in our inductive hypothesis.
An application of this lemma converts an assertion about a tuple of polynomial maps \[t\mapsto \varphi_i(t)\] into another about the `differenced' maps \begin{eqnarray}\label{eq:diffs} (t,s)\mapsto \varphi_i(t+s)\varphi_i(t)^{-1} \end{eqnarray} (or more complicated relatives of these: see Section~\ref{sec:poly}). Regarded as functions of $t$ alone, these precede the tuple $(\varphi_1,\ldots,\varphi_k)$ in the PET ordering for any fixed $s$. In many applications of PET induction one simply forms these derived maps, then fixes a value of $s$ and applies an inductive hypothesis to the restrictions of these new maps to $\mathbb{R}\times \{s\}$. Unfortunately, in our setting there can be some values of $s$ for which the behaviour of these restrictions is not as `good' as our argument needs. To overcome this we must retain the picture of the new maps in~(\ref{eq:diffs}) as being polynomial on the whole of $\mathbb{R}\times \mathbb{R}$. As a consequence of this polynomial structure and certain general results about actions of nilpotent Lie groups (see Section~\ref{sec:nil-actions}), one finds that these averages behave `asymptotically the same' for all but a small set of exceptional values of $s$. This turns out to be a crucial improvement over the possible worst-case behaviour over $s$. Since repeated appeals to the van der Corput estimate lead to a proliferation of these differencing parameters $s$, we must actually formulate a theorem which allows for polynomial maps $\mathbb{R}\times \mathbb{R}^r\longrightarrow G$, where we average over the first coordinate in $\mathbb{R}\times \mathbb{R}^r$ and the theorem promises some additional good behaviour for generic values of the remaining $r$ coordinates.
The right notion of genericity to make this precise is provided by Baire's definition of category, but transplanted into the Zariski topology of $\mathbb{R}^n$ (which is not Hausdorff and so not quite in the usual mould for applications of Baire category). The required notion of `Zariski genericity' will be defined in Section~\ref{sec:Zariski}, and will be found to relate very well to other standard notions of `smallness' for subsets of $\mathbb{R}^n$.
In terms of this definition, the complete statement that will be proved by PET induction is as follows.
\begin{thm}\label{thm:bigmain} Suppose that $(X_i,\Sigma_i,\mu_i,u_i)$, $0 \leq i \leq k$, and $\lambda$ are as above and that $\varphi_i:\mathbb{R}\times \mathbb{R}^r\longrightarrow G$, $1 \leq i \leq k$, are polynomial maps satisfying $\varphi_i(0,\cdot) \equiv e$. Let $M^\lambda$ be constructed from $\lambda$ as previously, let \[A^\lambda_T(f_1,\ldots,f_k) := \barint_0^T M^\lambda\big(f_1\circ u_1^{\varphi_1(t,h)},\ldots,f_k\circ u_k^{\varphi_k(t,h)}\big)\,\mathrm{d} t\] (so $A^\lambda_T$ implicitly depends on $h$), and let \[\vec{\varphi}:= (e,\varphi_1,\varphi_2,\ldots,\varphi_k):\mathbb{R}\times\mathbb{R}^r\longrightarrow G^{k+1}.\] Then \begin{enumerate} \item for any $h \in \mathbb{R}^r$ and any $f_i \in L^\infty(\mu_i)$, $1 \leq i \leq k$, the functional averages $A^\lambda_T(f_1,\ldots,f_k)$ converge in $L^2(\mu_0)$ as $T \longrightarrow\infty$, \item for any $h \in \mathbb{R}^r$ the averaged joinings \[\barint_0^T (\mathrm{id}_{X_0}\times u_1^{\varphi_1(t,h)}\times u_2^{\varphi_2(t,h)}\times \cdots \times u_k^{\varphi_k(t,h)})_\ast\lambda\,\mathrm{d} t\] converge as $T \longrightarrow \infty$ to some limit joining $\lambda^h$ which is invariant under \[\langle G^{\Delta (k+1)}\cup \rm{img}\, \vec{\varphi}(\cdot,h)\rangle,\] and \item the map $h\mapsto \lambda^h$ is Zariski generically constant on $E$, and the generic value it takes is a joining invariant under \[\langle G^{\Delta (k+1)}\cup \rm{img}\,\vec{\varphi}\rangle.\] \end{enumerate} \end{thm}
This clearly implies both of the previous theorems. The rest of the paper is directed towards the proof of Theorem~\ref{thm:bigmain}.
\subsection*{Overview of the paper}
Sections~\ref{sec:background} through~\ref{sec:idem} establish certain background results that we will need for the main proofs, concerning general properties of group actions and representations; polynomial maps and genericity in the Zariski topology; finer results about actions of nilpotent Lie groups; and the technology of `idempotent' classes of probability-preserving systems. Once all this is at our disposal, the proof of Theorem~\ref{thm:bigmain} is completed in Sections~\ref{sec:k=2},~\ref{sec:char-factor} and~\ref{sec:general-k}. Finally Section~\ref{sec:further-ques} contains a discussion of various further questions related to those of this paper.
\section{Background on group actions}\label{sec:background}
If $G$ is a locally compact second countable (`l.c.s.c.') group, then a \textbf{$G$-system} is a quadruple $(X,\Sigma,\mu,u)$ in which $(X,\Sigma,\mu)$ is a standard Borel probability space and $g\mapsto u^g$ is a jointly measurable, $\mu$-preserving left action of $G$ on $(X,\Sigma)$. Sometimes this situation will alternatively be denoted by $u:G\actson (X,\Sigma,\mu)$, and sometimes a whole system will be denoted by a boldface letter such as $\mathbf{X}$.
Relatedly, a \textbf{$G$-representation} is a strongly continuous orthogonal representation $\pi$ of $G$ on a separable real Hilbert space $\mathfrak{H}$. (It would be more conventional to work with complex Hilbert spaces and unitary representations, but choosing the real setting avoids the need to keep track of several complex conjugations later.) This situation will often be denoted by $\pi:G\actson\mathfrak{H}$. Given a $G$-system $(X,\Sigma,\mu,u)$, the associated \textbf{Koopman representation} $u^\ast:G\actson L^2(\mu)$ is defined by \[u(g)^\ast f := f\circ u^{g^{-1}},\] where this convention concerning inverses ensures that both $u$ and $u^\ast$ are left actions. Here and throughout the paper the notation $L^p$, $1 \leq p \leq \infty$, is used for real Lebesgue spaces. It is classical that the joint measurability of $u$ implies the strong continuity of $u^\ast$ (see, for instance, Lemma 5.28 of Varadarajan~\cite{Varadara85}), so the Koopman representation is a $G$-representation in the present sense.
Given a $G$-system and a closed subgroup $H\leq G$, one may construct the $\sigma$-subalgebra \[\Sigma^H := \{A \in \Sigma:\ \mu(u^hA \triangle A) = 0\ \forall h\in H\}.\] If $H$ is normal in $G$ then this is globally $G$-invariant, and hence defines a factor of the original system which we call the \textbf{$H$-partially invariant} factor. For some quite special technical reasons we will need only the case of normal $H$ in this paper: see Corollary~\ref{cor:normal-clos} below.
Similarly, for a $G$-representation $\pi$ we let \[\rm{Fix}(\pi(H)):= \{v \in \mathfrak{H}:\ \pi(h)v = v\ \forall h \in H\} \leq \mathfrak{H};\] for Koopman representations it is easily seen that
\[\rm{Fix}(u^\ast(H)) = L^2(\mu|_{\Sigma^H}).\]
Sometimes it is necessary to compare actions of different groups. If $q:H\longrightarrow G$ is a continuous homomorphism of l.c.s.c. groups and $\mathbf{X} = (X,\Sigma,\mu,u)$ is a $G$-system, then we may define an $H$-system on the same probability space by letting $h$ act by $u^{q(h)}$. We denote this system by $\mathbf{X}^{q(\cdot)} = (X,\Sigma,\mu,u^{q(\cdot)})$. A similar construction is clearly possible for representations.
We will also need certain standard calculations involving couplings and joinings. Suppose that $\lambda$ is a coupling of $\mu_0$, $\mu_1$, \ldots, $\mu_k$ (without any assumption about group actions). We may regard it instead as a coupling of $(X_0,\Sigma_0,\mu_0)$ with \[(X_1\times \cdots\times X_k,\Sigma_1\otimes\cdots\otimes \Sigma_k,\lambda')\] where $\lambda'$ is the marginal of $\lambda$ on the last $k$ coordinates. Now $\lambda$ can be disintegrated over the first coordinate to obtain a probability kernel \[\Lambda:X_0 \longrightarrow \rm{Pr}(X_1\times \cdots\times X_k,\Sigma_1\otimes\cdots\otimes \Sigma_k)\] so that \[\lambda = \int_{X_0}\delta_{x_0}\otimes \Lambda(x_0,\,\cdot\,)\,\mu_0(\mathrm{d} x_0);\] and this, in turn, defines a multilinear map \[M^\lambda:L^\infty(\mu_1)\times \cdots\times L^\infty(\mu_k)\longrightarrow L^\infty(\mu_0)\] according to \[M^\lambda(f_1,\ldots,f_k)(x_0) := \int_{X_1\times \cdots\times X_k}f_1 \otimes f_2\otimes \cdots \otimes f_k\,\mathrm{d}\Lambda(x_0,\,\cdot\,).\] Clearly one has \[\int_{X_0}f_0\cdot M^\lambda(f_1,\ldots,f_k)\,\mathrm{d}\mu_0 = \int_{X_0\times X_1\times \cdots\times X_k}f_0\otimes f_1 \otimes \cdots \otimes f_k\,\mathrm{d}\lambda,\] so this agrees with the definition of $M^\lambda$ by duality given in the Introduction.
The following is now a routine re-formulation of the definition of a relatively independent product, and the proof is omitted; see, for instance, the third of Examples 6.3 in Glasner~\cite{Gla03}.
\begin{lem}\label{lem:l_1l} Let $\Lambda:X_0\longrightarrow \Pr(X_1\times \cdots \times X_k)$ be as above and define the relative product measure $\lambda\otimes_0\lambda$ on $X_1^2\times \cdots \times X_k^2$ by \[\lambda\otimes_0\lambda = \int_{X_0}\Lambda(x_0,\cdot)\otimes \Lambda(x_0,\cdot)\,\mu_0(\mathrm{d} x_0).\] Then for any $f_i,g_i \in L^\infty(\mu_i)$, $1 \leq i \leq k$, one has \begin{multline*} \int_{X_0}M^\lambda(f_1,f_2,\ldots,f_k)\cdot M^\lambda(g_1,g_2,\ldots,g_k)\,\mathrm{d}\mu_0\\ = \int_{X_1^2\times\cdots\times X_k^2}f_1\otimes g_1\otimes f_2\otimes g_2\otimes \cdots \otimes f_k\otimes g_k\,\mathrm{d} (\lambda\otimes_0\lambda). \end{multline*} \nolinebreak\hspace{\stretch{1}}$\Box$ \end{lem}
\section{Real polynomials and Zariski residual sets}\label{sec:Zariski}
The third part of Theorem~\ref{thm:bigmain} involves the notion of Zariski genericity. Recall that on $\mathbb{R}^n$ (or any other real algebraic variety) the \textbf{Zariski topology} is the topology whose closed sets are the subvarieties. Although the failure of $\mathbb{R}$ to be algebraically closed gives rise to certain novel behaviour not seen in more classical algebraic geometry (especially under projection maps), in this paper we will not meet any of the situations in which this matters. The basic notions of the theory can be found in many books that use algebraic groups, such as in Subsection D.1 of Starkov~\cite{Sta00}. The additional idea we need from that arena is the following.
\begin{dfn}[Zariski meagre and residual sets] A subset $W \subseteq \mathbb{R}^n$ is \textbf{Zariski meagre} if it can be covered by a countable family of proper subvarieties of $\mathbb{R}^n$. A subset of $\mathbb{R}^n$ is \textbf{Zariski residual} if its complement is Zariski meagre. A property that depends on a parameter $h \in \mathbb{R}^n$ is \textbf{Zariski generic} if it obtains on a Zariski residual set of $h$. \end{dfn}
Since proper subvarieties are always closed and nowhere dense in the Euclidean topology, Zariski residual sets are residual in the Euclidean topology. They are therefore `large' in the sense of the Baire Category Theorem and its consequences, but in a much more structured way than an arbitrary Euclidean-residual subset. In particular, they exhibit the following simple behaviour under slicing:
\begin{lem}\label{lem:Zar-res-on-subspace} If $E \subseteq \mathbb{R}^n$ is Zariski meagre and $V \subseteq \mathbb{R}^n$ is any affine subspace then either $E \supseteq V$ or $E \cap V$ is Zariski meagre in $V$. In the space of translates $\mathbb{R}^n/V$, the subset of translates for which the former holds is Zariski meagre. \end{lem}
\textbf{Proof}\quad This is simply a consequence of the corresponding property of Zariski closed sets. \nolinebreak\hspace{\stretch{1}}$\Box$
Zariski meagre sets are also small in a natural measure-theoretic sense.
\begin{lem} A Zariski meagre subset $E \subseteq \mathbb{R}^n$ has Hausdorff dimension at most $n - 1$. \end{lem}
\textbf{Proof}\quad Clearly it suffices to show that a single proper algebraic subvariety $V \subseteq \mathbb{R}^n$ has Hausdorff dimension at most $n - 1$, and moreover that this holds when $V = \{f = 0\}$ for some nonzero polynomial $f:\mathbb{R}^n\longrightarrow \mathbb{R}$ (because any proper $V$ can be contained in such a zero-set).
This follows by induction on degree. If $f$ is linear then it is immediate, so suppose $\deg f \geq 2$. Then on the one hand the nonsingular locus $\{f = 0\}\cap \{\nabla f \neq 0\}$ can be covered with countably many open sets on which $\{f = 0\}\cap \{\nabla f\neq 0\}$ locally agrees with a smooth $(n-1)$-dimensional submanifold of $\mathbb{R}^n$, and hence has Hausdorff dimension $n-1$. On the other hand, the remaining set $\{f = 0\}\cap \{\nabla f = 0\}$ is contained in the set $\{\ell(\nabla f) = 0\}$ for any choice of $\ell \in (\mathbb{R}^n)^\ast\setminus \{0\}$, which is an algebraic variety generated by a polynomial of degree at most $\deg f - 1$ and so has Hausdorff dimension at most $n-1$ by the inductive hypothesis. \nolinebreak\hspace{\stretch{1}}$\Box$
\section{Polynomial maps into nilpotent Lie groups}\label{sec:poly}
Henceforth $G$ will denote a connected and simply connected nilpotent Lie group, $\mathfrak{g}$ its Lie algebra, \[G = G^1 \unrhd G^2 \unrhd \cdots \unrhd G^s\unrhd (e)\] its ascending central series, and \[\mathfrak{g} = \mathfrak{g}^1 \unrhd \mathfrak{g}^2\unrhd \cdots \unrhd \mathfrak{g}^s\unrhd (0)\] the corresponding ascending series of $\mathfrak{g}$.
In the following we will need certain standard facts about such groups, in particular that the exponential map $\exp:\mathfrak{g}\longrightarrow G$ is an analytic diffeomorphism and that any Lie subalgebra $\mathfrak{h} \leq \mathfrak{g}$ exponentiates to a closed Lie subgroup of $G$, which is normal if and only if $\mathfrak{h}$ was an ideal. (Note that both of these require the assumption that $G$ is simply connected as well as connected.) These can be found as Theorem 1.2.1 and Corollary 1.2.2 in Corwin and Greenleaf~\cite{CorGre90}, which provides a good general reference for the study of these groups.
\subsection{Polynomial maps}
\begin{dfn}[Polynomial map]\label{dfn:poly} A map $\varphi:G'\longrightarrow G$ between nilpotent Lie groups is \textbf{polynomial} if there is some $d \geq 1$ such that \[\nabla_{h_1}\nabla_{h_2}\cdots\nabla_{h_d}\varphi \equiv e \quad\quad \forall h_1,h_2,\ldots,h_d \in G',\] where $\nabla_h \varphi(g) := \varphi(gh^{-1})\varphi(g)$. \end{dfn}
This definition has come to prominence in the study of multiple recurrence phenomena since Leibman's work generalizing the Furstenberg-Katznelson Multiple Recurrence Theorem to tuples of transformations generating a nilpotent group~\cite{Lei98}. For maps into a module $M$ over a ring $R$ (such as an Abelian group, which is a module over $\mathbb{Z}$), degree-$d$ polynomial maps have been studied much more classically as an ideal of functions $G\longrightarrow M$ annihilated under convolution by the $d^{\rm{th}}$ power of the augmentation ideal of $R[G]$: see, for instance, Passi~\cite{Pas68,Pas79}.
In this work we will need the above definition only for $G' = \mathbb{R}^n$. If in addition $G = \mathbb{R}^m$, then it is a simple exercise to show that a map $\varphi$ is polynomial according to the above if and only if it may be expressed as an $m$-tuple of polynomials in $n$ variables. For general nilpotent targets $G$ a more concrete view of polynomial maps is still available by the following standard proposition and corollary (for the former see, for instance, Proposition 1.2.7 in Corwin and Greenleaf~\cite{CorGre90}).
\begin{prop} If $G$ is an $s$-step connected and simply connected nilpotent Lie group, then $\exp:\mathfrak{g} \longrightarrow G$ is a diffeomorphism, and pulled back through $\exp$ the operations of multiplication and inversion become polynomial maps $\mathfrak{g}\times\mathfrak{g}\longrightarrow \mathfrak{g}$ and $\mathfrak{g}\longrightarrow \mathfrak{g}$ of degree bounded only in terms of $s$. \nolinebreak\hspace{\stretch{1}}$\Box$ \end{prop}
\begin{cor}\label{cor:exppoly-is-poly} A map $\varphi:\mathbb{R}^n\longrightarrow G$ is polynomial if and only if it is of the form $\exp\circ \Phi$ for some polynomial $\Phi:\mathbb{R}^n\longrightarrow \mathfrak{g}$. \end{cor}
\textbf{Proof}\quad This follows by induction on the nilpotency class of $G$. On the one hand, if $\Phi:\mathbb{R}^n\longrightarrow \mathfrak{g}$ is a polynomial, then after $(\deg\Phi)$-many applications of the differencing operator $\nabla_\bullet$ the exponentiated map $\exp\circ \Phi$ may not vanish identically, but at least its projection to $G/G^2$ vanishes because this is isomorphic to the projection of $\Phi$ to $\mathfrak{g}/\mathfrak{g}^2$. Thus finitely many differencing operations yield a polynomial map into $\mathfrak{g}^2$, and now repeating this argument $s$ times shows that the differences of $\exp\circ\Phi$ do eventually vanish.
On the other hand, if $\varphi:\mathbb{R}^n \longrightarrow G$ is a polynomial map, then the same is true of $\varphi G_2:\mathbb{R}^n\longrightarrow G/G^2\cong \mathbb{R}^{\dim G- \dim G^2}$. This, in turn, is simply isomorphic to $(\exp^{-1}\circ\varphi) + \mathfrak{g}^2:\mathbb{R}^n\longrightarrow \mathfrak{g}/\mathfrak{g}^2$, so this latter is a polynomial. By choosing lifts of its coefficients under the projection $\mathfrak{g}\longrightarrow\mathfrak{g}/\mathfrak{g}^2$, we obtain a polynomial $\Phi_1 :\mathbb{R}^n\longrightarrow\mathfrak{g}$ such that $\exp\circ(\exp^{-1}\circ \varphi - \Phi_1)$ takes values in $G^2$, and it is clearly still a polynomial map there using the argument of the previous paragraph. Now the inductive hypothesis applied to $G^2$ gives another polynomial $\Phi_2:\mathbb{R}^n\longrightarrow \mathfrak{g}^2$ such that $\exp\circ(\exp^{-1}\circ \varphi - \Phi_1) = \exp\circ \Phi_2$, and re-arranging this completes the proof. \nolinebreak\hspace{\stretch{1}}$\Box$
By pulling back to the Lie algebra and arguing there, the above proposition and corollary have the following further consequence, which will be useful in the sequel.
\begin{cor} If $\varphi,\psi:\mathbb{R}^n\longrightarrow G$ are polynomial maps, then so are the pointwise product $x\mapsto \varphi(x)\psi(x)$ and the pointwise inverse $x\mapsto \varphi(x)^{-1}$. \nolinebreak\hspace{\stretch{1}}$\Box$ \end{cor}
\subsection{Families of maps and the PET ordering}
Our attention now turns to finite tuples \[\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)\] of polynomial maps $\mathbb{R}\times\mathbb{R}^r\longrightarrow G$.
In what follows it is extremely important that we consider the domain of these maps to be split as $\mathbb{R}\times \mathbb{R}^r$. Although this is not really different from $\mathbb{R}^{r+1}$, the heart of the main induction below rests on comparing the degrees of different polynomial maps into $G$ \emph{in the first coordinate only}. Therefore we will henceforth restrict attention to maps defined on products of $\mathbb{R}$ with other real vector spaces, and will always regard the second coordinate as an auxiliary parameter.
\begin{dfn}[Internal class; leading degree; leading term] For a polynomial map $\varphi:\mathbb{R} \times \mathbb{R}^r\longrightarrow G$ with $\varphi(0,\cdot) \equiv e$, its \textbf{internal class} is the greatest $c$ such that $G^c \supseteq \rm{img}\, \varphi$. It is denoted $\rm{cl}\,\varphi$.
Given this, the projection \[\varphi G^{c+1}:(t,h)\mapsto \varphi(t,h)G^{c+1}:\mathbb{R}\times \mathbb{R}^r \longrightarrow G^c/G^{c+1} \cong \mathbb{R}^{\dim G^c - \dim G^{c+1}}\] is a Euclidean-valued polynomial map. The \textbf{leading degree} $\rm{ldeg}\, \varphi$ of $\varphi$ is the degree of $\varphi G^{c+1}$ in the variable $t$, and the \textbf{leading term} of $\varphi$ is the term in $\varphi G^{c+1}$ of the form $t^{\rm{ldeg}\, \varphi}\psi(h)$ for some polynomial map $\psi:\mathbb{R}^r\longrightarrow G^c/G^{c+1}$. \end{dfn}
\begin{dfn}[Leading-term equivalence] Two polynomial maps $\varphi,\psi:\mathbb{R}\times \mathbb{R}^r\longrightarrow G$ are \textbf{leading-term equivalent}, denoted $\varphi \sim_{\rm{LT}} \psi$, if $\rm{cl}\,\varphi = \rm{cl}\,\psi$ and $\varphi$ and $\psi$ have the same leading term (hence certainly the same leading degree). \end{dfn}
Several further definitions are needed in order to explain the PET ordering that will steer the inductive proof of Theorem~\ref{thm:bigmain}. The next roughly follows Leibman~\cite{Lei98}.
\begin{dfn}[Weight] The \textbf{weight} of a polynomial $\varphi:\mathbb{R}\times \mathbb{R}^r\longrightarrow G$ is the pair $\rm{wt}\,\varphi := (\rm{cl}\varphi,\rm{ldeg}\,\varphi)$. The set $\rm{Wt}\,$ of possible weights $(c,d)$ is ordered lexicographically: pairs $(c,d),(c',d') \in \rm{Wt}\,$ satisfy $(c,d) \prec (c',d')$ if \begin{itemize} \item either $c > c'$, \item or $c = c'$ and $d < d'$. \end{itemize} Since clearly $\varphi\sim_\rm{LT}\psi$ implies $\rm{wt}\,\varphi = \rm{wt}\,\psi$, we may also define the \textbf{weight} of an $\sim_\rm{LT}$-equivalence class as the weight of any of its members. \end{dfn}
This is a well-ordering on $\rm{Wt}\,$, and it now gives rise to a partial ordering on polynomial maps.
\begin{dfn}[PET ordering on polynomials] Given two polynomial maps $\varphi, \psi:\mathbb{R}\times \mathbb{R}^r\longrightarrow G$, the first \textbf{precedes} the second in the \textbf{PET ordering}, denoted $\varphi\prec_\rm{PET}\psi$, if $\rm{wt}\,\varphi\prec \rm{wt}\,\psi$. \end{dfn}
\textbf{Remark}\quad Our $\prec_\rm{PET}$ is not quite the same as the PET ordering used in much of the earlier literature for polynomial maps into nilpotent groups. Those required a comparison between polynomials in terms of the individual members of some Mal'cev basis of $G$; see, for instance, Section 3 in~\cite{Lei98}. Our ordering is actually a little weaker (in the sense that $\prec_\rm{PET} \subsetneqq \prec_\rm{PET}^{\rm{previous}}$ as relations), because we compare our polynomials on the whole Euclidean subquotients of $G$ arising from the ascending central series, and so in our ordering the assertion that two polynomials have the same leading term is stronger. However, when we later use the PET induction via the van der Corput lemma it will be clear that we are still moving strictly downwards among our families of polynomials, so that the induction proceeds correctly. \nolinebreak\hspace{\stretch{1}}$\lhd$
The PET ordering on polynomials will play a r\^ole in the proof of the special case $k=2$ of Theorem~\ref{thm:bigmain}, but the general case will require an extension of it to an ordering of tuples of polynomials.
\begin{dfn} Suppose that $f,g:\rm{Wt}\,\longrightarrow\mathbb{N}$ are maps which each take nonzero values at only finitely many weights. Then $f$ \textbf{precedes} $g$, denoted $f \prec g$, if there is some $(c,d) \in \rm{Wt}\,$ such that \begin{itemize} \item $f(c',d') = g(c',d')$ whenever $(c',d') \succ (c,d)$, and \item $f(c,d) < g(c,d)$. \end{itemize} \end{dfn}
\begin{dfn}[PET ordering for tuples of polynomials] If $\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)$ is a tuple of polynomial maps then its \textbf{weight assignment} is the function $\rm{Wt}\,\mathcal{F}:\rm{Wt}\, \longrightarrow \mathbb{N}$ which to each $(c,d) \in \rm{Wt}\,$ assigns the number of $\sim_{\rm{LT}}$-equivalence classes of maps in $\mathcal{F}$ that have weight $(c,d)$.
Suppose now that $\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)$ and $\cal{G} = (\psi_1,\psi_2,\ldots,\psi_\ell)$ are families of polynomial maps $\mathbb{R}\times \mathbb{R}^r\longrightarrow G$. Then $\mathcal{F}$ \textbf{precedes} $\cal{G}$, denoted $\mathcal{F} \prec_\rm{PET} \cal{G}$, if \begin{itemize} \item either $\rm{Wt}\,\mathcal{F}\prec\rm{Wt}\,\cal{G}$, \item or $\rm{Wt}\,\mathcal{F} = \rm{Wt}\,\cal{G}$, and the sets of $\sim_\rm{LT}$-equivalence classes $\mathcal{F}/\!\!\sim_\rm{LT}$ and $\cal{G}/\!\!\sim_\rm{LT}$ can be matched in such a way that (i) their weights match, (ii) every class of $\mathcal{F}$ has cardinality no larger than its corresponding class in $\cal{G}$, and (iii) in at least one instance it is strictly smaller. \end{itemize} \end{dfn}
As in most proofs that use the PET ordering, it is needed for a particular pair of families of maps, one derived from the other according to the following definitions.
\begin{dfn}[Pivot] If $\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)$ is a tuple of polynomial maps $\mathbb{R}\times \mathbb{R}^r\longrightarrow G$ then a \textbf{pivot} for $\mathcal{F}$ is a PET-minimal member $\varphi \in \mathcal{F}$. \end{dfn}
\begin{dfn}[Derived family]\label{dfn:derived} Suppose that $\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)$ is a tuple of polynomial maps $\mathbb{R}\times \mathbb{R}^r\longrightarrow G$. Then for $i\leq k$ its \textbf{$i^{\rm{th}}$ derived family} consists of the following polynomial maps $\mathbb{R}\times (\mathbb{R}\times \mathbb{R}^r)\longrightarrow G$: \[(t,k,h)\mapsto \varphi_j(t,h)\varphi_i(t,h)^{-1}\quad\quad\hbox{for}\ j \in \{1,2,\ldots,k\}\setminus \{i\}\] and \[(t,k,h)\mapsto \varphi_j(k,h)^{-1}\varphi_j(t+k,h)\varphi_i(t,h)^{-1}\quad\quad\hbox{for}\ j \in \{1,2,\ldots,k\}.\] \end{dfn}
Note that the pre-multiplication by $\varphi_j(k,h)^{-1}$ in the last line has the consequence that if $\varphi_j(0,\cdot) \equiv e$ for every $i$, then the same is true of the derived family.
\begin{lem}\label{lem:PET-calcns} If $\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)$ with $\varphi_1$ a pivot, then its first derived family precedes it in the PET ordering. Also, the sub-tuple $(\varphi_2,\ldots,\varphi_k)$ precedes $\mathcal{F}$ in the PET ordering. \end{lem}
\textbf{Proof}\quad For each $j\geq 2$ consider the polynomial maps \[\varphi_j(t,h)\varphi_1(t,h)^{-1}\quad\quad\hbox{and}\quad\quad \varphi_j(k,h)^{-1}\varphi_j(t+k,h)\varphi_1(t,h)^{-1}.\] Because $\varphi_1$ is a pivot, \begin{itemize} \item either $\rm{wt}\,\varphi_j \succ \rm{wt}\,\varphi_1$, \item or $\rm{wt}\,\varphi_j = \rm{wt}\, \varphi_1$ but $\varphi_j \not\sim_\rm{LT} \varphi_1$, \item or $\varphi_j \sim_\rm{LT} \varphi_1$. \end{itemize} In the first case both of the new maps above still have weight equal to $\rm{wt}\,\varphi_j$, and are actually leading-term equivalent by comparing their leading terms in $G^c/G^{c+1}$ for $c = \rm{cl}\,\varphi_j$. By the same reasoning, if $\varphi_j \sim_\rm{LT} \varphi_{j'}$ then all four of the resulting new maps are leading-term equivalent.
The same conclusions hold when $\rm{wt}\,\varphi_1 = \rm{wt}\,\varphi_j$ but $\varphi_1 \not\sim_\rm{LT} \varphi_j$, since in this case the leading term of either of the above maps into $G^c/G^{c+1}$ is given by the nonzero difference of the leading terms of $\varphi_1$ and $\varphi_j$.
Lastly, if $\varphi_j \sim_\rm{LT} \varphi_1$, then these leading terms do cancel, and so both of the polynomial maps written above now strictly precede $\varphi_1$ in the PET ordering.
Therefore overall the equivalence classes of $\mathcal{F}$ and of its $1^{\rm{st}}$ derived family are in bijective weight-preserving correspondence, apart from the equivalence class of $\varphi_1$, which is replaced by (possibly several) classes in the derived family of strictly lower weight. This proves the first assertion.
The second assertion is obvious, because the removal of $\varphi_1$ either removes a whole $\sim_\rm{LT}$-equivalence class in case $\varphi_1$ is in a singleton class, and hence reduces $\rm{Wt}\,\mathcal{F}$ in $\prec$, or leaves the $\sim_\rm{LT}$-class structure of $\mathcal{F}$ unchanged but reduces the cardinality of exactly one of the classes. \nolinebreak\hspace{\stretch{1}}$\Box$
\section{Finer results for actions of nilpotent Lie groups}\label{sec:nil-actions}
For any inclusion $H \leq G$ of topological groups, $H^{\rm{n}}$ will denote the \textbf{topological normal closure} of $H$ in $G$: that is, the completion of the normal closure in $G$. This notation suppresses the dependence of this definition on the larger group $G$, which will always be clear from the context. Similarly, if $G$ is a connected and simply connected Lie group with Lie algebra $\mathfrak{g}$ and $V \leq \mathfrak{g}$ is a Lie subalgebra, then $V^\rm{n}$ denotes the Lie algebra generated by $\sum_g \rm{Ad}(g)V$ (equivalently, the Lie ideal generated by $V$ in $\mathfrak{g}$), so that $\exp(V^\rm{n}) = (\exp V)^\rm{n}$.
The first important result we need is a consequence of the classical Mautner Phenomenon. We will make use of the following expression of this argument as isolated by Margulis~\cite{Mar91}; it can also be found as Lemma 2.2 in Subsection 2.1 of Starkov~\cite{Sta00}.
\begin{lem}[Mautner Phenomenon]\label{lem:Mautner} Suppose that $\pi:G\actson \mathfrak{H}$ is a orthogonal representation of a connected Lie group, that $H \leq G$ is a connected Lie subgroup, that $g \in G$ and that there are a sequences $g_i\in G$ and $h_i,h_i' \in H$ with $g_i \longrightarrow e$ and $g_ih_ig_i^{-1}h_i'\longrightarrow g$. Then \[\rm{Fix}(\pi(g)) \supseteq \rm{Fix}(\pi(H)).\] \nolinebreak\hspace{\stretch{1}}$\Box$ \end{lem}
\begin{cor}\label{cor:normal-clos} If $G$ is a connected and simply connected nilpotent Lie group, $H\leq G$ is a connected closed subgroup and $\pi:G\actson \mathfrak{H}$ is an orthogonal representation, then \[\rm{Fix}(\pi(H)) = \rm{Fix}(\pi(H^\rm{n})).\] Similarly, if $(X,\Sigma,\mu,u)$ is a $G$-system then \[\Sigma^H = \Sigma^{H^\rm{n}}.\] \end{cor}
\textbf{Proof}\quad We focus on the first claim, since the second follows at once by considering the Koopman representation.
A simple calculation shows that $H^{\rm{n}} = \langle H [H,G]\rangle$, where $[H,G]$ is the subgroup generated by all commutators of elements of $H$ with elements of $G$. Let \[G = G_1 \unrhd G_2 \unrhd \ldots \unrhd G_s \unrhd G_{s+1} = \{e\}\] be a central series of $G$ in which each quotient $G_r/G_{r+1}$ has dimension one; for example, one may insert extra terms into the ascending central series, as in the construction of a strong Mal'cev basis. Let $\mathfrak{g}_r$ be the Lie algebra of $G_r$ and $\mathfrak{h}$ the Lie algebra of $H$.
We will prove by downwards induction on $r$ that if $1 \leq r\leq s$ then \[\rm{Fix}(\pi(\langle H[G_{r+1},H]\rangle)) = \rm{Fix}(\pi(\langle H[G_r,H]\rangle)).\] When $r = s$ the left-hand side here is $\rm{Fix}(\pi(H))$, while when $r = 1$ the right-hand side is $\rm{Fix}(\pi(H^{\rm{n}}))$, so this will complete the proof.
When $r=s$ the result is clear because $G_s$ is central in $G$, so now suppose the result is known for some $r+1 \leq s$. By replacing $H$ with $\langle H[G_{r+1},H]\rangle $, we may assume that they are equal, since another easy calculation shows that the sets \[(H[G_{r+1},H])\cdot \big[G_{r+1},(H[G_{r+1},H])\big] \quad\quad \hbox{and} \quad\quad H[G_{r+1},H]\] generate the same subgroup of $G$.
Let $V \in \mathfrak{g}_r\setminus \mathfrak{g}_{r+1}$, so that $\mathfrak{g}_r$ is the smallest Lie algebra containing both $V$ and $\mathfrak{g}_{r+1}$. The subgroup $\langle H[G_r,H]\rangle$ is connected, and its Lie algebra is the smallest Lie subalgebra of $\mathfrak{g}$ that contains both $\mathfrak{h}$ and $\{[V,U]:\ U \in \mathfrak{h}\}$. It therefore suffices to show that any $v \in \rm{Fix}(\pi(H))$ is also fixed by $\exp([V,U])$ for any $U \in \mathfrak{h}$.
This can be deduced using Lemma~\ref{lem:Mautner}. We need to show that if $U \in \mathfrak{h}$ then $\exp([V,U])$ is a limit of group elements of the form $g_ih_ig_i^{-1}h_i'$, as treated in that lemma. This follows from the Baker-Campbell-Hausdorff formula, which implies for any $t > 0$ that \[\exp(tV)\exp((1/t)U)\exp(-tV)\exp(-(1/t)U) = \exp([V,U] + \rm{O}(t))\exp(R(t)),\] where $R(t)$ collects those multiple commutators that involve at least one copy of $V$ and at least two entries from $\mathfrak{h}$, which must therefore lie in \[[\mathfrak{g}_{r+1},\mathfrak{h}]\subseteq \mathfrak{h}.\] Hence \[\exp([V,U]) = \exp(tV)\exp((1/t)U)\exp(-tV)\big(\exp(-(1/t)U)\exp(-R(t))\big),\] so letting $t = 1/i$ gives the conditions needed by Lemma~\ref{lem:Mautner}. \nolinebreak\hspace{\stretch{1}}$\Box$
\begin{cor}\label{cor:rel-ind-over-common} If $G$ is a connected and simply connected nilpotent Lie group, $H_1,H_2 \leq G$ are connected closed subgroups and $\pi:G\actson \mathfrak{H}$ is an orthogonal representation, then the subspaces \[\rm{Fix}(\pi(H_1)),\ \rm{Fix}(\pi(H_2))\leq \mathfrak{H}\] are relatively orthogonal over their common further subspace \[\rm{Fix}(\pi(\langle H_1\cup H_2\rangle))\] (meaning that \[\rm{Fix}(\pi(H_1)) \ominus \rm{Fix}(\pi(\langle H_1\cup H_2\rangle)) \perp \rm{Fix}(\pi(H_2)) \ominus \rm{Fix}(\pi(\langle H_1\cup H_2\rangle)).\quad )\] Similarly, if $(X,\Sigma,\mu,u)$ is a $G$-system then $\Sigma^{H_1}$ and $\Sigma^{H_2}$ are relatively independent over $\Sigma^{\langle H_1\cup H_2\rangle}$. \end{cor}
\textbf{Proof}\quad For a Lie subgroup $H \leq G$, since \[\rm{Fix}(\pi(H)) = \rm{Fix}(\pi(H^{\rm{n}}))\] and $H^{\rm{n}}\unlhd G$, this subspace of $\mathfrak{H}$ is actually invariant under the whole action $\pi$. Therefore the orthogonal projections $P_i$ onto $\rm{Fix}(\pi(H_i))$ both commute with $\pi$.
It follows that $P_1P_2$ has image contained in $\rm{Fix}(\pi(\langle H_1\cup H_2\rangle))$. Since conversely any vector fixed by both $H_1$ and $H_2$ is also fixed by $P_1$ and $P_2$, it follows that $P_1P_2$ is an idempotent with image equal to $\rm{Fix}(\pi(\langle H_1\cup H_2\rangle))$, and the same holds for $P_2P_1$. Hence for any vectors $u \in \mathfrak{H}$ and $v \in \rm{Fix}(\pi(\langle H_1\cup H_2\rangle))$ one has \[\langle u,v\rangle = \langle u,(P_1P_2)v\rangle = \langle (P_2P_1)u,v\rangle,\] so in fact $P_2P_1$ is the orthogonal projection onto its image, and similarly for $P_1P_2$.
Finally, if $v_i \in \rm{Fix}(\pi(H_i))$ for $i=1,2$ then this implies \[\langle v_1,v_2\rangle = \langle P_1v_1,P_2v_2\rangle = \langle P_2P_1v_1,v_2\rangle = \langle (P_2P_1)v_1,(P_2P_1)v_2\rangle,\] which is the desired relative orthogonality.
In the case of a $G$-system, applying the above result to the Koopman representation tells us that for any $\Sigma^{H_i}$-measurable functions $f_i \in L^2(\mu_i)$ for $i=1,2$ we have
\[\int_X f_1f_2\,\mathrm{d}\mu = \int_X \mathsf{E}(f_1\,|\,\Sigma^{\langle H_1\cup H_2\rangle})\mathsf{E}(f_2\,|\,\Sigma^{\langle H_1\cup H_2\rangle})\,\mathrm{d}\mu,\] and this is the desired relative independence. \nolinebreak\hspace{\stretch{1}}$\Box$
\textbf{Example}\quad The above proofs are intimately tied to the nilpotency of $G$, so it is worth including an example of a solvable Lie group $G$ and representation $\pi:G\actson \mathfrak{H}$ to show that this restriction is really needed.
Let $\rho:\mathbb{R}\actson\mathbb{C}$ be the rotation action defined by \[\rho^tz := \rm{e}^{2\pi\rm{i} t}z\] and let $G := \mathbb{C}\rtimes_\rho\mathbb{R}$. This is a simple three-dimensional solvable Lie group; in coordinates it is $\mathbb{C}\times \mathbb{R}$ with the product \[(u,s)\cdot (v,t) := (\rho^tu + v,s + t).\] It may also be interpreted as a group extension of $\mathbb{Z}$ by the group $\mathbb{C}\rtimes \rm{S}^1$ of orientation-preserving isometries of $\mathbb{C}$, and this picture gives an action $\xi:G\actson \mathbb{C}$ with kernel isomorphic to $\mathbb{Z}$.
For each $v \in \mathbb{C}$ let $G_v$ be the isotropy subgroup $\{g \in G:\ \xi^gv = v\}$. Then $G_v \cong \mathbb{R}$, and $G_v$ and $G_w$ are conjugated by the `translational' element $(w - v,0) \in G$. Moreover, since any translation of $\mathbb{C}$ may be obtained as a composite of two rotations about different points, the groups $G_v$ together generate $G$, and so $G_v^\rm{n} = G$ for every $v$. A simple calculation shows that in coordinates one has \[G_v = \{(v - \rho^t(v),t):\ t \in \mathbb{R}\}.\]
Now consider the action $\pi:G\actson L_\mathbb{C}^2(m_{\rm{S}^1})\cong L^2(m_{\rm{S}^1})\otimes_\mathbb{R} \mathbb{C}$ defined by \[(\pi(u,t)f)(z) := \rm{e}^{2\pi\rm{i}\langle \rho^{-t}u,z\rangle}f(\rho^tz),\] where $\langle \rho^{-t}u,z\rangle$ is the usual inner product of $\mathbb{C}$ regarded as a vector space over $\mathbb{R}$. (A routine check shows that this formula correctly defines an action of $G$.) The subspace $\rm{Fix}(\pi(G_v))$ consists of those functions $f$ such that \[\rm{e}^{2\pi\rm{i}\langle \rho^{-t}u,z\rangle}f(\rho^tz) = f(z)\quad\forall z \in \rm{S}^1,\,t\in\mathbb{R}:\] that is, of the constant complex multiples of the function $z\mapsto \rm{e}^{-2\pi\rm{i}\langle u,z\rangle}$. These are all distinct $2$-real-dimensional subspaces of $L_\mathbb{C}^2(m_{\rm{S}^1})$, so are not equal to $\rm{Fix}(\pi(G)) = \{0\}$, and also (by considering close-by values of $v$, for instance) they are not pairwise orthogonal. \nolinebreak\hspace{\stretch{1}}$\lhd$
Another useful result in a similar vein to Corollary~\ref{cor:normal-clos} is the following simple relative of the Pugh-Shub Theorem~\cite{PugShu71}. An adaptation of their theorem to the setting of nilpotent groups has previously been given by Ratner in Proposition 5.1 of~\cite{Rat91-a}. Although our formulation is superficially different from hers, each version can easily be deduced from the proof of the other.
\begin{lem}\label{lem:PS} Let $\pi:G\actson \mathfrak{H}$ be an orthogonal representation of a connected nilpotent Lie group, and let $\rm{Lat}\,\mathfrak{g}$ be the family of all proper Lie subalgebras of $\mathfrak{g}$. Then the subfamily \[\cal{A} := \{V \in \rm{Lat}\,\mathfrak{g}:\ \rm{Fix}(\pi(\exp V))\supsetneqq \rm{Fix}(\pi(G))\}\] has countably many maximal elements. \end{lem}
\textbf{Proof}\quad Suppose that $V_1,V_2\in \cal{A}$ are two distinct maximal elements. Then the Lie subalgebra generated by $V_1 + V_2$ must strictly contain them both, and hence \[\rm{Fix}(\pi(\langle \exp V_1\cup \exp V_2\rangle)) = \rm{Fix}(\pi(G)),\] by their maximality.
Corollary~\ref{cor:rel-ind-over-common} now implies that $\rm{Fix}(\pi(\exp V_1))$ and $\rm{Fix}(\pi(\exp V_2))$ are relatively orthogonal over $\rm{Fix}(\pi(G))$. Therefore there can be at most countably many of these maximal elements of $\cal{A}$, because $\mathfrak{H}$ is separable: indeed, if $\mathcal{A}_1\subseteq \mathcal{A}$ were an uncountable collection of maximal elements, then choosing some representative unit vectors \[x_V \in \rm{Fix}(\pi(\exp V))\ominus \rm{Fix}(\pi(G))\quad\forall V \in \cal{A}_1\] would give an uncountable sequence of orthonormal vectors in $\mathfrak{H}$, and hence a contradiction. \nolinebreak\hspace{\stretch{1}}$\Box$
\textbf{Example}\quad It is certainly not true that $\cal{A}_1$ is generally finite. For example, consider the obvious rotation action of $\mathbb{R}^2$ on $\mathbb{T}^2$ and let $\pi:\mathbb{R}^2\actson L^2(m_{\mathbb{T}^2})$ be the resulting orthogonal representation. Then any one-dimensional subgroup $\mathbb{R}\bf{v} \leq \mathbb{R}^2$ of rational slope has some non-trivial invariant functions, but the whole $\mathbb{R}^2$-action is ergodic. \nolinebreak\hspace{\stretch{1}}$\lhd$
This conclusion of countability (rather than finitude) gives rise to the need for the notion of Zariski genericity (rather than simply Zariski openness). The connection between them is established by the following.
\begin{cor}\label{cor:generically-const-fpspace} If $\varphi:\mathbb{R}\times \mathbb{R}^r\longrightarrow G$ is a polynomial map into a connected and simply connected nilpotent Lie group and $\pi:G\actson \mathfrak{H}$ is an orthogonal representation, then the map \[\mathbb{R}^r\longrightarrow (\hbox{subspaces of $\mathfrak{H}$}):h\mapsto \rm{Fix}(\pi(\langle \rm{img}\,\varphi(\cdot,h) \rangle))\] takes the fixed value $\rm{Fix}(\pi(\langle\rm{img}\,\varphi\rangle))$ Zariski generically. Similarly, if $(X,\Sigma,\mu,u)$ is a $G$-system then the $\sigma$-subalgebra $\Sigma^{\langle \rm{img}\, \varphi(\cdot,h)\rangle}$ agrees with $\Sigma^{\langle \rm{img}\, \varphi \rangle}$ up to $\mu$-negligible sets for Zariski generic $h$. \end{cor}
\textbf{Proof}\quad Replacing $G$ with $\langle \rm{img}\, \varphi\rangle^{\rm{n}}$ if necessary, we may assume they are equal.
Let $\cal{A} \leq \rm{Lat}\, \mathfrak{g}$ be the family of all Lie subalgebras with fixed-point subspaces strictly larger than $\rm{Fix}(\pi(G))$, as in Lemma~\ref{lem:PS}, and let $\cal{A}_1 \subseteq \cal{A}$ be the subfamily of maximal elements of $\cal{A}$, so Lemma~\ref{lem:PS} shows that this is countable. Since $\rm{Fix}(\pi(\exp V^\rm{n})) = \rm{Fix}(\pi(\exp V))$ for any $V \in \rm{Lat}\,\mathfrak{g}$ by Corollary~\ref{cor:normal-clos}, by maximality we must have $V = V^\rm{n}$ for every $V \in \cal{A}_1$.
Now, \begin{multline*} \{h:\ \rm{Fix}(\pi(\langle \rm{img}\, \varphi(\cdot,h) \rangle)) \supsetneqq \rm{Fix}(\pi(\langle \rm{img}\, \varphi\rangle)) = \rm{Fix}(\pi(G))\}\\ = \bigcup_{V \in \cal{A}_1}\{h:\ \varphi(t,h) \in \exp V\ \forall t\in\mathbb{R}\}, \end{multline*} and so by the countability of $\cal{A}_1$ it suffices to show that each individual set $\{h:\ \exp\varphi(t,h) \in V\ \forall t\in\mathbb{R}\}$ is proper and Zariski closed in $\mathbb{R}^r$. Since $\rm{Fix}(\pi(\langle \rm{img}\, \varphi\rangle)) = \rm{Fix}(\pi(G))$, the subgroup $\langle \rm{img}\, \varphi\rangle$ is not contained in $\exp V$ for any $V \in \cal{A}_1$, and so in fact $\rm{img}\, \varphi \not\subseteq \exp V$ (since $\exp V$ is itself a subgroup).
Therefore for any $V \in \cal{A}_1$ we may choose a linear form $\ell \in \mathfrak{g}^\ast$ which annihilates $V$ but does not annihilate the whole of $\exp^{-1}\langle\rm{img}\, \varphi\rangle$, and now one has \[\{h:\ \varphi(t,h) \in \exp V\ \forall t \in \mathbb{R}\} \subseteq \{h:\ \ell(\exp^{-1}(\varphi(t,h))) = 0\ \forall t \in \mathbb{R}\}.\]
However, the map $(t,h)\mapsto \ell(\exp^{-1}(\varphi(t,h)))$ is a polynomial $\mathbb{R}\times \mathbb{R}^n\longrightarrow \mathbb{R}$, by Corollary~\ref{cor:exppoly-is-poly}. By collecting monomials it may be expressed as \[t^d p_d(h) + t^{d-1}p_{d-1}(h) + \cdots + t p_1(h) + p_0(h)\] for some $p_i \in \mathbb{R}[h_1,\ldots,h_r]$, and now \[\{h:\ \ell(\exp^{-1}(\varphi(t,h))) = 0\ \forall t \in \mathbb{R}\} = \bigcap_{i=0}^d \{h:\ p_i(h) = 0\}.\] This is manifestly a real algebraic subvariety of $\mathbb{R}^n$, and it is proper because the map $\ell\circ \exp^{-1}\circ \varphi$ was chosen so as not to vanish identically, so it is a Zariski meagre subset of $\mathbb{R}^n$, as required.
Once again the conclusion about $G$-systems follows at once by considering Koopman representations. \nolinebreak\hspace{\stretch{1}}$\Box$
\section{Idempotent classes}\label{sec:idem}
The final ingredients needed for the proof of Theorem~\ref{thm:bigmain} are some results on `idempotent classes' of probability-preserving systems. These were introduced in~\cite{Aus--lindeppleasant1,Aus--lindeppleasant2} building on the earlier notion of a `pleasant extensions' of systems~\cite{Aus--nonconv} (and also worth comparing with Host's `magic extensions' from~\cite{Hos09}).
\begin{dfn}[Idempotent and hereditary classes] For any l.c.s.c. group $G$, a class $\sf{C}$ of jointly-measurable, probability-preserving $G$-systems is \textbf{idempotent} if it is closed under measure-theoretic isomorphisms, inverse limits and arbitrary joinings. It is \textbf{hereditary} if it is closed under passing to factors. \end{dfn}
\textbf{Example}\quad The leading examples of idempotent classes are those of the form \[\mathsf{C}_0^{H_1}\vee \cdots \vee \mathsf{C}_0^{H_\ell}\] for some closed normal subgroups $H_1,H_2,\ldots,H_\ell \unlhd G$, where this denotes the class of all $G$-systems which can be expressed as a joining of systems $\mathbf{Y}_1$, $\mathbf{Y}_2$, \ldots, $\mathbf{Y}_\ell$ where each $\mathbf{Y}_i$ has trivial $H_i$-subaction. \nolinebreak\hspace{\stretch{1}}$\lhd$
The reference~\cite{Aus--thesis} contains an introduction to idempotent classes in the case of a discrete acting group. In earlier works, idempotent classes were introduced to set up the theory of `sated extensions' of probability-preserving systems, which then play the primary r\^ole in applications of these ideas. However, sated extensions are a little inconvenient in the present setting, and so we will work instead with some more elementary results about idempotent classes. The reasoning behind this change of perspective relates to the need to change the group that acts on a system, which will appear in Section~\ref{sec:char-factor}.
In addition, our interest here is in actions of Lie groups, for which these ideas have not previously appeared in the literature. Therefore the basic definitions and results we need have been included below for completeness. Only very simple changes and additions are needed to the treatments in~\cite{Aus--thesis} or~\cite{Aus--lindeppleasant1}. We will also introduce a slightly novel example of an idempotent class, useful for handling the polynomial maps of the present setting.
\begin{lem}[C.f. Lemma 2.2.2 in~\cite{Aus--thesis}] If $\mathsf{C}$ is an idempotent class of $G$-systems and $\mathbf{X} = (X,\Sigma,\mu,u)$ is any $G$-system, then $\mathbf{X}$ has an essentially unique largest factor $\Lambda \leq \Sigma$ that may be generated by a factor map to a member of $\mathsf{C}$. \end{lem}
\textbf{Proof}\quad It is clear that under the above assumption the family of factors \[\{\Xi\leq \Sigma:\ \Xi\ \hbox{is generated by a factor map to a system in}\ \mathsf{C}\}\] is nonempty (it contains $\{\emptyset,X\}$, which corresponds to the trivial system), upwards directed (because $\mathsf{C}$ is closed under joinings) and closed under taking $\sigma$-algebra completions of increasing unions (because $\mathsf{C}$ is closed under inverse limits). There is therefore a maximal $\sigma$-subalgebra in this family. \nolinebreak\hspace{\stretch{1}}$\Box$
\begin{dfn}[Maximal $\mathsf{C}$-factors] The factor $\Lambda$ obtained in the preceding lemma is the \textbf{maximal $\mathsf{C}$-factor} of $(X,\Sigma,\mu,u)$, and will sometimes be denoted by the (slightly abusive) notation $\mathsf{C}\Sigma$. Similarly, we will sometimes denote by $\mathsf{C}\mathbf{X}$ a choice of a member of $\mathsf{C}$ such that $\mathsf{C}\Sigma$ can be generated by a factor map $\mathbf{X}\longrightarrow \mathsf{C}\mathbf{X}$. \end{dfn}
The importance of idempotent classes derives from the following proposition.
\begin{prop}[Joinings to members of idempotent classes]\label{prop:idem-inverse} Suppose that $\mathsf{C}$ is a hereditary idempotent class of $G$-systems, that $\mathbf{X} = (X,\Sigma,\mu,u)$ is any $G$-system and $\mathbf{Y} = (Y,\Phi,\nu,v)$ is a member of $\mathsf{C}$. Then for any joining \begin{center} $\phantom{i}$\xymatrix{& \mathbf{Z} = (X\times Y,\Sigma\otimes \Phi,\lambda,u\times v)\ar_{\pi}[dl]\ar^{\xi}[dr]\\ \mathbf{X} & &\mathbf{Y}, } \end{center} where $\pi$ and $\xi$ are the coordinate projections, there is some further factor $\Lambda$ of $\mathbf{X}$ which is generated by a factor map to a member of $\mathsf{C}$ and such that the factor $\pi^{-1}(\Sigma)$ is relatively independent from $\xi^{-1}(\Phi)$ over $\pi^{-1}(\Lambda)$. Concretely, this means that
\[\int_Z f(x)g(y)\,\lambda(\mathrm{d} x,\mathrm{d} y) = \int_Z \mathsf{E}_\mu(f\,|\,\Lambda)(x)g(y)\,\lambda(\mathrm{d} x,\mathrm{d} y)\] for any $f \in L^\infty(\mu)$ and $g \in L^\infty(\nu)$ (so we do not require that $\pi^{-1}(\Lambda)$ also be contained in $\xi^{-1}(\Phi)$ up to negligible sets). \end{prop}
\textbf{Proof}\quad We will construct from the joining $\lambda$ a new joining of $\mathbf{X}$ with a $\mathsf{C}$-system such that $\lambda$ is relatively independent over a factor of $\mathbf{X}$ which in that new joining is actually determined by the coordinate in the $\mathsf{C}$-system.
Let $\Lambda:X\longrightarrow \rm{Pr}\,Y$ be a disintegration of $\lambda$ over the coordinate projection to $X$. Form the infinite Cartesian product \[Z' := X\times Y^\mathbb{N}\] and let $\lambda'$ be the $(u\times v^{\times \mathbb{N}})$-invariant measure obtained as the relatively independent product of copies of $\lambda$: \[\lambda' = \int_X \delta_x\otimes\Lambda(x,\cdot)^{\otimes\mathbb{N}}\,\mu(\mathrm{d} x).\] Let $\pi':Z'\longrightarrow X$ be the first coordinate projection, and let $\lambda_1$ be the image of $\lambda'$ under the projection to $Y^\mathbb{N}$.
Finally, let $\Lambda \leq \Sigma$ be the $\sigma$-algebra of those sets which are $\lambda'$-a.s. determined by the remaining coordinates of $Z'$: \[\Lambda := \{A \in \Sigma:\ \exists B \in \Phi^{\otimes \mathbb{N}}\ \rm{s.t.}\ \lambda'((A\times Y^\mathbb{N})\triangle (X\times B)) = 0\}.\] This is clearly a factor of $\mathbf{X}$, and by definition it also specifies a factor of the system $(Y^\mathbb{N},\Phi^{\otimes\mathbb{N}},\lambda_1,v^{\times\mathbb{N}})$ (since each $A \in \Lambda$ is identified with a member of $\Phi^{\otimes \mathbb{N}}$, uniquely up to negligible sets). Let $\Lambda' := (\pi')^{-1}(\Lambda)$, so up to negligible sets this is measurable with respect to either $\pi'$ or the coordinate projection $Z'\longrightarrow Y^\mathbb{N}$. The system $(Y^\mathbb{N},\Phi^{\otimes\mathbb{N}},\lambda_1,v^{\times\mathbb{N}})$ is a member of $\mathsf{C}$, because $\mathbf{Y} \in \mathsf{C}$ and $\mathsf{C}$ is closed under joinings; and hence the factor of $\mathbf{X}$ generated by $\Lambda$ is also in $\mathsf{C}$, because it may be identified with a factor of that member of $\mathsf{C}$ and $\mathsf{C}$ is hereditary.
Now let $f \in L^\infty(\mu)$ and $g \in L^\infty(\nu)$. To prove the desired equality of integrals, it suffices to show that
\[\mathsf{E}_\mu(f\,|\,\Lambda) = 0 \quad \Longrightarrow\quad \mathsf{E}_\lambda(f\circ \pi\,|\,\{\emptyset,X\}\otimes \Phi) = 0,\]
since an arbitrary $f$ may be decomposed as $\mathsf{E}_\mu(f\,|\,\Lambda) + (f - \mathsf{E}_\mu(f\,|\,\Lambda))$, and this decomposition inserted into the two integrals against $g$ then shows that they are equal.
Thus, suppose conversely that
\[g := \mathsf{E}_\lambda(f\circ \pi\,|\,\{\emptyset,X\}\otimes \Phi) \neq 0,\] and hence
\[\int_Z (f\circ \pi)\cdot g\,\mathrm{d}\lambda = \|g\|_2^2 \neq 0.\]
For each $i \in \mathbb{N}$ let $\alpha_i:Z'\longrightarrow Y$ be the coordinate projection to the $i^\rm{th}$ copy of $Y$, and let $g_i := g\circ\alpha_i$. By the construction of $\lambda'$, the pair of coordinates $(\pi,\alpha_i):Z'\longrightarrow Z$ has the distribution $\lambda$ for any $i$. This has the following two consequences: \begin{itemize} \item for any $M\geq 1$ one has
\[\int_{Z'}(f\circ \pi')\Big(\frac{1}{M}\sum_{m=1}^Mg_m\Big)\,\mathrm{d}\lambda' = \int_Z (f\circ \pi)\cdot g\,\mathrm{d}\lambda = \|g\|_2^2 \neq 0;\] \item for all $i$ one has
\[\mathsf{E}_{\lambda'}(g_i\,|\,\Sigma\otimes \{\emptyset,Y^\mathbb{N}\}) = \mathsf{E}_{\lambda'}(g_1\,|\,\Sigma\otimes \{\emptyset,Y^\mathbb{N}\}),\] so we may let $h$ be this common conditional expectation. \end{itemize}
Next, since all the $Y$-valued coordinates in $Z'$ are relatively independent under $\lambda'$ given the $X$-coordinate, one has \[\int_{Z'}(g_i - h)(g_j - h)\,\mathrm{d}\lambda' = 0 \quad\hbox{whenever}\ i\neq j,\] and as $M\longrightarrow\infty$ this implies the simple estimate
\[\Big\|\frac{1}{M}\sum_{m=1}^Mg_m - h\Big\|_2^2 = \Big\|\frac{1}{M}\sum_{m=1}^M(g_m - h)\Big\|_2^2 = \frac{1}{M^2}\sum_{m=1}^M\|g_m - h\|_2^2 = \rm{O}\Big(\frac{1}{M}\Big).\] Hence \[\frac{1}{M}\sum_{m=1}^Mg_m \longrightarrow h\]
in $\|\cdot\|_2$ as $M\longrightarrow\infty$. On the one hand, this implies that $h$ is a limit of functions measurable with respect to $\{\emptyset,X\}\otimes \Phi^{\otimes\mathbb{N}}$, hence is itself virtually measurable with respect to that $\sigma$-algebra. Therefore as a function on $X$ it must actually be $\Lambda$-measurable. On the other hand, the above non-vanishing integral now gives \[\int_{Z'}(f\circ \pi')\cdot h\,\mathrm{d}\lambda' \neq 0.\]
Therefore $\mathsf{E}_\mu(f\,|\,\Lambda)\neq 0$, so since $\Lambda$ defines a $\mathsf{C}$-factor of $\mathbf{X}$ this completes the proof. \nolinebreak\hspace{\stretch{1}}$\Box$
\textbf{Remark}\quad This proof can be presented in several superficially different ways. On the one hand, it can be deduced almost immediately from a well-chosen appeal to the de Finetti-Hewitt-Savage Theorem, as in the paper~\cite{LesRitdelaRue03} of Lesigne, Rittaud and de la Rue (see also Section 8.5 in Glasner~\cite{Gla03}). On the other, it is a close cousin of the proof that for any idempotent class $\mathsf{C}$, any system $\mathbf{X}$ has an extension that is `$\mathsf{C}$-sated' (Theorem 2.3.2 in~\cite{Aus--thesis}). \nolinebreak\hspace{\stretch{1}}$\lhd$
In previous applications, the idempotent classes of importance were those of the form $\mathsf{C}_0^{H_1}\vee\cdots\vee \mathsf{C}_0^{H_\ell}$, introduced as examples above. Here we will need some slightly more complicated examples, because in order to account for the possible relations among the polynomials of a tuple $\mathcal{F}$ we will need to consider simultaneously actions of $G$ and also some `more free' covering group $q:\t{G}\longrightarrow G$.
\begin{lem}\label{lem:still-idemp-1} Suppose that $q:H\longrightarrow G$ is a continuous homomorphism of l.c.s.c. groups and that $\mathsf{C}$ is an idempotent class of $H$-systems. Then \[q_\ast\mathsf{C} := \{\hbox{$G$-systems $\mathbf{X}$ such that $\mathbf{X}^{q(\cdot)} \in \mathsf{C}$}\}\] is an idempotent class of $G$-systems, and it is hereditary if $\mathsf{C}$ is hereditary. \end{lem}
\textbf{Proof}\quad We must verify that $q_\ast\mathsf{C}$ is closed under joinings and inverse limits. Both are immediate: if $\mathbf{Y}$ is a joining of $\mathbf{X}_i \in q_\ast\mathsf{C}$ for $i=1,2$ then $\mathbf{Y}^{q(\cdot)}$ is the corresponding joining of $\mathbf{X}_i^{q(\cdot)}$, so lies in $\mathsf{C}$ because $\mathsf{C}$ is closed under joinings, and similarly for inverse limits. The last assertion also follows at once from the definition. \nolinebreak\hspace{\stretch{1}}$\Box$
\begin{dfn} The new class $q_\ast\mathsf{C}$ constructed in the previous lemma is the \textbf{image} of $\mathsf{C}$ under $q$. \end{dfn}
\begin{lem}\label{lem:still-idemp-2} If $\mathsf{C}$ is an idempotent class of $G$-systems then \[\hat{\mathsf{C}} := \{\mathbf{X}:\ \mathbf{X}\ \hbox{is a factor of a member of}\ \mathsf{C}\}\] is a hereditary idempotent class. \end{lem}
\textbf{Proof}\quad The hereditary property is built into the definition, so once again it remains to check closure under joinings and inverse limits. Both are routine, so we give the proof only for joinings. Suppose that $\mathbf{Y}_i = (Y_i,\Phi_i,\nu_i,v_i) \in \hat{\mathsf{C}}$ for $i=1,2$, that $\pi_i:\mathbf{X}_i\longrightarrow \mathbf{Y}_i$ are factors with $\mathbf{X}_i = (X_i,\Sigma_i,\mu_i,u_i)\in \mathsf{C}$ for $i=1,2$, and that $\mathbf{Z} = (Y_1\times Y_2, \Phi_1 \otimes \Phi_2, \lambda, v_1\times v_2)$ defines a joining of $\mathbf{Y}_1$ and $\mathbf{Y}_2$. Then we may define a joining of $\mathbf{X}_1$ and $\mathbf{X}_2$ as a relatively independent product: letting $P_i:Y_i\longrightarrow \Pr(X_i)$ be a probability kernel representing the disintegration of $\mu_i$ over $\pi_i$, define \[\lambda':= \int_{Y_1\times Y_2}P(y_1,\cdot)\otimes P(y_2,\cdot)\,\lambda(\mathrm{d} y_1,\mathrm{d} y_2).\]
Now $(X_1\times X_2,\Sigma_1\otimes \Sigma_2,\lambda',u_1\times u_2)$ is a joining of $\mathbf{X}_1$ and $\mathbf{X}_2$, and hence a member of $\mathsf{C}$. The map $(x_1,x_2)\mapsto (\pi_1(x_1),\pi_2(x_2))$ witnesses $\mathbf{Z}$ as a factor of this member of $\mathsf{C}$, so $\mathbf{Z} \in \hat{\mathsf{C}}$. \nolinebreak\hspace{\stretch{1}}$\Box$
\begin{dfn}\label{dfn:down-clos} The class $\hat{\mathsf{C}}$ constructed above is the \textbf{downward closure} of $\mathsf{C}$. \end{dfn}
When we come to apply this machinery, satedness relative only to classes of the form $\mathsf{C}_0^{H_1} \vee \cdots \vee \mathsf{C}_0^{H_\ell}$ will not give us quite enough purchase over our situation. Instead we will need to first form an extended group $q:\t{G}\twoheadrightarrow G$ (in which copies of certain subgroups of $G$ have been made `more independent': see Section~\ref{sec:char-factor}), and then for some subgroups $\t{H}_1$, $\t{H}_2$, \ldots, $\t{H}_\ell \unlhd \t{G}$ we will need to use satedness relative to the class \[q_\ast\big(\ (\mathsf{C}_0^{\t{H}_1} \vee \cdots \vee \mathsf{C}_0^{\t{H}_\ell})^\wedge\ \big).\] In prose, this is \begin{quote} `The class of $G$-systems which, upon re-writing them as $\t{G}$-systems, become factors of joinings of systems in which one of the $\t{H}_i$ acts trivially.' \end{quote} This manoeuvre will appear during the proof of Proposition~\ref{prop:vdC-appn} below, where the need for it will become clearer. The particular way in which we will appeal to satedness with respect to such a class is captured by the following lemma.
\begin{lem}\label{lem:corn-descent} Suppose that $q:H\twoheadrightarrow G$ is a continuous epimorphism of Lie groups, that $\mathsf{C}$ is an idempotent class of $H$-systems and that $\mathbf{X} = (X,\Sigma,\mu,u)$ is a $G$-system. In addition, suppose that $f \in L^\infty(\mu)$ and that
\[\pi:\mathbf{Y} = (Y,\Phi,\nu,v)\longrightarrow \mathbf{X}^{q(\cdot)}\] is an extension of $H$-systems such that \[\mathsf{E}_\nu(f\circ \pi\,|\,\mathsf{C}\Phi) \neq 0.\] Then also
\[\mathsf{E}_\mu(f\,|\,(q_\ast\hat{\mathsf{C}})\Sigma) \neq 0.\] \end{lem}
\textbf{Proof}\quad We have $\mathsf{E}_\nu(f\circ \pi\,|\,\mathsf{C}\Phi) \neq 0$ by assumption, but on the other hand the function $f\circ\pi$ is invariant under $v^h$ for every $h \in \ker q$: \[f\circ\pi\circ v^h = f\circ u^{q(h)}\circ \pi = f\circ u^e\circ\pi = f\circ \pi.\]
Since $\mathsf{C}\Phi$ is a factor of the whole $H$-action $v$, the conditional expectation operator $\mathsf{E}_\nu(\,\cdot\,|\,\mathsf{C}\Phi)$ preserves this $\ker q$-invariance. Therefore $\mathsf{E}_\nu(f\circ \pi\,|\,\mathsf{C}\Phi)$ is measurable not only with respect to $\mathsf{C}\Phi$ but also with respect to $\Phi^{\ker q}$.
Let $\alpha:\mathbf{Y}\longrightarrow \mathbf{Z}$ be a factor map onto another system which generates the factor $\Phi^{\ker q}\cap \mathsf{C}\Phi \leq \Phi$, so its target system $\mathbf{Z}$ is an element of $\hat{\mathsf{C}}$ and has $\ker q$ acting trivially. Therefore this action of $H$ may be identified with an action of $G$ composed through $q$, say $\mathbf{Z} = \mathbf{W}^{q(\cdot)}$ for some $G$-system $\mathbf{W}$. (The joint measurability of $v$ implies that of the action of $G$ on $\mathbf{W}$, simply by choosing an everywhere-defined Borel selector $G\longrightarrow H$, as we clearly may for Lie group epimorphisms because they are are locally diffeomorphic to orthogonal projections.)
Now the diagram \begin{center} $\phantom{i}$\xymatrix{ & \mathbf{Y}\ar[dl]_\pi\ar[dr]^\alpha\\ \mathbf{X}^{q(\cdot)} & & \mathbf{W}^{q(\cdot)} } \end{center} defines a joining of $\mathbf{X}^{q(\cdot)}$ and $\mathbf{W}^{q(\cdot)}$. It therefore also defines a joining of $\mathbf{X}$ and $\mathbf{W}$, by simply identifying it with an invariant measure on $X\times W$ and writing the actions in terms of $G$ rather than $H$.
Our assumption on $f$ gives that $\mathsf{E}(f\circ \pi\,|\,\alpha) \neq 0$. Therefore, within this joining of $\mathbf{X}$ and $\mathbf{W}$, the lift of $f$ has non-trivial conditional expectation onto the copy of $\mathbf{W}$, which is a member of $q_\ast\hat{\mathsf{C}}$, and so by Proposition~\ref{prop:idem-inverse} and Lemma~\ref{lem:still-idemp-2} this implies $\mathsf{E}_\mu(f\,|\,(q_\ast\hat{\mathsf{C}})\Sigma) \neq 0$. \nolinebreak\hspace{\stretch{1}}$\Box$
\section{The case of two-fold joinings}\label{sec:k=2}
The case of Theorem~\ref{thm:main} in which $k=1$ will form the base of an inductive proof of the full theorem, and must be handled separately. Its proof is quite routine in the shadow of other works in this area, but it does already contain an appeal to the van der Corput estimate and an induction on the PET ordering for single polynomials (rather than whole tuples). It thus serves as a helpful preparation for the full induction that is to come.
\begin{prop}\label{prop:k=1} Suppose that $\pi:G\actson \mathfrak{H}$ is an orthogonal representation and $\varphi:\mathbb{R}\times \mathbb{R}^r \longrightarrow G$ is a polynomial map such that $\varphi(0,\cdot) \equiv e$. Then the operator averages \[\barint_0^T \pi(\varphi(t,h))\,\mathrm{d} t\] converge in the strong operator topology for every $h$, and the limit operator $P_h$ is Zariski generically equal to the orthoprojection onto $\rm{Fix}(\pi(\langle \rm{img}\,\varphi\rangle))$. \end{prop}
\textbf{Proof}\quad\textbf{Step 1}\quad First suppose that $\varphi$ is linear in the first coordinate, meaning that $\varphi(\cdot,h)$ is a homomorphism for every $h \in \mathbb{R}^r$. Then for every $h$ the map $t\mapsto \varphi(t,h)$ takes values in a $1$-parameter subgroup of $G$, and so the classical ergodic theorem for orthogonal flows gives \[\barint_0^T \pi(\varphi(t,h))\,\mathrm{d} t\stackrel{\rm{SOT}}{\longrightarrow} P_h,\] where $P_h$ is the orthoprojection onto $\rm{Fix}(\pi(\langle \rm{img}\,\varphi(\cdot,h)\rangle))$. By Corollary~\ref{cor:generically-const-fpspace} this equals $\rm{Fix}(\pi(\langle \rm{img}\, \varphi\rangle))$ Zariski generically, and so the proof is complete in the linear case.
\quad\textbf{Step 2}\quad For arbitrary polynomial maps $\varphi$ we show by PET induction that if \[\barint_0^T \pi(\varphi(t,h))v\,\mathrm{d} t\,\,\not\!\!\longrightarrow 0\] for some $v \in \mathfrak{H}$, then $P_hv \neq 0$, where again $P_h$ is the orthoprojection onto $\rm{Fix}(\pi(\langle \rm{img}\, \varphi(\cdot,h)\rangle))$. By decomposing an arbitrary $v$ as $(1 - P_h)v + P_hv$ and appealing to Corollary~\ref{cor:generically-const-fpspace} again, this will complete the proof.
If \[\barint_0^T \pi(\varphi(t,h))v\,\mathrm{d} t\,\,\not\!\!\longrightarrow 0\] then the van der Corput estimate~\ref{lem:vdC} gives that also \begin{multline*} \barint_0^S\barint_0^T \langle\pi(\varphi(t+s,h))v,\pi(\varphi(t,h)v\rangle\,\mathrm{d} t\,\mathrm{d} s\\ =\Big\langle\barint_0^S\barint_0^T \pi(\varphi(t,h)^{-1}\varphi(t+s,h))v\,\mathrm{d} t\,\mathrm{d} s,\ v\Big\rangle \,\,\not\!\!\longrightarrow 0 \end{multline*} as $T\longrightarrow\infty$ and then $S\longrightarrow \infty$.
By the special case of Lemma~\ref{lem:PET-calcns} for singleton families we have \[\{(t,s,h)\mapsto \varphi(t,h)^{-1}\varphi(t+s,h)\}\prec_\rm{PET}\{\varphi\},\] and so the inductive hypothesis gives \[\barint_0^T \pi(\varphi(t,h)^{-1}\varphi(t+s,h))v\,\mathrm{d} t \longrightarrow Q_{s,h}v \quad\quad \hbox{as}\ T\longrightarrow\infty\] with $Q_{s,h}$ the orthoprojection onto $\rm{Fix}(\pi(\langle \rm{img}\,\varphi(\cdot,h)^{-1}\varphi(\cdot+s,h)\rangle ))$.
By Corollary~\ref{cor:generically-const-fpspace}, for every fixed $h$ we have \[\rm{Fix}(\pi(\langle \rm{img}\,\varphi(\cdot,h)^{-1}\varphi(\cdot+s,h)\rangle )) = \rm{Fix}(\pi(\langle \rm{img}\,\varphi(\cdot,h)^{-1}\varphi(\cdot+ \cdot,h)\rangle ))\] for Zariski generic $s$, and now since $\varphi(0,h) \equiv e$ this is equal to \[\rm{Fix}(\pi(\langle\rm{img}\, \varphi(\cdot,h)\rangle)).\] In particular, for every $h$ this equality must hold for Lebesgue-a.e. $s$, and thus our previous average over $s$ may be written instead as \[\barint_0^S Q_{s,h}v\,\mathrm{d} s = \barint_0^S P_hv\, \mathrm{d} s \equiv P_h v.\] This proves that $P_hv \neq 0$, as required. \nolinebreak\hspace{\stretch{1}}$\Box$
\section{A partially characteristic factor}\label{sec:char-factor}
Now fix the following assumptions for this section and the next: \begin{itemize} \item $G$ is an $s$-step connected and simply connected nilpotent Lie group; \item $\mathcal{F} = (\varphi_1,\varphi_2,\ldots,\varphi_k)$ is a tuple of polynomial maps $\mathbb{R}\times \mathbb{R}^r\longrightarrow G$ with $k\geq 2$ in which $\varphi_1$ is a pivot, such that $\varphi_i(0,\cdot) \equiv e$ for each $i$, and such that $G = \langle\rm{img}\, \varphi_1\cup\cdots\cup \rm{img}\,\varphi_k\rangle$ (otherwise we may simply replace $G$ with this smaller group); \item $(X_i,\Sigma_i,\mu_i,u_i)$ for $0 \leq i \leq k$ is a tuple of $G$-systems, and $\lambda$ is a joining of them; \item $A^\lambda_T$ for $T \in [0,\infty)$ is the family of averaging operators associated to the orbit of $\lambda$ under $(\varphi_1,\varphi_2,\ldots,\varphi_k)$ as in Theorem~\ref{thm:bigmain}, so note that these implicitly depend on $h$, the parameter in the argument of the $\varphi_i$ which is \emph{not} averaged. \end{itemize}
At the heart of the inductive proof of Theorem~\ref{thm:bigmain} lies a result promising that in order to study the functional averages $A^\lambda_T(f_1,f_2,\ldots,f_k)$, one may assume that one of the functions $f_i$ has some special additional structure (which we will see later enables a further reduction to the case of a simpler family of polynomial maps). This extra structure is captured by a simple adaptation of an important idea introduced in~\cite{FurWei96}, and which has been used extensively since (see, for instance,~\cite{HosKra05,Zie07,Aus--nonconv,Aus--lindeppleasant1}).
\begin{dfn}[Partially characteristic factor] In the above setting a factor $\Lambda \leq \Sigma_1$ is \textbf{partially characteristic} for the averages $A^\lambda_T$ if for any tuple of functions $f_i \in L^\infty(\mu_i)$ one has
\[\big\|A^\lambda_T(f_1,f_2,\ldots,f_k) - A^\lambda_T\big(\mathsf{E}(f_1\,|\,\Lambda),f_2,\ldots,f_k\big)\big\|_2 \longrightarrow 0\] as $T\longrightarrow\infty$ for Zariski generic $h$ (recalling that the operators $A^\lambda_T$ implicitly depend on $h \in \mathbb{R}^r$). \end{dfn}
\textbf{Remark}\quad The main difference between this definition and its predecessors in earlier papers is that here, in consonance with the statement of Theorem~\ref{thm:bigmain}, we require convergence only for Zariski generic $h$.
As stated, this definition allows the Zariski meagre set $F\subseteq \mathbb{R}^r$ containing those $h$ for which convergence fails to depend on $f_1$, $f_2$, \ldots, $f_k$. However, it is easily checked that for a given $h$, this convergence holds for all tuples of functions if one knows that it holds for tuples drawn from some $\|\cdot\|_2$-dense subsets of the unit balls of $L^\infty(\mu_i)$, $i=1,2,\ldots,k$. Since one can choose countable such subsets, we deduce that there is a countable intersection of Zariski residual subsets of $\mathbb{R}^r$ (which is therefore still Zariski residual) on which the above convergence holds for all tuples of functions. \nolinebreak\hspace{\stretch{1}}$\lhd$
As in many of the earlier works cited above, the first step towards proving the convergence of $A^\lambda_T(f_1,\ldots,f_k)$ will be to identify a partially characteristic factor with some useful structure. However, a new twist appears in the present setting: here we must first pass from $G$-systems to actions of some covering group of $G$.
To be precise, let \[\t{\varphi}_1:(t,h)\mapsto (\varphi_1(t,h),\ldots,\varphi_k(t,h)),\] let \[\t{\varphi}_i:(t,h)\mapsto (\varphi_i(t,h),\ldots,\varphi_i(t,h)),\quad \hbox{for}\ i=2,3,\ldots,k,\] (notice the subscripts in different coordinates), and let \[\t{G} := \langle \rm{img}\,\t{\varphi}_1\ \cup\ \rm{img}\,\t{\varphi}_2\ \cup\ \cdots\ \cup\ \rm{img}\,\t{\varphi}_k\rangle \leq G^{k+1}.\] Let $q:\t{G} \longrightarrow G$ be the restriction to $\t{G}$ of the projection $G^k\longrightarrow G$ onto the first coordinate. Then $q$ intertwines each $\t{\varphi}_i$ with $\varphi_i$ for $i\geq 1$ (because $\varphi_i$ appears in the first coordinate of $\t{\varphi}_i$ for every $i$).
It is easy to verify that $q(\t{G}) = G$. The group $\t{G}$ is connected, because each $\t{\varphi}_i(\cdot,h)$ passes through the origin for every $h$, and hence $\t{G} = \exp V$ for some Lie subalgebra $V \leq \mathfrak{g}^k$. The image of $V$ under the first coordinate projection is a Lie subalgebra $V_1 \leq \mathfrak{g}$, and since $G$ is simply connected it follows that $\exp V_1$ is a closed subgroup of $G$ which is contained in $q(\t{G})$. On the other hand it must contain $\rm{img}\, \varphi_i$ for every $i\leq k$, so in fact $q(\t{G}) = \exp V_1 = G$.
The next technical proposition lies at the heart of all that follows. It provides a partially characteristic factor of $\mathbf{X}_1 = (X_1,\Sigma_1,\mu_1,u_1)$ for the averages $A^\lambda_T$, but only at the cost of regarding instead the modified system $\mathbf{X}_1^{q(\cdot)}$. The need for this sleight of hand will become clear during the proof.
\begin{prop}\label{prop:vdC-appn} Assume that conclusions (1--3) of Theorem~\ref{thm:bigmain} have already been established for all polynomial families preceding $\mathcal{F}$ in the PET ordering, suppose that $\varphi_1$ is a pivot, and let \[\mathsf{C} := q_\ast\Big(\ \Big(\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_1\rangle}\vee\bigvee_{j=2}^k\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_j\t{\varphi}_1^{-1}\rangle}\Big)^\wedge\ \Big).\] (Recall the discussion following Definition~\ref{dfn:down-clos}.) Then for any systems $\mathbf{X}_i$, $i=0,1,\ldots,k$, the factor $\mathsf{C}\Sigma_1 \leq \Sigma_1$ is partially characteristic. \end{prop}
\textbf{Remark}\quad Of course, once this proposition has been proved then it implies some conclusion even if $A^\lambda_T(f_1,\ldots,f_k) \,\,\not\!\!\longrightarrow 0$ for just one value of $h$, because by fixing that $h$ we may simply regard each $\varphi_i$ as a polynomial function of $t$ alone, and so apply the proposition with $r=0$. Indeed, we will use this trick a few times later. However, one must beware of the delicacy that the idempotent class appearing in this proposition may not be the same after one makes such a restriction, so nor will the $\sigma$-sigma algebra $\mathsf{C}\Sigma_1$ in general. Even the group extension $q:\t{G}\longrightarrow G$ itself will not be the same as above, but will depend on the choice of $h$. Since at some points later we will really need the above conclusion about the generic behaviour of the averages in $h$, it seems easiest to formulate it as here and then apply it with a restricted parameter space when convenient. \nolinebreak\hspace{\stretch{1}}$\lhd$
\textbf{Proof}\quad Since any $f_1$ may be decomposed as
\[\mathsf{E}_\mu(f_1\,|\,\mathsf{C}\Sigma) + \big(f_1 - \mathsf{E}_\mu(f_1\,|\,\mathsf{C}\Sigma)\big)\]
and the operator $A^\lambda_T$ is multilinear, it is enough to prove that if $\mathsf{E}_\mu(f_1\,|\,\mathsf{C}\Sigma) = 0$ then for any $f_2$, \ldots, $f_k$ one has
\[\|A^\lambda_T(f_1,f_2,\ldots,f_k)\|_2\longrightarrow 0\] as $T\longrightarrow\infty$ for Zariski generic $h$. Contrapositively, this is equivalent to showing that if the set
\[E := \{h\in \mathbb{R}^r:\ \|A^\lambda_T(f_1,f_2,\ldots,f_k)\|_2 \,\,\not\!\!\longrightarrow 0\ \hbox{as}\ T\longrightarrow\infty\}\]
is not Zariski meagre then $\mathsf{E}_\mu(f_1\,|\,\mathsf{C}\Sigma)\neq 0$. Henceforth we assume that $E$ is not Zariski meagre.
Furthermore, in view of Lemma~\ref{lem:corn-descent}, it now suffices to find an extension of spaces $\pi:(\t{X},\t{\Sigma},\t{\mu})\longrightarrow (X_1,\Sigma_1,\mu_1)$ and an action $\t{u}:\t{G}\actson (\t{X},\t{\Sigma},\t{\mu})$ such that $\pi\circ \t{u} = u^{q(\cdot)}$ and
\[\mathsf{E}(f_1\circ \pi\,|\,\Lambda) \neq 0,\] where \[\Lambda := \t{\Sigma}^{\langle \rm{img}\,\t{\varphi}_1\rangle}\vee \bigvee_{i=2}^k \t{\Sigma}^{\langle \rm{img}\,\t{\varphi}_1\cdot \t{\varphi}_i^{-1}\rangle}.\] This is the point at which we have made use of the general properties of idempotent classes. This implication will follow in two steps: applying the van der Corput estimate (Lemma~\ref{lem:vdC}), and interpreting what it tells us.
\quad\textbf{Step 1}\quad Letting \[g_{t,h} := M^\lambda\big(f_1\circ u_1^{\varphi_1(t,h)},f_2\circ u_2^{\varphi_2(t,h)},\ldots,f_k\circ u_k^{\varphi_k(t,h)}\big),\] the van der Corput estimate implies that for $h \in E$ one also has \[\barint_0^S\barint_0^T\int_{X_0}g_{t+s,h}g_{t,h}\,\mathrm{d}\mu_0\,\mathrm{d} t\,\mathrm{d} s \,\,\not\!\!\longrightarrow 0\] as $T\longrightarrow\infty$ and then $S\longrightarrow\infty$.
For each $s$, by Lemma~\ref{lem:l_1l} we may re-write the two inner integrals here as \begin{multline*} \barint_0^T\int_{X_1^2\times \cdots \times X_k^2} (f_1\circ u_1^{\varphi_1(t,h)})\otimes (f_1\circ u_1^{\varphi_1(s,h)\psi_1(t,s,h)}) \otimes\\ \quad\quad\quad\quad\quad\quad\quad\quad \cdots \otimes (f_k \circ u_k^{\varphi_k(t,h)})\otimes (f_k\circ u_k^{\varphi_k(s,h)\psi_k(t,s,h)})\,\mathrm{d}(\lambda\otimes_0 \lambda)\,\mathrm{d} t, \end{multline*} where \[\psi_i(t,s,h) := \varphi_i(s,h)^{-1}\varphi_i(t+s,h) \quad\quad \hbox{for each}\ i = 1,2,\ldots,k,\] so $\psi_i:\mathbb{R}\times \mathbb{R}\times \mathbb{R}^r \longrightarrow G$ is a polynomial map with the property that $\psi_i(0,\cdot,\cdot)\equiv e$.
Since $\lambda\otimes_0\lambda$ is a joining of two duplicates of each of the $G$-systems $(X_i,\Sigma_i,\mu_i,u_i)$ for $1 \leq i \leq k$, it is invariant under the diagonal transformations $u_\Delta^{\varphi_1(t,h)^{-1}}$. Applying this within the above integral shows that it is equal to \begin{multline*} \barint_0^T\int_{X_1^2\times \cdots \times X_k^2} f_1\otimes (f_1\circ u_1^{\varphi_1(s,h)\psi'_1(t,s,h)}) \otimes\\ \quad\quad\quad\quad\quad\quad\quad\quad \cdots \otimes (f_k \circ u_k^{\varphi'_k(t,h)})\otimes (f_k\circ u_k^{\varphi_k(s,h)\psi'_k(t,s,h)})\,\mathrm{d}(\lambda\otimes_0 \lambda)\,\mathrm{d} t \end{multline*} with \begin{eqnarray*} \psi'_i(t,s,h) &:=& \psi_i(t,s,h)\varphi_1(t,h)^{-1}\quad\hbox{for}\ i\geq 1\ \hbox{and}\\ \varphi'_i(t,h) &:=& \varphi_i(t,h)\varphi_1(t,h)^{-1}\quad\hbox{for}\ i\geq 2. \end{eqnarray*} We recognize these as comprising the $1^{\rm{st}}$ derived family of $\mathcal{F}$, which by Lemma~\ref{lem:PET-calcns} precedes $\mathcal{F}$ in the PET ordering because $\varphi_1$ was a pivot. Let \[\vec{\psi}:(t,s,h) \mapsto (e,\psi'_1(t,s,h),\varphi'_2(t,h),\psi'_2(t,s,h),\cdots,\varphi_k'(t,h),\psi_k'(t,s,h)).\]
By the inductive hypothesis, for every $h \in \mathbb{R}^r$ there are a Zariski residual set $F_h \subseteq \mathbb{R}$ and a joining $\theta^h$ on $X_1^2 \times X_2^2\times\cdots \times X_k^2$ invariant under \[\langle G^{\Delta 2k} \cup \rm{img}\, \vec{\psi}(\cdot,\cdot,h)\rangle\] such that for all $s \in F_h$ the above integral tends to \[\int_{X_1^2\times \cdots \times X_k^2} f_1\otimes (f_1\circ u_1^{\varphi_1(s,h)}) \otimes \cdots \otimes f_k \otimes (f_k\circ u_k^{\varphi_k(s,h)})\,\mathrm{d}\theta^h\] as $T \longrightarrow \infty$. Moreover these $\theta^h$ are equal to one fixed joining $\theta$ on a Zariski residual set of $h$, so that this $\theta$ must in fact be invariant under $\langle G^{\Delta 2k} \cup \rm{img}\, \vec{\psi} \rangle$.
Since the Zariski residual set $F_h$ has full Lebesgue measure, for each $h$ our previous average over $s$ may now be replaced by \[\barint_0^S\int_{X_1^2\times \cdots \times X_k^2} f_1\otimes (f_1\circ u_1^{\varphi_1(s,h)}) \otimes \cdots \otimes f_k \otimes (f_k\circ u_k^{\varphi_k(s,h)})\,\mathrm{d}\theta^h\,\mathrm{d} s,\] implying that for $h \in E$ this also does not vanish as $S \longrightarrow \infty$.
Next, one has \begin{eqnarray*} &&(\varphi_1(s,h),\varphi_1(s,h),\ldots,\varphi_k(s,h),\varphi_k(s,h))\\ &&= (e,e,\ldots,\varphi_k(s,h)\varphi_1(s,h)^{-1},\varphi_k(s,h)\varphi_1(s,h)^{-1})\\ &&\quad\quad\quad\quad\quad\quad\quad \cdot (\varphi_1(s,h),\varphi_1(s,h),\ldots,\varphi_1(s,h),\varphi_1(s,h))\\ &&= \vec{\psi}(s,0,h)\cdot (\varphi_1(s,h),\varphi_1(s,h),\ldots,\varphi_1(s,h),\varphi_1(s,h))\\ &&\in \langle G^{\Delta 2k} \cup \rm{img}\, \vec{\psi}(\cdot,\cdot,h)\rangle \end{eqnarray*} for every $s$, and so each joining $\theta^h$ is already invariant under the new off-diagonal polynomial flow \[\xi(\cdot,h):s \mapsto (\varphi_1(s,h),\varphi_1(s,h),\ldots,\varphi_k(s,h),\varphi_k(s,h)).\]
Since we may re-write the above average as \[\barint_0^S\int_{X_1^2\times \cdots \times X_k^2} (f_1\otimes 1\otimes \cdots \otimes f_k\otimes 1)\cdot \big((1\otimes f_1\otimes \cdots \otimes 1 \otimes f_k)\circ u_\times^{\xi(s,h)}\big)\,\mathrm{d}\theta^h\,\mathrm{d} s,\] by the base case Proposition~\ref{prop:k=1} it must converge to
\[\int_{X_1^2\times \cdots \times X_k^2} (f_1\otimes 1\otimes \cdots \otimes f_k\otimes 1)\cdot \mathsf{E}(1\otimes f_1\otimes \cdots \otimes 1 \otimes f_k\,|\,\Sigma_\times^{\langle \rm{img}\, \xi(\cdot,h)\rangle})\,\mathrm{d}\theta^h\] as $S\longrightarrow\infty$, where $\Sigma_\times := \Sigma_1^{\otimes 2}\otimes \cdots \otimes \Sigma_k^{\otimes 2}$ and the conditional expectation here is with respect to $\theta^h$.
Therefore this last integral is nonzero for every $h \in E$. Since the sets \[\{h:\ \theta^h \neq \theta\}\quad\quad \hbox{and}\quad\quad \{h:\ \Sigma_\times^{\langle \rm{img}\, \xi(\cdot,h)\rangle} \neq \Sigma_\times^{\langle \rm{img}\, \xi\rangle}\ \hbox{up to $\theta$-negligible sets}\}\] both \emph{are} Zariski meagre (the latter by Corollary~\ref{cor:generically-const-fpspace}), their union cannot contain $E$, and so any value $h \in E$ that is not in either of these meagre sets witnesses that
\[\int_{X_1^2\times \cdots \times X_k^2} (f_1\otimes 1\otimes \cdots \otimes f_k\otimes 1)\cdot \mathsf{E}(1\otimes f_1\otimes \cdots \otimes 1 \otimes f_k\,|\,\Sigma_\times^{\langle \rm{img}\, \xi \rangle})\,\mathrm{d}\theta \neq 0.\]
\quad\textbf{Step 2}\quad Now set \[(\t{X},\t{\Sigma},\t{\mu}) := \Big(\prod_{i=1}^kX_i^2,\bigotimes_{i=1}^k\Sigma_i^{\otimes 2},\theta\Big)\] and let $\pi:\t{X}\longrightarrow X_1$ be the coordinate projection onto the first copy of $X_1$. Observe that the polynomial map $\xi$ defined in Step 1 is simply a copy of $\t{\varphi}_1$ in which each coordinate has been duplicated. Define $q_1:\t{G}\longrightarrow G^{2k}$ to be the restriction to $\t{G}$ of the coordinate-duplicating map \[(g_1,g_2,\ldots,g_k)\mapsto (g_1,g_1,g_2,g_2,\ldots,g_k,g_k).\] Composing $q_1$ with the Cartesian product action $u_\times$ of $G^{2k}$ now gives an action $\t{u}$ of $\t{G}$ on $(\t{X},\t{\Sigma},\t{\mu})$, since we have already deduced from our inductive hypotheses that $\t{\mu} = \theta$ is invariant under $u_\Delta$ (and hence the image of $q_1\circ \t{\varphi}_i$ for each $i\geq 2$) and also under $\langle\rm{img}\, \xi\rangle$ (which is the image of $q_1\circ\t{\varphi}_1$).
On the first coordinate in $\prod_{i=1}^kX_i^2$, the transformation $\t{u}^g$ simply agrees with $u^g$ for any $g \in \langle\rm{img}\,\t{\varphi}_2\cup\cdots\cup \rm{img}\,\t{\varphi}_k\rangle$. On the other hand, \[\pi\circ \t{u}^{\t{\varphi}_1(t,h)} \stackrel{\rm{def}}{=} \pi\circ (u_1^{\varphi_1(t,h)}\times u_1^{\varphi_1(t,h)}\times \cdots \times u_k^{\varphi_k(t,h)}\times u_k^{\varphi_k(t,h)}) = u_1^{\varphi_1(t,h)}.\] Since these cases together generate the whole of $\t{G}$, it follows that $\pi\circ \t{u}^{\t{g}} = u^{q(\t{g})}$ for all $\t{g} \in \t{G}$, where $q:\t{G}\longrightarrow G$ is the covering homomorphism constructed previously.
Finally, an inspection of the action $\t{u}$ on the other coordinates of $\t{X}$ shows that \begin{itemize} \item for each $i\in \{2,3,\ldots,k\}$ the transformations $\t{u}^{\t{\varphi}_1(t,h)}$ and $\t{u}^{\t{\varphi}_i(t,h)}$ agree on the first coordinate copy of $X_i$, and
\item the function $\mathsf{E}(1\otimes f_1\otimes \cdots \otimes 1 \otimes f_k\,|\,\Sigma_\times^{\langle \rm{img}\, \xi \rangle})$ is invariant under the $\t{u}$-action of $\langle\rm{img}\,\t{\varphi}_1\rangle$. \end{itemize}
Therefore the non-vanishing of the integral at the end of step 1 asserts that $f_1\circ\pi$ has a non-zero inner product with a function that is manifestly measurable with respect to a system in the class \[\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_1\rangle}\vee\bigvee_{j=2}^k\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_j\t{\varphi}_1^{-1}\rangle},\]
and hence $\mathsf{E}(f_1\circ\pi\,|\,\mathsf{C}\Sigma_1)\neq 0$, as required. \nolinebreak\hspace{\stretch{1}}$\Box$
\textbf{Remarks}\quad\textbf{1.}\quad The above proof makes clear the need to extend the modified system $\mathbf{X}_1^{q(\cdot)}$, rather than $\mathbf{X}_1$ itself. We constructed our extension from some joining on $X_1^2\times \cdots\times X_k^2$ through the coordinate projection onto $X_1$, and in order to derive the desired nonzero conditional expectation for it we needed the polynomial trajectory of transformations $u_1^{\varphi_1(t,h)}$ downstairs to lift to the trajectory \[u_1^{\varphi_1(t,h)}\times u_1^{\varphi_1(t,h)} \times u_2^{\varphi_2(t,h)}\times u_2^{\varphi_2(t,h)}\times \cdots \times u_k^{\varphi_k(t,h)} \times u_k^{\varphi_k(t,h)}.\] The new map $\t{\varphi}_1$ may not be a PET-minimal member of $(\t{\varphi}_1,\ldots,\t{\varphi}_k)$, and it also may not share its leading term with any of the lifts $\t{\varphi}_i$ for $i\geq 2$, even if $\varphi_1$ downstairs does have some leading terms in common with the other $\varphi_i$. Thus in order to write these $\t{\varphi}_i$ as genuine lifts of the $\varphi_i$ we must first split the group $G$ apart slightly in order to separate these leading terms. Happily, the problem itself gives us a natural way to do this: the lifted polynomial mapping $\t{\varphi}_1$ is suitably `separated' from $\t{\varphi}_i$, $i\geq 2$, inside the Cartesian product $G^k$, so we have simply taken $\t{G}$ to be the closed subgroup of $G^k$ generated by these lifted mappings and composed our actions with the quotient map $q:\t{G}\longrightarrow G$.
\quad\textbf{2.}\quad If a factor $\Lambda \leq \Sigma_1$ is partially characteristic and we assume that the limits $\lambda^h = \lim_{T\longrightarrow\infty}\lambda^h_T$ exist, then considering the integral formula \[\int_{\prod_i X_i}f_0\otimes f_1\otimes \cdots \otimes f_k\,\mathrm{d}\lambda^h_T = \int_{X_0} f_0 \cdot A^\lambda_T(f_1,\ldots,f_k)\,\mathrm{d}\mu_0\] shows that for Zariski generic $h$ the coordinate projection $\prod_i X_i\longrightarrow X_1$ is relatively independent under $\lambda^h$ over its further factor generated by $\Lambda \leq \Sigma_1$. Thus, knowledge of a non-trivial partially characteristic factor gives some structural information about the limit joining.
In particular, consider a case in which $q$ is an isomorphism (so that the subgroups $\langle\rm{img}\, \varphi_1\rangle$ and $\langle \rm{img}\,\varphi_2 \cup \cdots \cup \rm{img}\, \varphi_k\rangle$ are already sufficiently `spread apart' in $G$), and suppose furthermore that the factor $\mathsf{C}\mathbf{X}_1$ can itself be expressed as a joining of systems $\mathbf{Z}_0 \in \mathsf{C}_0^{\langle \rm{img}\, \varphi_1\rangle}$ and $\mathbf{Z}_i \in \mathsf{C}_0^{\langle \rm{img}\,(\varphi_1\varphi_i^{-1})\rangle}$ for $i\geq 2$ (rather than just as a factor of such). Then we know that any limit joining $\lambda'$ must be relatively independent over the factor $\mathsf{C}\mathbf{X}_1$, and upon restricting ourselves to this factor we can express $\lambda'$ alternatively as a joining of \[\mathbf{X}_0,\mathbf{Z}_0,\mathbf{Z}_2,\ldots,\mathbf{Z}_k,\mathbf{X}_2,\ldots,\mathbf{X}_k.\] (In fact we will use a similar manipulation in the next section). Moreover, the assumption that $\mathsf{C}\mathbf{X}_1$ itself be a joining is not terribly restrictive, since an arbitrary system $\mathbf{X}_1$ always has an extension for which this is true (by using the machinery of `$\mathsf{C}$-sated' extensions, as developed in Chapter 2 of~\cite{Aus--thesis}).
It would be interesting to know whether further use of the ideas behind Proposition~\ref{prop:vdC-appn} could give a more complete picture of the possible structure of $\lambda'$. This would presumably involve repeated assertions of relative independence over increasingly `small' factors of the original system, on which increasingly large subgroups of $G$ act trivially. Such a picture does emerge in the study of the linear multiple averages constructed from a tuple of $\mathbb{Z}^d$-actions (see Chapter 4 of~\cite{Aus--thesis}), but in the present setting the need to keep track of a large family of different subgroups of $G$ may make the resulting description more obscure.
Even without a manageable description, this kind of result suggests that the limit joining $\lambda'$ of Theorem~\ref{thm:main} not only exists, but exhibits some rigidity over different possible initial joinings $\lambda$, since $\lambda'$ must exhibit these various instances of relative independence. Once again there is a superficial analogy here with the study of unipotent flows on homogeneous spaces, where a central theme is the classification of all possible invariant measures and the rigidity that such a classification entails; but once again, I do not know whether this points to any deeper connexions between that setting and ours. \nolinebreak\hspace{\stretch{1}}$\lhd$
\section{Proof of the main theorem}\label{sec:general-k}
We can now complete the proof of Theorem~\ref{thm:bigmain}. The general case is handled by a `spiral' PET induction on the tuple $(\varphi_1,\ldots,\varphi_k)$: for each such tuple we will show that \begin{eqnarray*} &&(\hbox{assertions (1,2,3) for $(\t{\varphi}_2,\ldots,\t{\varphi}_k)$})\\ &&\quad\quad\quad \Rightarrow (\hbox{assertion (1) for $(\varphi_1,\varphi_2,\ldots,\varphi_k)$})\\ &&\quad\quad\quad\quad\quad\quad \Rightarrow (\hbox{assertion (2) for $(\varphi_1,\varphi_2,\ldots,\varphi_k)$})\\ &&\quad\quad\quad\quad\quad\quad\quad\quad\quad \Rightarrow (\hbox{assertion (3) for $(\varphi_1,\varphi_2,\ldots,\varphi_k)$}), \end{eqnarray*} at which point the induction closes on itself.
We retain the assumptions from the start of Section~\ref{sec:char-factor}. Proposition~\ref{prop:vdC-appn} gives the purchase needed to complete our induction. Let the class $\mathsf{C}$ and group extension $q:\t{G}\longrightarrow G$ be as in the preceding section. In analysing the family of averages \[A^\lambda_T(f_1,f_2,\ldots,f_k),\] Proposition~\ref{prop:vdC-appn} allows us to assume that $f_1$ is measurable with respect to the factor $\mathsf{C}\Sigma_1$, or equivalently that $\mathbf{X}_1$ is itself a system with the property that the $\t{G}$-system $\mathbf{X}_1^{q(\cdot)}$ is a factor of a member of the class \[\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_1\rangle}\vee\bigvee_{j=2}^k\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_j\t{\varphi}_1^{-1}\rangle}.\] From this point a careful re-arrangement gives a reduction to the conclusions of Theorem~\ref{thm:main2} for the group $\t{G}$ and family $(\t{\varphi}_2,\ldots,\t{\varphi}_k)$, which is isomorphic to $(\varphi_2,\ldots,\varphi_k)$ and hence precedes $(\varphi_1,\varphi_2,\ldots,\varphi_k)$ in the PET ordering (see Lemma~\ref{lem:PET-calcns}). Note that this holds in spite of our ascent from $G$ to $\t{G}$, because we have now removed $\t{\varphi}_1$ from the picture altogether.
In order to set up the necessary re-arrangement, assume that $\mathbf{X}_1 = \mathsf{C}\mathbf{X}_1$. By the definition of $\mathsf{C}$ there are a system $\t{\mathbf{X}} \in \mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_1\rangle}\vee\bigvee_{j=2}^k\mathsf{C}_0^{\langle\rm{img}\,\t{\varphi}_j\t{\varphi}_1^{-1}\rangle}$ and a factor map $\pi:\t{\mathbf{X}}\longrightarrow \mathbf{X}_1^{q(\cdot)}$.
Now let $\t{\mathbf{X}}_1 := \t{\mathbf{X}}$ and $\t{\mathbf{X}}_i := \mathbf{X}_i^{q(\cdot)}$ for any $i \neq 1$, and choose any lift of $\lambda$ to a joining $\t{\lambda}$ of the $\t{\mathbf{X}}_i$ (for instance, one could use the relatively independent product over $\lambda$). For each $i \neq 1$ consider the factor $\t{\Sigma}_1^{\langle \rm{img}\,\t{\varphi}_1\t{\varphi}_i^{-1}\rangle} \leq \t{\Sigma}_1$, and let \[\zeta_i:\t{X}_1 \longrightarrow Z_i\] be a factor map of standard Borel $\t{G}$-space which generates this factor. These may be realized as factors of the joining $\t{\lambda}$ through the coordinate projection $\prod_i\t{X}_i\longrightarrow \t{X}_1$. Crucially, by enlarging each of the systems $\t{\mathbf{X}}_i$ for $i \neq 1$, we can arrange that under $\t{\lambda}$ each of these factor maps to $Z_i$ is also virtually measurable with respect to the $\t{X}_i$-coordinate, as well as the $\t{X}_1$-coordinate. To this end, for each $i\neq 1$ consider the composition \[\t{X}_0\times\cdots\times \t{X}_k\ \ \stackrel{\rm{coord.}\,\rm{proj.}}{\longrightarrow}\ \ \t{X}_1\times \t{X}_i\ \stackrel{\zeta_i\times \mathrm{id}}{\longrightarrow}\ Z_i\times \t{X}_i.\] Since this composition respects the $\t{G}$-actions, it defines a joining of $\mathbf{Z}_i$ with $\t{\mathbf{X}}_i$, which we denote by $\mathbf{Y}_i = (Y_i,\Phi_i,\nu_i,v_i)$. Let $\eta_i:Y_i\longrightarrow \t{X}_i$ be the second coordinate projection.
Thus we have constructed a collection of factorizations \begin{center} $\phantom{i}$\xymatrix{(\t{X}_0\times \cdots\times \t{X}_k,\t{\Sigma}_0\otimes \cdots \otimes \t{\Sigma}_k,\t{\lambda},\t{u}_\Delta)\ar[dr]\ar^-{\rm{coord.}\,\rm{proj.}}[rr]&& (\t{X}_i,\t{\Sigma}_i,\t{\mu}_i,\t{u}_i)\\ & (Y_i,\Phi_i,\nu_i,v_i)\ar[ur] } \end{center} for each $i\in \{0,2,3,\ldots,k\}$. Putting these together with the coordinate projection $\t{X}_0\times \cdots\times \t{X}_k \longrightarrow \t{X}_1$ therefore gives a measure-theoretic isomorphism \begin{multline*} (\t{X}_0\times \cdots\times \t{X}_k,\t{\Sigma}_0\otimes \cdots \otimes \t{\Sigma}_k,\t{\lambda},\t{u}_\Delta)\\ \stackrel{\cong}{\longrightarrow}(Y_0\times \t{X}_1\times Y_2\times \cdots \times Y_k,\Phi_0\otimes \t{\Sigma}_1\otimes \Phi_2\otimes \cdots \otimes \Phi_k,\theta,v_\Delta) \end{multline*} for some joining $\theta$ of $\t{G}$-systems.
In addition, this construction guarantees that the factor maps \[\t{X}_0\times \cdots\times \t{X}_k\ \ \stackrel{\rm{coord.}\ \rm{proj.}}{\longrightarrow}\ \ \t{X}_1\stackrel{\zeta_i}{\longrightarrow} Z_i\] and \[\t{X}_0\times \cdots\times \t{X}_k \longrightarrow Y_i\ \ \stackrel{\rm{coord.}\,\rm{proj.}}{\longrightarrow}\ \ Z_i\] agree up to $\lambda'$-negligible sets. Therefore any $h \in L^\infty(\t{\mu}_1)$ which is measurable with respect to $\t{\Sigma}_1^{\langle \rm{img}\,\t{\varphi}_1\t{\varphi}_i^{-1}\rangle}$ (equivalently, which is invariant under $\langle \rm{img}\,\t{\varphi}_1\t{\varphi}_i^{-1}\rangle$, with the convention that $\t{\varphi}_0 \equiv e$) has an essentially unique counterpart $h' \in L^\infty(\nu_i)$ which lifts to the same function on $\t{X}_0\times \cdots\times \t{X}_k$ up to $\t{\lambda}$-negligible sets, and which is invariant under the same subgroup of $\t{G}$.
\begin{lem}\label{lem:re-arrange} In the situation described above, consider the averaging operators associated to the lifted family of polynomial maps $\t{\varphi}_i:\mathbb{R}\times \mathbb{R}^r\longrightarrow \t{G}$. Suppose that $f_1 \in L^\infty(\t{\mu}_1)$ is a function of the special form \[g\cdot h_2\cdot \cdots \cdot h_k,\] where $g \in L^\infty(\t{\mu}_1)$ is invariant under $\langle \rm{img}\, \t{\varphi}_1 \rangle$ and each $h_i$ is invariant under $\langle \rm{img}\, \t{\varphi}_1 \t{\varphi}_i^{-1} \rangle$. Then for any other functions $f_i \in L^\infty(\t{\mu}_i)$ for $i \neq 1$ one has
\[A^{\t{\lambda}}_T(f_1,f_2,\ldots,f_k) = \mathsf{E}\big(g'\cdot A^\theta_T(1,\,h'_2(f_2\circ\eta_2),\,\ldots,\,h'_k(f_k\circ\eta_k))\,\big|\,\eta_0\big)\] (recalling that $A^{\t{\lambda}}_T$ has range in $L^\infty(\t{\mu}_0)$, while $A^\theta_T$ has range in $L^\infty(\t{\nu}_0)$), where $g'$ and $h'_i$ are the counterparts of $g$ and $h_i$ introduced above. \end{lem}
\textbf{Proof}\quad By the definition of $A^{\t{\lambda}}_T$ and $A^\theta_T$ this follows from the analogous calculation at the level of joinings. For the joinings $\t{\lambda}$ and $\theta$, the above isomorphism gives \begin{eqnarray*} &&\int_0^T\int_{\prod_i\t{X}_i} f_0\otimes (f_1\circ \t{u}_1^{\t{\varphi}_1(t,h)})\otimes \cdots\otimes (f_k\circ \t{u}_k^{\t{\varphi}_k(t,h)})\,\mathrm{d}\t{\lambda}\,\mathrm{d} t\\ &&= \barint_0^T\int_{Y_0\times \t{X}_1\times Y_2\times \cdots\times Y_k} (f_0\circ \eta_0)\otimes (f_1\circ \t{u}_1^{\t{\varphi}_1(t,h)})\\ &&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \otimes (f_2\circ \eta_2\circ v_2^{\t{\varphi}_2(t,h)})\otimes\cdots\otimes (f_k\circ\eta_k\circ v_k^{\t{\varphi}_k(t,h)})\,\mathrm{d}\theta\,\mathrm{d} t. \end{eqnarray*}
On the other hand, our assumptions on the structure of $f_1$ imply that \[g\circ \t{u}_1^{\t{\varphi}_1(t,h)} = g\quad \hbox{and}\quad h_i\circ\t{u}_1^{\t{\varphi}_1(t,h)} = h_i\circ\t{u}_1^{\t{\varphi}_i(t,h)},\ i=2,3,\ldots,k,\] for all $(t,h)$. Also, the counterparts $g' \in L^\infty(\nu_0)$ and $h'_i \in L^\infty(\nu_i)$ for $i\geq 2$ satisfy \[g'(y_0) = g(\t{x}_1)\quad \hbox{and}\quad h'_i(y_i) = h_i(\t{x}_1)\] for $\theta$-almost every $(y_0,\t{x}_1,y_2,\ldots,y_k)$. The above integral with respect to $\theta$ may therefore be re-written as \begin{multline*} \barint_0^T\int_{Y_0\times \t{X}_1\times Y_2\times \cdots\times Y_k} (g'(f_0\circ \eta_0))\otimes 1_{\t{X}_1} \otimes \big((h'_2(f_2\circ \eta_2))\circ v_2^{\t{\varphi}_2(t,h)}\big) \otimes\\ \cdots\otimes \big((h'_k(f_k\circ\eta_k))\circ v_k^{\t{\varphi}_k(t,h)}\big)\,\mathrm{d}\theta\,\mathrm{d} t. \end{multline*} Regarded as a linear functional applied to $f_0$, this is integration against
\[\mathsf{E}\big(g\cdot A^\theta_T(1,\,h_2(f_2\circ\eta_2),\,\ldots,\,h_k(f_k\circ\eta_k))\,\big|\,\eta_0\big),\] as required. \nolinebreak\hspace{\stretch{1}}$\Box$
Of course, the importance of the above lemma is that on the right-hand side there is no non-trivial function in the first entry under $A^\theta_T$. This now leads quite smoothly to a completion of our spiral induction.
\textbf{Proof of Theorem~\ref{thm:bigmain}}\quad In case $k=1$, $M^\lambda$ extends to a bounded operator $L^2(\mu_1)\longrightarrow L^2(\mu_0)$ and the desired assertions of convergence and genericity become simply that (i) the average \[M^\lambda\Big(\barint_0^T u(\varphi_1(t,h)^{-1})^\ast f_1\,\mathrm{d} t\Big)\] converges to $M^\lambda P_hf$ with $P_h$ the conditional expectation onto $\Sigma_1^{\langle \rm{img}\, \varphi(\cdot,h)\rangle}$, and (ii) this is generically equal to $M^\lambda P f$ with $P$ the conditional expectation onto $\Sigma_1^{\langle \rm{img}\, \varphi\rangle}$. Both of these assertions follow at once from Proposition~\ref{prop:k=1}.
It remains to handle the inductive step in case $k\geq 2$. Assume that properties (1--3) have already been proved for all tuples preceding $\mathcal{F}$ in the PET ordering. We will deduce those properties for $\mathcal{F}$ in order.
\quad\textbf{Property (1)}\quad In this step, by fixing one $h$ throughout the proof and replacing $G$ with its subgroup $\langle\rm{img}\,\varphi_1(\cdot,h) \cup \cdots \cup \rm{img}\, \varphi_k(\cdot,h)\rangle$ if necessary, we may assume that each $\varphi_i$ is a function of $t$ alone, and hence that $r=0$. With this agreed, let $q:\t{G}\longrightarrow G$ and the class $\mathsf{C}$ be constructed as before using this new group and tuple of maps.
By re-ordering $\mathcal{F}$ if necessary we may also assume $\varphi_1$ is a pivot. In this case, by Proposition~\ref{prop:vdC-appn} it suffices to show that the averages $A^\lambda_T(f_1,\ldots,f_k)$ converge when $f_1$ is $\mathsf{C}\Sigma_1$-measurable.
Construct the $\t{G}$-systems $\t{\mathbf{X}}_i$ and $\mathbf{Y}_i$ as above. Lifting $f_1$ to $f_1\circ\pi\in L^\infty(\t{\mu}_1)$, on this larger system we know that it can be approximated in $L^2(\t{\mu}_1)$ by finite sums of the form \[\sum_p g_p\cdot h_{2,p}\cdot \cdots h_{k,p},\] where $g_p \in L^\infty(\t{\mu}_1)$ is invariant under $\langle \rm{img}\, \t{\varphi}_1 \rangle$ and each $h_{i,p} \in L^\infty(\t{\mu}_i)$ is invariant under $\langle \rm{img}\, \t{\varphi}_1 \t{\varphi}_i^{-1} \rangle$.
Appealing first to the uniform continuity of the operators $A^{\t{\lambda}}_T$ in each entry separately, and then to the linearity of these operators in the first entry, it therefore suffices to prove convergence of the averages \[A^{\t{\lambda}}_T(f_1,\ldots,f_k)\] whenever $f_1$ is one such product function. However, this case lands within the hypothesis of the preceding lemma, which converts these into averages of the form
\[\mathsf{E}\big(g\cdot A^\theta_T(1,\,h_2(f_2\circ\eta_2),\,\ldots,\,h_k(f_k\circ\eta_k))\,\big|\,\eta_0\big).\] The norm convergence of these now follows from the norm convergence of the averages $A^\theta_T(1,\,h_2(f_2\circ\eta_2),\,\ldots,\,h_k(f_k\circ\eta_k))$, which is promised by the inductive hypothesis applied to the simpler polynomial family $(\varphi_2,\ldots,\varphi_k)$.
\quad\textbf{Property (2)}\quad Of course, property (1) already implies convergence of the averaged couplings \[\barint_0^T (\mathrm{id}_{X_0}\times u_1^{\varphi_i(t,h)}\times u_2^{\varphi_2(t,h)} \times \cdots \times u_k^{\varphi_k(t,h)})_\ast\lambda\,\mathrm{d} t\] as $T\longrightarrow\infty$ to some limit $\lambda^h$. We must next show that for any tuple of functions $f_i \in L^\infty(\mu_i)$, the $\lambda^h$-integrals are the same whether we integrate $f_0\otimes f_1 \otimes \cdots \otimes f_k$ or $(f_0\circ u^{g_0})\otimes (f_1\circ u^{g_1}) \otimes \cdots \otimes (f_k\circ u^{g_k})$ for any \[(g_0,g_1,\ldots,g_k) \in G^{\Delta (k+1)}\quad\hbox{or}\quad (g_0,g_1,\ldots,g_k) \in\langle \rm{img}\, \vec{\varphi}(\cdot,h) \rangle.\] This will give the invariance of $\lambda^h$ under the $u_\times$ action of $\langle G^{\Delta (k+1)}\cup\rm{img}\, \vec{\varphi}(\cdot,h) \rangle$.
As in the case of property (1), in this step we can fix a choice of $h$ and replace $G$ with the subgroup $G^h := \langle \rm{img}\,\varphi_1(\cdot,h)\cup\cdots \cup \rm{img}\, \varphi_k(\cdot,h)\rangle$ if necessary, so that we may assume $r = 0$.
Since
\[\mathsf{E}(f_1\,|\,\mathsf{C}\Sigma_1)\circ u_1^g = \mathsf{E}(f_1\circ u_1^g\,|\,\mathsf{C}\Sigma_1)\] for any $g$, by Proposition~\ref{prop:vdC-appn} it again suffices to treat the case when $f_1$ is $(\mathsf{C}\Sigma_1)$-measurable. Now we may consider again the previous construction of the $\t{G}$-systems $\t{\mathbf{X}}_i$ and $\mathbf{Y}_i$ and their joinigs $\lambda'$ and $\theta$. In these terms we wish to prove that \begin{multline*} \int_{\prod_i\t{X}_i}f_0\otimes f_1 \otimes \cdots \otimes f_k\,\mathrm{d}\t{\lambda}'\\ = \int_{\prod_i\t{X}_i}(f_0\circ \t{u}_0^{\t{g}_0})\otimes (f_1\circ \t{u}_1^{\t{g}_1}) \otimes \cdots \otimes (f_k\circ \t{u}_k^{\t{g}_k})\,\mathrm{d}\t{\lambda}' \end{multline*} for any tuple $f_i \in L^\infty(\t{\mu}_i)$ and any \[(\t{g}_0,\t{g}_1,\ldots,\t{g}_k) \in \t{G}^{\Delta (k+1)}\quad\hbox{or}\quad (\t{g}_0,\t{g}_1,\ldots,\t{g}_k) \in\langle \rm{img}\, \vec{\t{\varphi}}(\cdot) \rangle,\] where $\t{\lambda}'$ is the limit joining obtained by averaging $\t{\lambda}$.
Arguing again as for property (1), by continuity and multilinearity we may now assume that $f_1$ is of the special form $g\cdot h_2\cdot\cdots \cdot h_k$ assumed by Lemma~\ref{lem:re-arrange}, and so by that lemma it now suffices to prove that \begin{multline*} \int_{Y_0\times \t{X}_0\times Y_2\times \cdots\times Y_k} (g'(f_0\circ \eta_0))\otimes 1\otimes \cdots \otimes (h'_k(f_k\circ \eta_k))\,\mathrm{d}\theta'\\ = \int_{Y_0\times \t{X}_0\times Y_2\times \cdots\times Y_k} ((g'(f_0\circ \eta_0))\circ v_0^{\t{g}_0})\otimes 1\otimes \cdots \otimes ((h'_k(f_k\circ \eta_k))\circ v_k^{\t{g}_k})\,\mathrm{d}\theta', \end{multline*} where $\theta'$ is the limit joining obtained by averaging $\theta$. With this re-arrangement the coordinate in $\t{X}_1$ vanishes from the picture, and what remains is just an instance of property (2) for the simpler tuple of polynomial maps $(\t{\varphi}_2,\ldots,\t{\varphi}_k)$, which is known by induction.
\quad\textbf{Property (3)}\quad Lastly, we must show that there is a Zariski residual set $E \subseteq \mathbb{R}^r$ such that for any tuple of functions $f_i$ the limit \[\lim_{T\longrightarrow\infty}\int_{X_0}f_0\cdot A^\lambda_T(f_1,\ldots,f_k)\,\mathrm{d}\mu_0\] is the same for all $h \in E$, which will imply that the map $h\mapsto \lambda^h$ is Zariski generically constant (and hence, by property (2), that this generic value must be invariant under the whole of $\langle G^{\Delta (k+1)}\cup\rm{img}\, \vec{\varphi} \rangle$). In this step, of course, we may not restrict to a single value of $h$.
Clearly it suffices to prove this $h$-independence for functions $f_i$ drawn from countable ${\|\cdot\|}_2$-dense subsets of $L^\infty(\mu_i)$, and since a countable intersection of Zariski generic sets is Zariski generic we may therefore look for such a Zariski generic set for just a single tuple of functions $f_i$.
The full strength of Proposition~\ref{prop:vdC-appn} and our construction above now give a Zariski residual subset $E \subseteq \mathbb{R}^r$, extensions of $\t{G}$-systems $\pi:\t{\mathbf{X}}_i\longrightarrow \mathbf{X}_i^{q(\cdot)}$ and a joining $\t{\lambda}$ of $\t{G}$-systems such that \begin{multline*}
\int_{X_0\times \cdots\times X_k}f_0\otimes \cdots\otimes f_k\,\mathrm{d} \lambda^h\\ = \lim_{T\longrightarrow\infty}\int_{\t{X}_0}(f_0\circ \pi_0)\cdot A^{\t{\lambda}}_T(\mathsf{E}(f_1\circ\pi\,|\,\Lambda),f_2\circ\pi_2,\ldots,f_k\circ \pi_k)\,\mathrm{d}\t{\mu}_0 \end{multline*} for all $h \in E$, where now \[\Lambda := \t{\Sigma}_1^{\langle \rm{img}\, \t{\varphi}_1\rangle}\vee \bigvee_{i=2}^k \t{\Sigma}_1^{\langle \rm{img}\, \t{\varphi}_1\t{\varphi}_i^{-1}\rangle}.\] Clearly it suffices to show that the desired $h$-independence holds on some further Zariski residual subset of $E$, and now the same manipulations as above give a reduction of this to a proof that the limits \[\lim_{T\longrightarrow \infty}\int_{Y_0}(g_0'(f_0\circ \eta_0))\cdot A^\theta_T\big(1,h_2'(f_2\circ\eta_2),\ldots,h_k'(f_k\circ\eta_k)\big)\,\mathrm{d}\nu_0\] are independent of $h$ on some Zariski residual set, where $\theta$ and the $Y_i$ have been constructed from $\lambda$ and the $\t{X}_i$ as previously. The dependence on $h$ in this expression is all in the off-diagonal polynomial trajectory that appears in the average $A^\theta_T$. Once again, the fact that this limit is generically constant now follows from the inductive hypothesis applied to the family $(\t{\varphi}_2,\ldots,\t{\varphi}_k)$, and so the proof is complete. \nolinebreak\hspace{\stretch{1}}$\Box$
\section{Further questions}\label{sec:further-ques}
\subsection{Other questions in continuous time}
Theorems~\ref{thm:main} and~\ref{thm:main2} suggest many possible extensions involving different kinds of averaging, just as for any other equidistribution phenomenon. The following paragraphs contain a sample of these possibilities.
First, given another connected nilpotent group $G'$, one could ask more generally about polynomial maps $\varphi_i:G'\longrightarrow G$ and the resulting off-diagonal averages along a F\o lner sequence of subsets $F_N \subseteq G'$. Do these always converge as in our main theorems? This seems likely, and I suspect that the methods of proof above can provide significant insight into this question, but it may be tricky to set up the right generalization of PET induction.
A little more abstractly, the off-diagonal polynomial trajectory \[\{(\varphi_1(t),\varphi_2(t),\ldots,\varphi_k(t):\ t\in \mathbb{R}\}\] is a semi-algebraic subset of $G^k$ in the sense of real algebraic geometry (see, for instance, Bochnak, Coste and Roy~\cite{BocCosRoy98}). Could it be that convergence as in Theorems~\ref{thm:main} or~\ref{thm:main2} holds along the intersections of increasingly large balls with any semi-algebraic subset $V \subset G^k$, endowed with a suitable surface-area measure?
A more challenging question concerns the assumption that $G$ be nilpotent. Do Theorems~\ref{thm:main} or~\ref{thm:main2} still hold if we assume only that $G$ is an arbitrary connected and simply connected Lie group? This is probably too much to ask, but some progress may be possible, for instance, if each $\varphi_i$ has image lying within a unipotent subgroup of $G$. This seems a natural setting to investigate in view of Ratner's Theorems giving equidistribution and measure rigidity for unipotent flows on homogeneous spaces~\cite{Rat90-a,Rat90-b,Rat91-a,Rat91-b}, and Shah's extension of these results to averages over regular algebraic maps~\cite{Sha94}.
However, as remarked in the Introduction, the methods used to study homogeneous space flows are very different (and mostly much more delicate) from those explored in this paper. Shah's analysis of regular algebraic maps proceeds by first obtaining the invariance of a weak limit measure under some unipotent subgroup and then using the resulting structure promised by Ratner's Theorems, whereas it is an essential feature of our inductive proof of Theorem~\ref{thm:bigmain} that the cases of homomorphisms $\varphi_i$ and of more general polynomial maps must be treated together.
To illustrate more concretely some of the difficulties posed by non-nilpotent groups, consider the functional averages \[\barint_0^T (f_1\circ u_1^t)(f_2\circ u_2^t)\,\mathrm{d} t\] for a jointly measurable probability-preserving system $(X,\Sigma,\mu,u)$ for $G = \rm{SL}_2(\mathbb{R})$ and with $u_1,u_2:\mathbb{R}\longrightarrow \rm{SL}_2(\mathbb{R})$ parametrizing the upper- and lower-triangular subgroups respectively. (These averages are easily expressed in terms of the natural analog of Theorem~\ref{thm:main2}.) If we assume that these averages do not tend to $0$ for some choice of $f_1,f_2 \in L^\infty(\mu)$, then the van der Corput estimate and a re-arrangement give also \[\barint_0^S\barint_0^T \int_X f_1\cdot (f_1\circ u_1^s)\cdot ((f_2\cdot (f_2\circ u_2^s))\circ u_2^tu_1^{-t})\,\mathrm{d}\mu\,\mathrm{d} t\,\mathrm{d} s \,\,\not\!\!\longrightarrow 0\] as $T\longrightarrow\infty$ and then $S\longrightarrow \infty$. In order to use this, we need some information about the averages along the trajectory $t\mapsto u_2^tu_1^{-t}$ in $G$. This is certainly a polynomial map in the sense of real algebraic geometry, but not in the sense of Definition~\ref{dfn:poly}, so further differencing does not seem to lead to a simplification of the problem. I have not examined in detail what other arguments (for example, using the representation theory of $\rm{SL}_2(\mathbb{R})$) might be brought to bear here, since this is only a very special case: it simply serves to illustrate that the method of PET induction cannot be applied so na\"\i vely in this setting.
Finally, linked to the study of convergence and equidistribution is the problem of describing the limit joinings $\lambda'$. Some information on their possible structure is contained in the proof of Proposition~\ref{prop:vdC-appn} above, as remarked after that proposition, but it would be interesting to know whether they can be classified more precisely, possibly after extending each $\mathbf{X}_i$ to a suitably-sated extension. A discussion of related issues in the setting of $\mathbb{Z}^d$-actions can be found in~\cite{Aus--thesis}.
\subsection{Discrete actions}\label{subs:compare-discrete}
Most past interest in the kind of off-diagonal average appearing in Theorem~\ref{thm:main} has focused on actions of discrete groups. Suppose that $\Gamma$ is a discrete nilpotent group, $\varphi_1,\varphi_2,\ldots,\varphi_k:\mathbb{Z}\longrightarrow\Gamma$ are polynomial maps (according to the obvious relative of Definition~\ref{dfn:poly}), $\mathbf{X}_i = (X_i,\Sigma_i,\mu_i,T_i)$ are probability-preserving $\Gamma$-systems for $1 \leq i \leq k$ and $\lambda$ is a joining of the systems $\mathbf{X}_i$. Much recent work has been directed towards understanding whether the off-diagonal averages \[\frac{1}{N}\sum_{n=1}^N (T_1^{\varphi_1(n)}\times \cdots\times T_k^{\varphi_k(n)})_\ast\lambda\] converge to some limit joining as $N\longrightarrow\infty$, or whether the associated functional averages converge. Several partial results have appeared, and at the time of this writing Miguel Walsh has just settled the general case in his preprint~\cite{Wal11}.
Walsh's approach does not use heavy ergodic-theoretic machinery. It relies on reformulating the problem of norm convergence for the functional averages into a problem asking for some `quantitative' guarantee that one can find long intervals of times $N$ in which those averages are all close in $\|\cdot\|_2$. This new assertion can then be proved by a clever induction on the tuple of polynomial maps $(\varphi_1,\ldots,\varphi_k)$, which is apparently different from Bergelson's PET induction.
In making this reformulation, Walsh uses ideas that have some precedent in Tao's proof of convergence when $\Gamma = \mathbb{Z}^d$ and all the $\varphi_i$ are linear~(\cite{Tao08(nonconv)}). Some of these ideas lie outside more traditional ergodic-theoretic approaches to this class of questions (such as the present paper), and they have the consequence that very little can be gleaned about the structure of the limits (functions or joinings). Therefore it would still be of interest to see a proof that gives some additional information, similar to our Theorem~\ref{thm:bigmain} or to the earlier, even more precise results of~\cite{HosKra05} or~\cite{Zie07} in the case of discrete powers of a single transformation. We finish with an informal discussion of the difficulties that face any attempt to adapt the arguments of the preceding sections to the setting of discrete $\Gamma$.
The first and most obvious difficulty is that if these averaged couplings do converge to some limit $\lambda'$, it need not be invariant under the off-diagonal subgroup \[\langle \rm{img}\, (\varphi_1,\ldots,\varphi_k) \rangle \leq \Gamma^k.\]
Indeed, let $\Gamma = \mathbb{Z}$, let $\varphi_1 \equiv 0$ and $\varphi_2(n) := n^2$, and let $\mathbf{X}_1 = \mathbf{X}_2$ be the system given by the generator rotation on $\mathbb{Z}/4\mathbb{Z}$. Since all square numbers are congruent to either $0$ or $1 \!\!\mod 4$, it is easily computed that the limit obtained by averaging the diagonal joining $\lambda$ is simply \[\frac{1}{2}\lambda + \frac{1}{2}(\mathrm{id}\times T)_\ast\lambda,\] which is not $(\mathrm{id}\times T)$-invariant.
Of course, this is a trivial example, but it is not clear whether this kind of arithmetic system, appearing as a factor of more general systems $\mathbf{X}_i$, is the only possible obstruction to the desired extra invariance of the limit joining.
While this example bears only on the possible symmetries of the limit joining, in the continuous-time setting those symmetries play a crucial r\^ole in the proof of Proposition~\ref{prop:vdC-appn} above, and so the whole method of proof we have used in this paper may need substantial modification before it can give convergence results in the discrete-time world.
A second difficulty worth remarking is the absence of any useful replacement for the notion of Zariski genericity in the discrete-time setting. Of course, Corollary~\ref{cor:generically-const-fpspace} is still true for discrete group actions: the problem is that it tells us nothing, because these groups are themselves countable.
It might be worth exploring a more subtle appeal to the reasoning of Corollary~\ref{cor:rel-ind-over-common} in place of Corollary~\ref{cor:generically-const-fpspace}. The statement of Corollary~\ref{cor:rel-ind-over-common} is also still true for discrete groups provided the subgroups $H_1$ and $H_2$ are both normal in $\langle H_1\cup H_2\rangle$. One possibility might begin as follows. If $\mathfrak{H}_1$, $\mathfrak{H}_2$, \ldots, is a sequence of closed subspaces of a Hilbert space $\mathfrak{H}$, any two of which are relatively orthogonal over some common further subspace $\frak{K}$, and if in addition $x \in \mathfrak{H}$ is such that $\inf_n\|P_nx\| > 0$ with $P_n$ the orthoprojection onto $\mathfrak{H}_n$, then $x$ also has a nonzero projection onto $\frak{K}$ (for otherwise the $P_nx$ would be an infinite sequence of mutually orthogonal projections of a single vector, all of them large, contradicting Bessel's Inequality).
Structure like this has previously been identified within orthogonal representations of a finitely generated nilpotent group by Leibman~\cite{Lei00}. Using this reasoning, for example, one can show that if
\[\Gamma = \langle a,b\,|\,[a,b] =: c\ \hbox{is central}\rangle\] is the discrete Heisenberg group and $T:\Gamma\actson (X,\Sigma,\mu)$ is any action of it, then the $\sigma$-subalgebras \[\Sigma^{\langle a \rangle} := \{A \in \Sigma:\ \mu(T^aA\triangle A) = 0\}\] and $\Sigma^{\langle b\rangle}$ are relatively independent over the fully invariant factor $\Sigma^T$, even though in this discrete setting it can happen that $\Sigma^{\langle a\rangle} \neq \Sigma^{\langle a\rangle^\rm{n}}$ and $\Sigma^{\langle a\rangle}$ is not globally $T$-invariant. This follows because a judicious appeal to the discrete version of Corollary~\ref{cor:rel-ind-over-common} implies that the $\sigma$-algebras \[\Sigma^{b^k\langle a\rangle b^{-k}},\quad k\in\mathbb{Z},\] are all relatively independent over $\Sigma^{\langle a,c\rangle}$, where $\langle a,c\rangle$ \emph{is} normal in $\Gamma$. If now $f$ and $g$ are $T^a$- and $T^b$-invariant respectively, then applying $T^b$ gives
\[\int f \cdot \mathsf{E}(g\,|\,\Sigma^{\langle a\rangle})\, \mathrm{d}\mu = \int (f \cdot \mathsf{E}(g\,|\,\Sigma^{\langle a\rangle}))\circ T^{b^k}\, \mathrm{d}\mu = \int (f\circ T^{b^k})\cdot(\mathsf{E}(g\,|\,\Sigma^{b^{-k}\langle a\rangle b^k})\, \mathrm{d}\mu.\] Therefore the non-vanishing of this integral implies that $g$ actually has uniformly nonzero conditional expectation onto every $\Sigma^{b^k\langle a\rangle b^{-k}}$. Hence by the argument sketched above, it must actually have nonzero conditional expectation onto $\Sigma^{\langle a,c\rangle}$, and similarly $f$ must have nonzero conditional expectation onto $\Sigma^{\langle b, c\rangle}$. These two $\sigma$-algebras are now globally $T$-invariant and relatively independent over $\Sigma^T$, so putting this together shows that $\Sigma^{\langle a\rangle}$ and $\Sigma^{\langle b\rangle}$ are themselves relatively independent over $\Sigma^T$.
In order to use a similar idea to study off-diagonal or multiple averages, one might, for instance, try to prove a discrete analog of Proposition~\ref{prop:vdC-appn} according to which the characteristic factors $\Lambda_h$ obtained depending on $h$ are not mostly equal to each other, but are all relatively orthogonal over some common smaller $\sigma$-algebra $\Lambda'$. Then it might be possible to replace $\Lambda^h$ with $\Lambda'$ in subsequent arguments and gain more purchase on the asymptotic behaviour of our averages as a result. However, I do not have a precise statement to formulate based on this speculation.
\textbf{Acknowledgements}\quad This work was supported by a research fellowship from the Clay Mathematics Institute. Much of it was carried out during a visit to the Isaac Newton Institute for the Mathematical Sciences. \nolinebreak\hspace{\stretch{1}}$\lhd$
\appendix
\section{A continuous-time van der Corput estimate}\label{app:cts-time-vdC}
We recall here for completeness a continuous-time variant of the classical van der Corput estimate for bounded Hilbert-space-valued sequences. The discrete-time version can be found in Section 1 of~\cite{FurWei96}, and a continuous-time version in Appendix B of Potts~\cite{Pot09}.
\begin{lem}\label{lem:vdC} If $u:[0,\infty)\longrightarrow \mathfrak{H}$ is a bounded strongly measurable map into a Hilbert space, then vector-valued non-convergence \[\barint_0^T u(t)\,\mathrm{d} t \,\,\not\!\!\longrightarrow 0\quad\quad\hbox{as}\ T\longrightarrow\infty\] implies the scalar-valued non-convergence \[\barint_0^S\barint_0^T\langle u(t+s),u(t)\rangle\,\mathrm{d} t\,\mathrm{d} s \,\,\not\!\!\longrightarrow 0\quad\quad\hbox{as}\ T\longrightarrow\infty\ \hbox{and then}\ S\longrightarrow\infty.\] \nolinebreak\hspace{\stretch{1}}$\Box$ \end{lem}
\parskip 0pt
\noindent \small{Mathematics Department, Brown University,}
\noindent \small{Box 1917, 151 Thayer Street,}
\noindent \small{Providence, RI 02912, USA}
\noindent \small{\texttt{[email protected]},}
\noindent \small{\texttt{www.math.brown.edu/$\sim$timaustin}}
\end{document} | arXiv |
\begin{document}
\title{Stability of closed characteristics on compact convex\\ hypersurfaces in $\R^6$}
\begin{abstract} {\it In this paper, let $\Sigma\subset{\bf R}^{6}$ be a compact convex hypersurface. We prove that if $\Sigma$ carries only finitely many geometrically distinct closed characteristics, then at least two of them must possess irrational mean indices. Moreover, if ${\Sigma}$ carries exactly three geometrically distinct closed characteristics, then at least two of them must be elliptic. } \end{abstract}
{\bf Key words}: Compact convex hypersurfaces, closed characteristics, Hamiltonian systems, Morse theory, mean index identity, stability.
{\bf AMS Subject Classification}: 58E05, 37J45, 37C75.
{\bf Running title}: Stability of closed characteristics
\renewcommand{\thesection.\arabic{equation}}{\thesection.\arabic{equation}} \renewcommand{\thesection.\arabic{figure}}{\thesection.\arabic{figure}}
\setcounter{equation}{0} \section{Introduction and main results}
In this paper, let $\Sigma$ be a fixed $C^3$ compact convex hypersurface in ${\bf R}^{2n}$, i.e., $\Sigma$ is the boundary of a compact and strictly convex region $U$ in ${\bf R}^{2n}$. We denote the set of all such hypersurfaces by ${\cal H}(2n)$. Without loss of generality, we suppose $U$ contains the origin. We consider closed characteristics $(\tau,y)$ on $\Sigma$, which are solutions of the following problem \begin{equation} \left\{\matrix{\dot{y}=JN_{\Sigma}(y), \cr
y(\tau)=y(0), \cr }\right. \label{1.1}\end{equation} where $J=\left(\matrix{0 &-I_n\cr
I_n & 0\cr}\right)$, $I_n$ is the identity matrix in ${\bf R}^n$, $\tau>0$, $N_\Sigma(y)$ is the outward normal vector of $\Sigma$ at $y$ normalized by the condition $N_{\Sigma}(y)\cdot y=1$. Here $a\cdot b$ denotes the standard inner product of $a, b\in{\bf R}^{2n}$. A closed characteristic $(\tau, y)$ is {\it prime}, if $\tau$ is the minimal period of $y$. Two closed characteristics $(\tau, y)$ and $(\sigma, z)$ are {\it geometrically distinct}, if $y({\bf R})\not= z({\bf R})$. We denote by ${\cal J}({\Sigma})$ and $\widetilde{{\cal J}}({\Sigma})$ the set of all closed characteristics $(\tau,\, y)$ on ${\Sigma}$ with $\tau$ being the minimal period of $y$ and the set of all geometrically distinct ones respectively. Note that
${\cal J}({\Sigma})=\{\theta\cdot y\,|\, \theta\in S^1,\;y\; is\; prime\}$, while $\widetilde{{\cal J}}({\Sigma})={\cal J}({\Sigma})/S^1$, where the natural $S^1$-action is defined by $\theta\cdot y(t)=y(t+\tau\theta),\;\;\forall \theta\in S^1,\,t\in{\bf R}$.
Let $j: {\bf R}^{2n}\rightarrow{\bf R}$ be the gauge function of $\Sigma$, i.e., $j(\lambda x)=\lambda$ for $x\in\Sigma$ and $\lambda\ge0$, then $j\in C^3({\bf R}^{2n}\setminus\{0\}, {\bf R})\cap C^0({\bf R}^{2n}, {\bf R})$ and $\Sigma=j^{-1}(1)$. Fix a constant $\alpha\in(1,\,2)$ and define the Hamiltonian function $H_\alpha :{\bf R}^{2n}\rightarrow [0,\,+\infty)$ by \begin{equation} H_\alpha(x)=j(x)^\alpha,\qquad \forall x\in{\bf R}^{2n}.\label{1.2}\end{equation} Then $H_\alpha\in C^3({\bf R}^{2n}\setminus\{0\}, {\bf R})\cap C^1({\bf R}^{2n}, {\bf R})$ is convex and $\Sigma=H_\alpha^{-1}(1)$. It is well known that the problem (\ref{1.1}) is equivalent to the following given energy problem of the Hamiltonian system \begin{equation} \left\{\matrix{\dot{y}(t)=JH_\alpha^\prime(y(t)),
&&\quad H_\alpha(y(t))=1,\qquad \forall t\in{\bf R}. \cr
y(\tau)=y(0). && \cr }\right. \label{1.3}\end{equation} Denote by $\mathcal{J}(\Sigma, \,\alpha)$ the set of all solutions $(\tau,\, y)$ of (\ref{1.3}) where $\tau$ is the minimal period of $y$ and by $\widetilde{\mathcal{J}}(\Sigma, \,\alpha)$ the set of all geometrically distinct solutions of (\ref{1.3}). As above, $\widetilde{\mathcal{J}}(\Sigma, \,\alpha)$ is obtained from $\mathcal{J}(\Sigma, \,\alpha)$ by dividing the natural $S^1$-action. Note that elements in $\mathcal{J}(\Sigma)$ and $\mathcal{J}(\Sigma, \,\alpha)$ are one to one correspondent to each other, similarly for $\widetilde{{\cal J}}({\Sigma})$ and $\widetilde{\mathcal{J}}(\Sigma, \,\alpha)$.
Let $(\tau,\, y)\in\mathcal{J}(\Sigma, \,\alpha)$. The fundamental solution $\gamma_y : [0,\,\tau]\rightarrow {\rm Sp}(2n)$ with $\gamma_y(0)=I_{2n}$ of the linearized Hamiltonian system \begin{equation} \dot w(t)=JH_\alpha^{\prime\prime}(y(t))w(t),\qquad \forall t\in{\bf R},\label{1.4}\end{equation} is called the {\it associate symplectic path} of $(\tau,\, y)$. The eigenvalues of $\gamma_y(\tau)$ are called {\it Floquet multipliers} of $(\tau,\, y)$. By Proposition 1.6.13 of \cite{Eke3}, the Floquet multipliers with their multiplicities of $(\tau,\, y)\in\mathcal{J}(\Sigma)$ do not depend on the particular choice of the Hamiltonian function in (\ref{1.3}). For any $M\in {\rm Sp}(2n)$, we define the {\it elliptic height } $e(M)$ of
$M$ to be the total algebraic multiplicity of all eigenvalues of $M$ on the unit circle ${\bf U}=\{z\in{\bf C}|\; |z|=1\}$ in the complex plane ${\bf C}$. Since $M$ is symplectic, $e(M)$ is even and $0\le e(M)\le 2n$. As usual a $(\tau,\, y)\in{\cal J}({\Sigma})$ is {\it elliptic}, if $e(\gamma_y(\tau))=2n$. It is {\it non-degenerate}, if $1$ is a double Floquet multiplier of it. It is {\it hyperbolic}, if $1$ is a double Floquet multiplier of it and $e(\gamma_y(\tau))=2$. It is well known that these concepts are independent of the choice of $\alpha>1$.
For the existence and multiplicity of geometrically distinct closed characteristics on convex compact hypersurfaces in ${\bf R}^{2n}$ we refer to \cite{Rab1}, \cite{Wei1}, \cite{EkL1}, \cite{EkH1}, \cite{Szu1}, \cite{HWZ1}, \cite{LoZ1}, \cite{LLZ1}, and references therein. Note that recently in \cite{WHL}, Wang, Hu and Long proved $\,^{\#}\td{{\cal J}}({\Sigma})\ge 3$ for every ${\Sigma}\in{\cal H}(6)$.
On the stability problem, in \cite{Eke2} of Ekeland in 1986 and \cite{Lon2} of Long in 1998, for any ${\Sigma}\in{\cal H}(2n)$ the existence of at least one non-hyperbolic closed characteristic on ${\Sigma}$ was proved provided $^\#\td{{\cal J}}({\Sigma})<+\infty$. Ekeland proved also in \cite{Eke2} the existence of at least one elliptic closed characteristic on ${\Sigma}$ provided ${\Sigma}\in{\cal H}(2n)$ is $\sqrt{2}$-pinched. In \cite{DDE1} of 1992, Dell'Antonio, D'Onofrio and Ekeland proved the existence of at least one elliptic closed characteristic on ${\Sigma}$ provided ${\Sigma}\in{\cal H}(2n)$ satisfies ${\Sigma}=-{\Sigma}$. In \cite{Lon3} of 2000, Long proved that ${\Sigma}\in{\cal H}(4)$ and $\,^{\#}\td{{\cal J}}({\Sigma})=2$ imply that both of the closed characteristics must be elliptic. In \cite{LoZ1} of 2002, Long and Zhu further proved when $^\#\td{{\cal J}}({\Sigma})<+\infty$, there exists at least one elliptic closed characteristic and there are at least $[\frac{n}{2}]$ geometrically distinct closed characteristics on ${\Sigma}$ possessing irrational mean indices, which are then non-hyperbolic. In the recent paper \cite{LoW1}, Long and Wang proved that there exist at least two non-hyperbolic closed characteristic on ${\Sigma}\in{\cal H}(6)$ when $^\#\td{{\cal J}}({\Sigma})<+\infty$. Motivated by these results, we prove the following results in this paper:
{\bf Theorem 1.1.} {\it On every ${\Sigma}\in{\cal H}(6)$ satisfying $^\#\td{{\cal J}}({\Sigma})<+\infty$, there exist at least two geometrically distinct closed characteristics possessing irrational mean indices. }
{\bf Theorem 1.2.} {\it Suppose $^\#\td{{\cal J}}(\Sigma)=3$ for some $\Sigma\in{\cal H}(6)$. Then there exist at least two elliptic closed characteristics in $\td{{\cal J}}(\Sigma)$.}
The proofs of Theorems 1.1 and 1.2 are given in Section 3. Mainly ingredients in the proofs inculde: the mean index identity for closed characteristics established in \cite{WHL} recently, Morse inequality and the index iteration theory developed by Long and his coworkers, specially the common index jump theorem of Long and Zhu (Theorem 4.3 of \cite{LoZ1}, cf. Theorem 11.2.1 of \cite{Lon4}). In Section 2, we review briefly the equivariant Morse theory and the mean index identity for closed characteristics on compact convex hypersurfaces in ${\bf R}^{2n}$ developed in the recent \cite{WHL}.
In this paper, let ${\bf N}$, ${\bf N}_0$, ${\bf Z}$, ${\bf Q}$, ${\bf R}$, and ${\bf R}^+$ denote the sets of natural integers, non-negative integers, integers, rational numbers, real numbers, and positive real numbers respectively. Denote by $a\cdot b$ and $|a|$ the standard inner product and norm in
${\bf R}^{2n}$. Denote by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ the standard $L^2$-inner product and $L^2$-norm. For an $S^1$-space $X$, we denote by $X_{S^1}$ the homotopy quotient of $X$ module the $S^1$-action, i.e., $X_{S^1}=S^\infty\times_{S^1}X$. We define the functions
\begin{equation} \left\{\matrix{[a]=\max\{k\in{\bf Z}\,|\,k\le a\}, &
E(a)=\min\{k\in{\bf Z}\,|\,k\ge a\} , \cr
\varphi(a)=E(a)-[a], \cr}\right. \label{1.5}\end{equation} Specially, $\varphi(a)=0$ if $ a\in{\bf Z}\,$, and $\varphi(a)=1$ if $a\notin{\bf Z}\,$. In this paper we use only ${\bf Q}$-coefficients for all homological modules. For a ${\bf Z}_m$-space pair $(A, B)$, let
$H_{\ast}(A, B)^{\pm{\bf Z}_m}= \{\sigma\in H_{\ast}(A, B)\,|\,L_{\ast}\sigma=\pm \sigma\}$, where $L$ is a generator of the ${\bf Z}_m$-action.
\setcounter{equation}{0} \section{ Equivariant Morse theory for closed characteristics}
In the rest of this paper, we fix a ${\Sigma}\in{\cal H}(2n)$ and assume the following condition on ${\Sigma}$:
\noindent (F) {\bf There exist only finitely many geometrically distinct closed characteristics \\$\quad \{(\tau_j, y_j)\}_{1\le j\le k}$ on $\Sigma$. }
In this section, we review briefly the equivariant Morse theory for closed characteristics on ${\Sigma}$ developed in \cite{WHL} which will be needed in Section 3 of this paper. All the details of proofs can be found in \cite{WHL}.
Let $\hat{\tau}=\inf\{\tau_j|\;1\le j\le k\}$. Note that here $\tau_j$'s are prime periods of $y_j$'s for $1\le j\le k$. Then by \S2 of \cite{WHL}, for any $a>\hat{\tau}$, we can construct a function $\varphi_a\in C^\infty({\bf R},{\bf R}^+)$ which has $0$ as its unique critical point in $[0,\,+\infty)$ such that $\varphi_a$ is strictly convex for $t\ge 0$. Moreover, $\frac{\varphi_a^\prime(t)}{t}$ is strictly decreasing for $t> 0$ together with $\lim_{t\rightarrow 0^+}\frac{\varphi_a^\prime(t)}{t}=1$ and $\varphi_a(0)=0=\varphi_a^\prime(0)$. More precisely, we define $\varphi_a$ via Propositions 2.2 and 2.4 in \cite{WHL}. The precise dependence of $\varphi_a$ on $a$ is explained in Remark 2.3 of \cite{WHL}.
Define the Hamiltonian function $H_a(x)=a\varphi_a(j(x))$ and consider the fixed period problem \begin{equation} \left\{\matrix{\dot{x}(t)=JH_a^\prime(x(t)), \cr
x(1)=x(0). \cr }\right. \label{2.1}\end{equation} Then $H_a\in C^3({\bf R}^{2n}\setminus\{0\}, {\bf R})\cap C^1({\bf R}^{2n}, {\bf R})$ is strictly convex. Solutions of (\ref{2.1}) are $x\equiv0$ and $x=\rho y(\tau t)$ with $\frac{\varphi_a^\prime(\rho)}{\rho}=\frac{\tau}{a}$, where $(\tau, y)$ is a solution of (\ref{1.1}). In particular, nonzero solutions of (\ref{2.1}) are one to one correspondent to solutions of (\ref{1.1}) with period $\tau<a$.
In the following, we use the Clarke-Ekeland dual action principle. As usual, let $G_a$ be the Fenchel transform of $H_a$ defined by
$G_a(y)=\sup\{x\cdot y-H_a(x)\;|\; x\in {\bf R}^{2n}\}$. Then $G_a\in C^2({\bf R}^{2n}\setminus\{0\},{\bf R})\cap C^1({\bf R}^{2n},{\bf R})$ is strictly convex. Let \begin{equation} L_0^2(S^1, \;{\bf R}^{2n})= \left\{u\in L^2([0, 1],\;{\bf R}^{2n})
\left|\frac{}{}\right.\int_0^1u(t)dt=0\right\}. \label{2.2}\end{equation} Define a linear operator $M: L_0^2(S^1,{\bf R}^{2n})\to L_0^2(S^1,{\bf R}^{2n})$ by $\frac{d}{dt}Mu(t)=u(t)$, $\int_0^1Mu(t)dt=0$. The dual action functional on $L_0^2(S^1, \;{\bf R}^{2n})$ is defined by \begin{equation} \Psi_a(u)=\int_0^1\left(\frac{1}{2}Ju\cdot Mu+G_a(-Ju)\right)dt.
\label{2.3}\end{equation} Then the functional $\Psi_a\in C^{1, 1}(L_0^2(S^1,\; {\bf R}^{2n}),\;{\bf R})$ is bounded from below and satisfies the Palais-Smale condition. Suppose $x$ is a solution of (\ref{2.1}). Then $u=\dot{x}$ is a critical point of $\Psi_a$. Conversely, suppose $u$ is a critical point of $\Psi_a$. Then there exists a unique $\xi\in{\bf R}^{2n}$ such that $Mu-\xi$ is a solution of (\ref{2.1}). In particular, solutions of (\ref{2.1}) are in one to one correspondence with critical points of $\Psi_a$. Moreover, $\Psi_a(u)<0$ for every critical point $u\not= 0$ of $\Psi_a$.
Suppose $u$ is a nonzero critical point of $\Psi_a$. Then following \cite{Eke3} the formal Hessian of $\Psi_a$ at $u$ is defined by $$ Q_a(v,\; v)=\int_0^1 (Jv\cdot Mv+G_a^{\prime\prime}(-Ju)Jv\cdot Jv)dt, $$ which defines an orthogonal splitting $L_0^2=E_-\oplus E_0\oplus E_+$ of $L_0^2(S^1,\; {\bf R}^{2n})$ into negative, zero and positive subspaces. The index of $u$ is defined by $i(u)=\dim E_-$ and the nullity of $u$ is defined by $\nu(u)=\dim E_0$. Let $u=\dot{x}$ be the critical point of $\Psi_a$ such that $x$ corresponds to the closed characteristic $(\tau,\,y)$ on $\Sigma$. Then the index $i(u)$ and the nullity $\nu(u)$ defined above coincide with the Ekeland indices defined by I. Ekeland in \cite{Eke1} and \cite{Eke3}. Specially $1\le \nu(u)\le 2n-1$ always holds.
We have a natural $S^1$-action on $L_0^2(S^1,\; {\bf R}^{2n})$ defined by ${\theta}\cdot u(t)=u({\theta}+t)$ for all ${\theta}\in S^1$ and $t\in{\bf R}$. Clearly $\Psi_a$ is $S^1$-invariant. For any $\kappa\in{\bf R}$, we denote by
\begin{equation} \Lambda_a^\kappa=\{u\in L_0^2(S^1,\; {\bf R}^{2n})\;|\;\Psi_a(u)\le\kappa\}.
\label{2.4}\end{equation} For a critical point $u$ of $\Psi_a$, we denote by \begin{equation} \Lambda_a(u)=\Lambda_a^{\Psi_a(u)}
=\{w\in L_0^2(S^1,\; {\bf R}^{2n}) \;|\; \Psi_a(w)\le\Psi_a(u)\}.\label{2.5}\end{equation} Clearly, both sets are $S^1$-invariant. Since the $S^1$-action preserves $\Psi_a$, if $u$ is a critical point of $\Psi_a$, then the whole orbit $S^1\cdot u$ is formed by critical points of $\Psi_a$. Denote by $crit(\Psi_a)$ the set of critical points of $\Psi_a$. Note that by the condition (F), the number of critical orbits of $\Psi_a$ is finite. Hence as usual we can make the following definition.
{\bf Definition 2.1.} {\it Suppose $u$ is a nonzero critical point of $\Psi_a$ and ${\cal N}$ is an $S^1$-invariant open neighborhood of $S^1\cdot u$ such that $crit(\Psi_a)\cap(\Lambda_a(u)\cap {\cal N})=S^1\cdot u$. Then the $S^1$-critical modules of $S^1\cdot u$ are defined by} $$ C_{S^1,\; q}(\Psi_a, \;S^1\cdot u) =H_{q}((\Lambda_a(u)\cap{\cal N})_{S^1},\; ((\Lambda_a(u)\setminus S^1\cdot u)\cap{\cal N})_{S^1}). $$
We have the following proposition for critical modules.
{\bf Proposition 2.2.} (Proposition 3.2 of \cite{WHL}) {\it The critical module $C_{S^1,\;q}(\Psi_a, \;S^1\cdot u)$ is independent of $a$ in the sense that if $x_i$ are solutions of (\ref{2.1}) with Hamiltonian functions $H_{a_i}(x)\equiv a_i\varphi_{a_i}(j(x))$ for $i=1$ and $2$ respectively such that both $x_1$ and $x_2$ correspond to the same closed characteristic $(\tau, y)$ on $\Sigma$. Then we have} $$ C_{S^1,\; q}(\Psi_{a_1}, \;S^1\cdot\dot {x}_1) \cong
C_{S^1,\; q}(\Psi_{a_2}, \;S^1\cdot \dot {x}_2), \quad \forall q\in {\bf Z}. $$
Now let $u\neq 0$ be a critical point of $\Psi_a$ with multiplicity $mul(u)=m$, i.e., $u$ corresponds to a closed characteristic $(m\tau, y)\subset\Sigma$ with $(\tau, y)$ being prime. Hence $u(t+\frac{1}{m})=u(t)$ holds for all $t\in {\bf R}$ and the orbit of $u$, namely, $S^1\cdot u\cong S^1/{\bf Z}_m\cong S^1$. Let $f: N(S^1\cdot u)\rightarrow S^1\cdot u$ be the normal bundle of $S^1\cdot u$ in $L_0^2(S^1,\; {\bf R}^{2n})$ and let $f^{-1}(\theta\cdot u)=N(\theta\cdot u)$ be the fibre over $\theta\cdot u$, where $\theta\in S^1$. Let $DN(S^1\cdot u)$ be the $\varrho$-disk bundle of $N(S^1\cdot u)$ for some $\varrho>0$ sufficiently small, i.e.,
$DN(S^1\cdot u)=\{\xi\in N(S^1\cdot u)\;| \; \|\xi\|<\varrho\}$ and let $DN(\theta\cdot u)=f^{-1}({\theta}\cdot u)\cap DN(S^1\cdot u)$ be the disk over $\theta\cdot u$. Clearly, $DN(\theta\cdot u)$ is ${\bf Z}_m$-invariant and we have $DN(S^1\cdot u)=DN(u)\times_{{\bf Z}_m}S^1$, where the $Z_m$-action is given by $$ ({\theta}, v, t)\in {\bf Z}_m\times DN(u)\times S^1\mapsto
({\theta}\cdot v, \;\theta^{-1}t)\in DN(u)\times S^1. $$ Hence for an $S^1$-invariant subset $\Gamma$ of $DN(S^1\cdot u)$, we have $\Gamma/S^1=(\Gamma_u\times_{{\bf Z}_m}S^1)/S^1=\Gamma_u/{\bf Z}_m$, where $\Gamma_u=\Gamma\cap DN(u)$. Since $\Psi_a$ is not $C^2$ on $L_0^2(S^1,\; {\bf R}^{2n})$, we need to use a finite dimensional approximation introduced by Ekeland in order to apply Morse theory. More precisely, we can construct a finite dimensional submanifold
$\Gamma(\iota)$ of $L_0^2(S^1,\; {\bf R}^{2n})$ which admits a ${\bf Z}_\iota$-action with $m|\iota$. Moreover $\Psi_a$ and $\Psi_a|_{\Gamma(\iota)}$ have the same critical points. $\Psi_a|_{\Gamma(\iota)}$ is $C^2$ in a small tubular neighborhood of the critical orbit $S^1\cdot u$ and the Morse index and nullity of its critical points coincide with those of the corresponding critical points of $\Psi_a$. Let \begin{equation} D_\iota N(S^1\cdot u)=DN(S^1\cdot u)\cap\Gamma(\iota), \quad D_\iota N(\theta\cdot u)=DN(\theta\cdot u)\cap\Gamma(\iota). \label{2.6}\end{equation} Then we have \begin{equation} C_{S^1,\; \ast}(\Psi_a, \;S^1\cdot u) \cong H_\ast(\Lambda_a(u)\cap D_\iota N(u),\;
(\Lambda_a(u)\setminus\{u\})\cap D_\iota N(u))^{{\bf Z}_m}. \label{2.7}\end{equation} Now we can apply the results of Gromoll and Meyer in \cite{GrM1} to the manifold $D_{p\iota}N(u^p)$ with $u^p$ as its unique critical point, where $p\in{\bf N}$. Then $mul(u^p)=pm$ is the multiplicity of $u^p$ and the isotropy group ${\bf Z}_{pm}\subseteq S^1$ of $u^p$ acts on $D_{p\iota}N(u^p)$ by isometries. According to Lemma 1 of \cite{GrM1}, we have a ${\bf Z}_{pm}$-invariant decomposition of $T_{u^p}(D_{p\iota}N(u^p))$ $$ T_{u^p}(D_{p\iota}N(u^p)) =V^+\oplus V^-\oplus V^0=\{(x_+, x_-, x_0)\} $$ with $\dim V^-=i(u^p)$, $\dim V^0=\nu(u^p)-1$ and a ${\bf Z}_{pm}$-invariant neighborhood $B=B_+\times B_-\times B_0$ for $0$ in $T_{u^p}(D_{p\iota}N(u^p))$ together with two $Z_{pm}$-invariant diffeomorphisms $$\Phi :B=B_+\times B_-\times B_0\rightarrow \Phi(B_+\times B_-\times B_0)\subset D_{p\iota}N(u^p)$$ and $$ \eta : B_0\rightarrow W(u^p)\equiv\eta(B_0)\subset D_{p\iota}N(u^p)$$ such that $\Phi(0)=\eta(0)=u^p$ and
\begin{equation} \Psi_a\circ\Phi(x_+,x_-,x_0)=|x_+|^2 - |x_-|^2 + \Psi_a\circ\eta(x_0),
\label{2.8}\end{equation} with $d(\Psi_a\circ \eta)(0)=d^2(\Psi_a\circ\eta)(0)=0$. As \cite{GrM1}, we call $W(u^p)$ a local {\it characteristic manifold} and $U(u^p)=B_-$ a local {\it negative disk} at $u^p$. By the proof of Lemma 1 of \cite{GrM1}, $W(u^p)$ and $U(u^p)$ are ${\bf Z}_{pm}$-invariant. Then we have \begin{eqnarray} && H_\ast(\Lambda_a(u^p)\cap D_{p\iota}N(u^p),\;
(\Lambda_a(u^p)\setminus\{u^p\})\cap D_{p\iota}N(u^p)) \nonumber\\ &&\qquad = H_\ast (U(u^p),\;U(u^p)\setminus\{u^p\}) \otimes H_\ast(W(u^p)\cap \Lambda_a(u^p),\; (W(u^p)\setminus\{u^p\})\cap \Lambda_a(u^p)),
\label{2.9}\end{eqnarray} where \begin{equation} H_q(U(u^p),U(u^p)\setminus\{u^p\} )
= \left\{\matrix{{\bf Q}, & {\rm if\;}q=i(u^p), \cr
0, & {\rm otherwise}. \cr}\right. \label{2.10}\end{equation} Now we have the following proposition.
{\bf Proposition 2.3.} (Proposition 3.10 of \cite{WHL}) {\it Let $u\neq 0$ be a critical point of $\Psi_a$ with $mul(u)=1$. Then for all $p\in{\bf N}$ and $q\in{\bf Z}$, we have \begin{equation} C_{S^1,\; q}(\Psi_a, \;S^1\cdot u^p)\cong \left(\frac{}{}H_{q-i(u^p)}(W(u^p)\cap \Lambda_a(u^p),\; (W(u^p)\setminus\{u^p\})\cap \Lambda_a(u^p))\right)^{\beta(u^p){\bf Z}_p},
\label{2.11}\end{equation} where $\beta(u^p)=(-1)^{i(u^p)-i(u)}$. Thus \begin{equation} C_{S^1,\; q}(\Psi_a, \;S^1\cdot u^p)=0, \quad {\rm for}\;\;
q<i(u^p) \;\;{\rm or}\;\;q>i(u^p)+\nu(u^p)-1. \label{2.12}\end{equation} In particular, if $u^p$ is non-degenerate, i.e., $\nu(u^p)=1$, then} \begin{equation} C_{S^1,\; q}(\Psi_a, \;S^1\cdot u^p)
= \left\{\matrix{{\bf Q}, & {\rm if\;}q=i(u^p)\;{\rm and\;}\beta(u^p)=1, \cr
0, & {\rm otherwise}. \cr}\right. \label{2.13}\end{equation}
We make the following definition
{\bf Definition 2.4.} {\it Let $u\neq 0$ be a critical point of $\Psi_a$ with $mul(u)=1$. Then for all $p\in{\bf N}$ and $l\in{\bf Z}$, let \begin{eqnarray} k_{l, \pm 1}(u^p)&=&\dim\left(\frac{}{}H_l(W(u^p)\cap \Lambda_a(u^p),\; (W(u^p)\setminus\{u^p\})\cap \Lambda_a(u^p))\right)^{\pm{\bf Z}_p}, \nonumber\\ k_l(u^p)&=&\dim\left(\frac{}{}H_l(W(u^p)\cap \Lambda_a(u^p), (W(u^p)\setminus\{u^p\})\cap \Lambda_a(u^p))\right)^{\beta(u^p){\bf Z}_p}. \nonumber\end{eqnarray} $k_l(u^p)$'s are called critical type numbers of $u^p$. }
We have the following properties for critical type numbers
{\bf Proposition 2.5.} (Proposition 3.13 of \cite{WHL}) {\it Let $u\neq 0$ be a critical point of $\Psi_a$ with $mul(u)=1$. Then there exists a minimal $K(u)\in {\bf N}$ such that $$ \nu(u^{p+K(u)})=\nu(u^p),\quad i(u^{p+K(u)})-i(u^p)\in 2{\bf Z}, $$ and $k_l(u^{p+K(u)})=k_l(u^p)$ for all $p\in {\bf N}$ and $l\in{\bf Z}$. We call $K(u)$ the minimal period of critical modules of iterations of the functional $\Psi_a$ at $u$. }
For a closed characteristic $(\tau,y)$ on $\Sigma$, we denote by $y^m\equiv (m\tau, y)$ the $m$-th iteration of $y$ for $m\in{\bf N}$. Let $a>\tau$ and choose ${\varphi}_a$ as above. Determine $\rho$ uniquely by $\frac{{\varphi}_a'(\rho)}{\rho}=\frac{\tau}{a}$. Let $x=\rho y(\tau t)$ and $u=\dot{x}$. Then we define the index $i(y^m)$ and nullity $\nu(y^m)$ of $(m\tau,y)$ for $m\in{\bf N}$ by $$ i(y^m)=i(u^m), \qquad \nu(y^m)=\nu(u^m). $$ These indices are independent of $a$ when $a$ tends to infinity. Now the mean index of $(\tau,y)$ is defined by $$ \hat{i}(y)=\lim_{m\rightarrow\infty}\frac{i(y^m)}{m}. $$ Note that $\hat{i}(y)>2$ always holds which was proved by Ekeland and Hofer in \cite{EkH1} of 1987 (cf. Corollary 8.3.2 and Lemma 15.3.2 of \cite{Lon4} for a different proof).
By Proposition 2.2, we can define the critical type numbers $k_l(y^m)$ of $y^m$ to be $k_l(u^m)$, where $u^m$ is the critical point of $\Psi_a$ corresponding to $y^m$. We also define $K(y)=K(u)$. Then we have
{\bf Proposition 2.6.} {\it We have $k_l(y^m)=0$ for $l\notin [0, \nu(y^m)-1]$ and it can take only values $0$ or $1$ when $l=0$ or $l=\nu(y^m)-1$. Moreover, the following properties hold (cf. Lemma 3.10 of \cite{BaL1}, \cite{Cha1} and \cite{MaW1}):
(i) $k_0(y^m)=1$ implies $k_l(y^m)=0$ for $1\le l\le \nu(y^m)-1$.
(ii) $k_{\nu(y^m)-1}(y^m)=1$ implies $k_l(y^m)=0$ for $0\le l\le \nu(y^m)-2$.
(iii) $k_l(y^m)\ge 1$ for some $1\le l\le \nu(y^m)-2$ implies $k_0(y^m)=k_{\nu(y^m)-1}(y^m)=0$.
(iv) If $\nu(y^m)\le 3$, then at most one of the $k_l(y^m)$'s for $0\le l\le \nu(y^m)-1$ can be non-zero.
(v) If $i(y^m)-i(y)\in 2{\bf Z}+1$ for some $m\in{\bf N}$, then $k_0(y^m)=0$.}
{\bf Proof.} By Definition 2.4 we have $$ k_l(y^m)\le \dim H_l(W(u^m)\cap \Lambda_a(u^m),\; (W(u^m)\setminus\{u^m\})\cap \Lambda_a(u^m))\equiv \eta_l(y^m). $$ Then from Corollary 1.5.1 of \cite{Cha1} or Corollary 8.4 of \cite{MaW1}, (i)-(iv) hold.
For (v), if $\eta_0(y^m)=0$, then (v) follows directly from Definition 2.4.
By Corollary 8.4 of \cite{MaW1}, $\eta_0(y^m)=1$ if and only if $u^m$ is a local minimum in the local characteristic manifold $W(u^m)$. Hence $(W(u^m)\cap \Lambda_a(u^m),\;(W(u^m)\setminus\{u^m\})\cap \Lambda_a(u^m))=(\{u^m\},\; \emptyset)$. By Definition 2.4, we have: \begin{eqnarray} k_{0, +1}(u^m) &=& \dim H_0(W(u^m)\cap \Lambda_a(u^m),\;
(W(u^m)\setminus\{u^m\})\cap \Lambda_a(u^m))^{+{\bf Z}_m}\nonumber\\ &=& \dim H_0(\{u^m\})^{+{\bf Z}_m}\nonumber\\ &=& 1. \nonumber\end{eqnarray} This implies $k_0(u^m)=k_{0, -1}(u^m)=0$.
\vrule height0.18cm width0.14cm $\,$
For a closed characteristic $(\tau, y)$ on $\Sigma$, we define as in \cite{WHL} \begin{equation} \hat\chi(y)=\frac{1}{K(y)}
\sum_{1\le m\le K(y)\atop 0\le l\le 2n-2}
(-1)^{i(y^{m})+l}k_l(y^{m}). \label{2.14}\end{equation} In particular, if all $y^m$'s are non-degenerate, then by Proposition 2.3 we have \begin{equation} \hat\chi(y)
= \left\{\matrix{(-1)^{i(y)}, & {\rm if\;\;} i(y^2)-i(y)\in 2{\bf Z}, \cr
\frac{(-1)^{i(y)}}{2}, & {\rm otherwise}. \cr}\right. \label{2.15}\end{equation}
We have the following mean index identity for closed characteristics.
{\bf Theorem 2.7.} (Theorem 1.2 of \cite{WHL}) {\it Suppose $\Sigma\in {\cal H}(2n)$ satisfies $\,^{\#}\widetilde{{\cal J}}({\Sigma})<+\infty$. Denote all the geometrically distinct closed characteristics by $\{(\tau_j,y_j)\}_{1\le j\le k}$. Then the following identity holds } $$ \sum_{1\le j\le k}\frac{\hat{\chi}(y_j)}{\hat{i}(y_j)}=\frac{1}{2}. $$
Let $\Psi_a$ be the functional defined by (\ref{2.3}) for some $a\in{\bf R}$ large enough and let $\varepsilon>0$ be small enough such that $[-\varepsilon, +\infty)\setminus\{0\}$ contains no critical values of $\Psi_a$. Denote by $I_a$ the greatest integer in ${\bf N}_0$ such that $I_a<i(\tau, y)$ hold for all closed characteristics $(\tau,\, y)$ on $\Sigma$ with $\tau\ge a$. Then by Section 5 of \cite{WHL}, we have \begin{equation} H_{S^1,\; q}(\Lambda_a^{-\varepsilon} ) \cong H_{S^1,\; q}( \Lambda_a^\infty)
\cong H_q(CP^\infty), \quad \forall q<I_a. \label{2.16}\end{equation} For any $q\in{\bf Z}$, let \begin{equation} M_q(\Lambda_a^{-\varepsilon})
=\sum_{1\le j\le k,\,1\le m_j<a/\tau_j} \dim C_{S^1,\;q}(\Psi_a, \;S^1\cdot u_j^{m_j}).
\label{2.17} \end{equation} Then the equivariant Morse inequalities for the space $\Lambda_a^{-\varepsilon}$ yield \begin{eqnarray} M_q(\Lambda_a^{-\varepsilon})
&\ge& b_q(\Lambda_a^{-\varepsilon}),\label{2.18}\\ M_q(\Lambda_a^{-\varepsilon}) &-& M_{q-1}(\Lambda_a^{-\varepsilon})
+ \cdots +(-1)^{q}M_0(\Lambda_a^{-\varepsilon}) \nonumber\\ &\ge& b_q(\Lambda_a^{-\varepsilon}) - b_{q-1}(\Lambda_a^{-\varepsilon})
+ \cdots + (-1)^{q}b_0(\Lambda_a^{-\varepsilon}), \label{2.19}\end{eqnarray} where $b_q(\Lambda_a^{-\varepsilon})=\dim H_{S^1,\; q}(\Lambda_a^{-\varepsilon})$. Now we have the following Morse inequalities for closed characteristics
{\bf Theorem 2.8.} {\it Let $\Sigma\in {\cal H}(2n)$ satisfy $\,^{\#}\widetilde{{\cal J}}({\Sigma})<+\infty$. Denote all the geometrically distinct closed characteristics by $\{(\tau_j,\; y_j)\}_{1\le j\le k}$. Let \begin{eqnarray} M_q&=&\lim_{a\rightarrow+\infty}M_q(\Lambda_a^{-\varepsilon}),\quad
\forall q\in{\bf Z},\label{2.20}\\ b_q &=& \lim_{a\rightarrow+\infty}b_q(\Lambda_a^{-\varepsilon})= \left\{\matrix{1, & {\rm if\;}q\in 2{\bf N}_0, \cr
0, & {\rm otherwise}. \cr}\right. \label{2.21} \end{eqnarray} Then we have} \begin{eqnarray} M_q &\ge& b_q,\label{2.22}\\
M_q-M_{q-1}+\cdots +(-1)^{q}M_0 &\ge& b_q-b_{q-1}+\cdots +(-1)^{q}b_0,
\qquad\forall \;q\in{\bf Z}. \label{2.23}\end{eqnarray}
{\bf Proof.} As we have mentioned before, $\hat i(y_j)>2$ holds for $1\le j\le k$. Hence the Ekeland index satisfies $i(y_j^m)=i(u_j^m)\to\infty$ as $m\to\infty$ for $1\le j\le k$. Note that $I_a\to +\infty$ as $a\to +\infty$. Now fix a $q\in{\bf Z}$ and a sufficiently great $a>0$. By Propositions 2.2, 2.3 and (\ref{2.17}), $M_i(\Lambda_a^{-\varepsilon})$ is invariant for all $a>A_q$ and $0\le i\le q$, where $A_q>0$ is some constant. Hence (\ref{2.20}) is meaningful. Now for any $a$ such that $I_a>q$, (\ref{2.16})-(\ref{2.19}) imply that (\ref{2.21})-(\ref{2.23}) hold.
\vrule height0.18cm width0.14cm $\,$
\setcounter{equation}{0} \section{Proofs of the main theorems }
In this section, we give proofs of Theorems 1.1 and 1.2 by using the mean index identity of \cite{WHL}, Morse inequality and the index iteration theory developed by Long and his coworkers.
As Definition 1.1 of \cite{LoZ1}, we define
{ \bf Definition 3.1.} For $\alpha\in(1,2)$, we define a map $\varrho_n\colon{\cal H}(2n)\to{\bf N}\cup\{ +\infty\}$ \begin{equation} \varrho_n({\Sigma}) = \left\{\matrix{+\infty, & {\rm if\;\;}^\#\mathcal{V}(\Sigma,\alpha)=+\infty, \cr \min\left\{[\frac{i(x,1) + 2S^+(x) - \nu(x,1)+n}{2}]\,
\left|\frac{}{}\right.\,(\tau,x)\in\mathcal{V}_\infty(\Sigma, \alpha)\right\},
& {\rm if\;\;} ^\#\mathcal{V}(\Sigma, \alpha)<+\infty, \cr}\right. \label{3.1}\end{equation} where $\mathcal{V}(\Sigma,\alpha)$ and $\mathcal{V}_\infty(\Sigma,\alpha)$ are variationally visible and infinite variationally visible sets respectively given by Definition 1.4 of \cite{LoZ1} (cf. Definition 15.3.3 of \cite{Lon4}).
{\bf Theorem 3.2.} (cf. Theorem 15.1.1 of \cite{Lon4}) {\it Suppose $(\tau,y)\in {\cal J}(\Sigma)$. Then we have \begin{equation} i(y^m)\equiv i(m\tau ,y)=i(y, m)-n,\quad \nu(y^m)\equiv\nu(m\tau, y)=\nu(y, m),
\qquad \forall m\in{\bf N}, \label{3.2}\end{equation} where $i(y, m)$ and $\nu(y, m)$ are the Maslov-type index and nullity of $(m\tau, y)$ defined by Conley, Zehnder and Long (cf. \S 5.4 of \cite{Lon4}).}
Recall that for a principal $U(1)$-bundle $E\to B$, the Fadell-Rabinowitz index
(cf. \cite{FaR1}) of $E$ is defined to be $\sup\{k\;|\, c_1(E)^{k-1}\not= 0\}$, where $c_1(E)\in H^2(B,{\bf Q})$ is the first rational Chern class. For a $U(1)$-space, i.e., a topological space $X$ with a $U(1)$-action, the Fadell-Rabinowitz index is defined to be the index of the bundle $X\times S^{\infty}\to X\times_{U(1)}S^{\infty}$, where $S^{\infty}\to CP^{\infty}$ is the universal $U(1)$-bundle.
As in P.199 of \cite{Eke3}, choose some $\alpha\in(1,\, 2)$ and associate with $U$ a convex function $H$ such that $H(\lambda x)=\lambda^\alpha H(x)$ for $\lambda\ge 0$. Consider the fixed period problem \begin{equation} \left\{\matrix{\dot{x}(t)=JH^\prime(x(t)), \cr
x(1)=x(0). \cr }\right. \label{3.3}\end{equation}
Define \begin{equation} L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})
=\{u\in L^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})\,|\,\int_0^1udt=0\}. \label{3.4}\end{equation} The corresponding Clarke-Ekeland dual action functional is defined by \begin{equation} \Phi(u)=\int_0^1\left(\frac{1}{2}Ju\cdot Mu+H^{\ast}(-Ju)\right)dt,
\qquad \forall\;u\in L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n}), \label{3.5}\end{equation} where $Mu$ is defined by $\frac{d}{dt}Mu(t)=u(t)$ and $\int_0^1Mu(t)dt=0$, $H^\ast$ is the Fenchel transform of $H$ defined in \S2.
For any $\kappa\in{\bf R}$, we denote by
\begin{equation} \Phi^{\kappa-}=\{u\in L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})\;|\;
\Phi(u)<\kappa\}. \label{3.6}\end{equation} Then as in P.218 of \cite{Eke3}, we define
\begin{equation} c_i=\inf\{\delta\in{\bf R}\;|\: \hat I(\Phi^{\delta-})\ge i\},\label{3.7}\end{equation} where $\hat I$ is the Fadell-Rabinowitz index given above. Then by Proposition 3 in P.218 of \cite{Eke3}, we have
{\bf Proposition 3.3.} {\it Every $c_i$ is a critical value of $\Phi$. If $c_i=c_j$ for some $i<j$, then there are infinitely many geometrically distinct closed characteristics on ${\Sigma}$.}
As in Definition 2.1, we define the following
{\bf Definition 3.4.} {\it Suppose $u$ is a nonzero critical point of $\Phi$, and ${\cal N}$ is an $S^1$-invariant open neighborhood of $S^1\cdot u$ such that $crit(\Phi)\cap(\Lambda(u)\cap {\cal N})=S^1\cdot u$. Then the $S^1$-critical modules of $S^1\cdot u$ is defined by \begin{eqnarray} C_{S^1,\; q}(\Phi, \;S^1\cdot u) =H_{q}((\Lambda(u)\cap{\cal N})_{S^1},\; ((\Lambda(u)\setminus S^1\cdot u)\cap{\cal N})_{S^1}),\label{3.8}
\end{eqnarray} where $\Lambda(u)=\{w\in L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})\;|\; \Phi(w)\le\Phi(u)\}$.}
Comparing with Theorem 4 in P.219 of \cite{Eke3}, we have the following
{\bf Proposition 3.5.} {\it For every $i\in{\bf N}$, there exists a point $u\in L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})$ such that} \begin{eqnarray} && \Phi^\prime(u)=0,\quad \Phi(u)=c_i, \label{3.9}\\ && C_{S^1,\; 2(i-1)}(\Phi, \;S^1\cdot u)\neq 0. \label{3.10}\end{eqnarray}
{\bf Proof.} By Lemma 8 in P.206 of \cite{Eke3}, we can use Theorem 1.4.2 of \cite{Cha1} in the equivariant form to obtain \begin{equation} H_{S^1,\,\ast}(\Phi^{c_i+\epsilon},\;\Phi^{c_i-\epsilon}) =\bigoplus_{\Phi(u)=c_i}C_{S^1,\; \ast}(\Phi, \;S^1\cdot u),\label{3.11}\end{equation} for $\epsilon$ small enough such that the interval $(c_i-\epsilon,\,c_i+\epsilon)$ contains no critical values of $\Phi$ except $c_i$.
Similar to P.431 of \cite{EkH1}, we have \begin{equation} H^{2(i-1)}((\Phi^{c_i+\epsilon})_{S^1},\,(\Phi^{c_i-\epsilon})_{S^1}) \mapright{q^\ast} H^{2(i-1)}((\Phi^{c_i+\epsilon})_{S^1} ) \mapright{p^\ast}H^{2(i-1)}((\Phi^{c_i-\epsilon})_{S^1}), \label{3.12}\end{equation} where $p$ and $q$ are natural inclusions. Denote by $f: (\Phi^{c_i+\epsilon})_{S^1}\rightarrow CP^\infty$ a classifying map and let
$f^{\pm}=f|_{(\Phi^{c_i\pm\epsilon})_{S^1}}$. Then clearly each $f^{\pm}: (\Phi^{c_i\pm\epsilon})_{S^1}\rightarrow CP^\infty$ is a classifying map on $(\Phi^{c_i\pm\epsilon})_{S^1}$. Let $\eta \in H^2(CP^\infty)$ be the first universal Chern class.
By definition of $c_i$, we have $\hat I(\Phi^{c_i-\epsilon})< i$, hence $(f^-)^\ast(\eta^{i-1})=0$. Note that $p^\ast(f^+)^\ast(\eta^{i-1})=(f^-)^\ast(\eta^{i-1})$. Hence the exactness of (\ref{3.12}) yields a $\sigma\in H^{2(i-1)}((\Phi^{c_i+\epsilon})_{S^1},\,(\Phi^{c_i-\epsilon})_{S^1})$ such that $q^\ast(\sigma)=(f^+)^\ast(\eta^{i-1})$. Since $\hat I(\Phi^{c_i+\epsilon})\ge i$, we have $(f^+)^\ast(\eta^{i-1})\neq 0$. Hence $\sigma\neq 0$, and then $$H^{2(i-1)}_{S^1}(\Phi^{c_i+\epsilon},\Phi^{c_i-\epsilon})= H^{2(i-1)}((\Phi^{c_i+\epsilon})_{S^1},\,(\Phi^{c_i-\epsilon})_{S^1})\neq 0. $$ Now the proposition follows from (\ref{3.11}) and the universal coefficient theorem.
\vrule height0.18cm width0.14cm $\,$
{\bf Proposition 3.6.} {\it Suppose $u$ is the critical point of $\Phi$ found in Proposition 3.5. Then we have \begin{equation} C_{S^1,\; 2(i-1)}(\Psi_a, \;S^1\cdot u_a)\neq 0, \label{3.13}\end{equation} where $\Psi_a$ is given by (\ref{2.3}) and $u_a\in L_0^2(S^1,\;{\bf R}^{2n})$ is its critical point corresponding to $u$ in the natural sense.}
{\bf Proof.} Fix this $u$, we modify the function $H$ only in a small neighborhood $\Omega$ of $0$ as in \cite{Eke1} so that the corresponding orbit of $u$ does not enter $\Omega$ and the resulted function $\widetilde{H}$ satisfies similar properties as Definition 1 in P. 26 of \cite{Eke1} by just replacing $\frac{3}{2}$ there by $\alpha$. Define the dual action functional $\widetilde{\Phi}:L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})\to{\bf R}$ by \begin{equation} \widetilde{\Phi}(v)=\int_0^1\left(\frac{1}{2}Jv\cdot
Mv+\widetilde{H}^{\ast}(-Jv)\right)dt, \label{3.14}\end{equation} since clearly $\Phi$ and $\widetilde{\Phi}$ are $C^1$ close to each other. Then by the continuity of critical modules (cf. Theorem 8.8 of \cite{MaW1} or Theorem 1.5.6 in P.53 of \cite{Cha1}, which can be easily generalized to the equivariant sense) for the $u$ in the proposition, we have \begin{equation} C_{S^1,\; \ast}(\Phi, \;S^1\cdot u)\cong C_{S^1,\; \ast}(\widetilde{\Phi},
\;S^1\cdot u).\label{3.15}\end{equation}
Using a finite dimensional approximation as in Lemma 3.9 of \cite{Eke1}, we have \begin{equation} C_{S^1,\; \ast}(\widetilde{\Phi}, \;S^1\cdot u) \cong H_\ast(\widetilde{\Lambda}(u)\cap D_\iota N(u),\;
(\widetilde{\Lambda}(u)\setminus\{u\})\cap D_\iota N(u))^{{\bf Z}_m}, \label{3.16}\end{equation} where $\widetilde{\Lambda}(u)=\{w\in L_0^{\frac{\alpha}{\alpha-1}}(S^1,{\bf R}^{2n})\;|\; \widetilde{\Phi}(w)\le\widetilde{\Phi}(u)\}$ and $D_\iota N(u)$ is a ${\bf Z}_m$-invariant finite dimensional disk transversal to $S^1\cdot u$ at $u$ (cf. Lemma 3.9 of \cite{WHL}), $m$ is the multiplicity of $u$.
By Lemma 3.9 of \cite{WHL}, we have \begin{equation} C_{S^1,\; \ast}(\Psi_a, \;S^1\cdot u_a) \cong H_\ast(\Lambda_a(u_a)\cap D_\iota N(u_a),\;
(\Lambda_a(u_a)\setminus\{u_a\})\cap D_\iota N(u_a))^{{\bf Z}_m}.\label{3.17}\end{equation} By the construction of $H_a$ in \cite{WHL}, $H_a=\widetilde{H}$ in a $L^\infty$-neighborhood of $S^1\cdot u$. We remark here that multiplying $H$ by a constant will not affect the corresponding critical modules, i.e., the corresponding critical orbits have isomorphic critical modules. Hence we can assume $H_a=H$ in a $L^\infty$-neighborhood of $S^1\cdot u$ and then the above conclusion. Hence $\Psi_a$ and $\widetilde{\Phi}$ coincide in a $L^\infty$-neighborhood of $S^1\cdot u$. Note also by Lemma 3.9 of \cite{Eke1}, the two finite dimensional approximations are actually the same. Hence we have \begin{eqnarray} && H_\ast(\widetilde{\Lambda}(u)\cap D_\iota N(u),\;
(\widetilde{\Lambda}(u)\setminus\{u\})\cap D_\iota N(u))^{{\bf Z}_m}\nonumber\\ &&\quad\cong H_\ast(\Lambda_a(u_a)\cap D_\iota N(u_a),\;
(\Lambda_a(u_a)\setminus\{u_a\})\cap D_\iota N(u_a))^{{\bf Z}_m}.\label{3.18}\end{eqnarray} Now the proposition follows from Proposition 3.5 and (\ref{3.16})-(\ref{3.18}).
\vrule height0.18cm width0.14cm $\,$
Now we can give:
{\bf Proof of Theorem 1.1.} By the assumption (F) at the beginning of Section 2, we denote by $\{(\tau_j, y_j)\}_{1\le j\le k}$ all the geometrically distinct closed characteristics on ${\Sigma}$, and by ${\gamma}_j\equiv \gamma_{y_j}$ the associated symplectic path of $(\tau_j,\,y_j)$ on ${\Sigma}$ for $1\le j\le k$. Then by Lemma 15.2.4 of \cite{Lon4}, there exist $P_j\in {\rm Sp}(6)$ and $M_j\in {\rm Sp}(4)$ such that \begin{equation} {\gamma}_j(\tau_j)=P_j^{-1}(N_1(1,\,1){\rm \diamond} M_j)P_j, \quad\forall\; 1\le j\le k,
\label{3.19}\end{equation} where recall $N_1(1,b)=\left(\matrix{1 & b\cr
0 & 1\cr}\right)$ for $b\in{\bf R}$.
Without loss of generality, by Theorem 1.3 of \cite{LoZ1} (cf. Theorem 15.5.2 of \cite{Lon4}), we may assume that $(\tau_1,y_1)$ has irrational mean index. Hence by Theorem 8.3.1 and Corollary 8.3.2 of \cite{Lon4}, $M_1\in {\rm Sp}(4)$ in (\ref{3.19}) can be connected to $R({\theta}_1){\rm \diamond} Q_1$ within ${\Omega}^0(M_1)$ for some $\frac{{\theta}_1}{\pi}\notin{\bf Q}$ and $Q_1\in {\rm Sp}(2)$, where $R({\theta})=\left(\matrix{\cos{\theta} & -\sin{\theta}\cr
\sin{\theta} & \cos{\theta}\cr}\right)$ for ${\theta}\in{\bf R}$. Here we use notations from Definition 1.8.5 and Theorem 1.8.10 of \cite{Lon4}. By Theorem 2.7, the following identity holds \begin{equation} \frac{\hat{\chi}(y_1)}{\hat{i}(y_1)} +
\sum_{2\le j\le k}\frac{\hat{\chi}(y_j)}{\hat{i}(y_j)}=\frac{1}{2}. \label{3.20}\end{equation} Now we have the following four cases according to the classification of basic norm forms (cf. Definition 1.8.9 of \cite{Lon4}).
{\bf Case 1.} {\it $Q_1=R({\theta}_2)$ with $\frac{{\theta}_2}{\pi}\notin{\bf Q}$ or $Q_1=D(\pm 2)\equiv\left(\matrix{ \pm 2 & 0\cr
0 & \pm\frac{1}{2}\cr}\right)$}.
In this case, by Theorems 8.1.6 and 8.1.7 of \cite{Lon4}, we have $\nu(y_1^m)\equiv 1$, i.e., $y_1^m$ is non-degenerate for all $m\in{\bf N}$. Hence it follows from (\ref{2.15}) that $\hat\chi(y_1)\neq 0$. Now (\ref{3.20}) implies that at least one of the $y_j$'s for $2\le j\le k$ must have irrational mean index. Hence the theorem holds.
{\bf Case 2.} {\it $Q_1=N_1(1,b)$ with $b=\pm 1,\, 0$}.
We have two subcases according to the value of $\hat\chi(y_1)$.
{\bf Subcase 2.1.} $\hat\chi(y_1)\neq 0$.
In this case, (\ref{3.20}) implies that at least one of the $y_j$'s for $2\le j\le k$ must have irrational mean index. Hence the theorem holds.
{\bf Subcase 2.2.} $\hat\chi(y_1)=0$.
Note that by Theorems 8.1.4 and 8.1.7 of \cite{Lon4} and our above Proposition 2.5, we have $K(y_1)=1$. Since $\nu(y_1)\le 3$, it follows from Proposition 2.6 and (\ref{2.14}): \begin{equation} 0=\hat\chi(y_1)=(-1)^{i(y_1)}(k_0(y_1)-k_1(y_1)+k_2(y_1)). \label{3.21}\end{equation} By (iv) of Proposition 2.6, at most one of $k_l(y_1)$ for $l=0,\,1,\,2$ can be nonzero. Then (\ref{3.21}) yields $k_l(y_1)=0$ for $l=0,\,1,\,2$. Hence it follows from Proposition 2.3 and Definition 2.4 that \begin{equation} C_{S^1,\; q}(\Psi_a, \;S^1\cdot u_1^p)=0,\qquad \forall p\in{\bf N},\; q\in{\bf Z},
\label{3.22}\end{equation} where we denote by $u_1$ the critical point of $\Psi_a$ corresponding to $(\tau_1,\, y_1)$. In other words, $u_1^m$ is homologically invisible for all $m\in{\bf N}$.
By Propositions 3.5 and 3.6, we can replace the term {\it infinite variationally visible } in Definition 1.4 of \cite{LoZ1} (cf. Definition 15.3.3 of \cite{Lon4}) by {\it homologically visible}, and it is easy to check that all the results in \cite{LoZ1} remain true under this change. Hence by Theorem 1.3 of \cite{LoZ1} (cf. Theorem 15.5.2 of \cite{Lon4}), at least one of the $y_j$'s for $2\le j\le k$ must have irrational mean index, i.e., we can forget $y_1$ and consider only $y_j$'s for $2\le j\le k$, then apply that theorem. This proves our theorem.
{\bf Case 3.} $Q_1=N_1(-1,\, 1)$.
In this case, by Theorems 8.1.4, 8.1.5 and 8.1.7 of \cite{Lon4}, we have $$ i(y_1,\,m)=mi(y_1,\, 1)+2E\left(\frac{m\theta_1}{2\pi}\right)-2, \quad \nu(y_1,\, m)=1+\frac{1+(-1)^m}{2},\qquad \forall m\in{\bf N}, $$ with $i(y_1,1)\in 2{\bf Z}+1$. Hence $K(y_1)=2$ by Proposition 2.5. Because $y_1$ is non-degenerate, we have $k_l(y_1)={\delta}_0^l$ for all $l\in{\bf Z}$ by (\ref{2.11}), (\ref{2.13}) and Definition 2.4. By Theorem 3.2, we have $i(y_1)=i(y_1,1)-3\in 2{\bf Z}$ and $i(y_1^2)-i(y_1)=i(y_1,2)-i(y_1,1)\in 2{\bf Z}+1$. Hence $k_0(y_1^2)=0$ by (v) of Proposition 2.6. Because $\nu(y_1^2)=2$, we have $k_l(y_1^2)=0$ for $l\ge 2$. Then (\ref{2.14}) implies $$ \hat\chi(y_1)=\frac{1+k_1(y_1^2)}{2}\neq 0. $$ Now (\ref{3.20}) implies that at least one of the $y_j$'s for $2\le j\le k$ must have irrational mean index. Hence the theorem holds.
{\bf Case 4.} {\it $Q_1=N_1(-1,\, b)$ with $b=0,\, -1$ or $Q_1=R(\theta_2)$ with $\frac{\theta_2}{2\pi}=\frac{L}{N}\in{\bf Q}\cap(0,\,1)$ with $N>1$ and $(L,\,N)=1$}.
Note first that if $Q_1=N_1(-1,\, b)$ with $b=0,\, -1$, then Theorems 8.1.5 and 8.1.7 of \cite{Lon4} imply that their index iteration formulae coincide with that of a rotational matrix $R({\theta})$ with ${\theta}=\pi$. Hence in the following we shall only consider the case $Q_1=R({\theta}_2)$ with ${\theta}_2/\pi\in (0,\,2)\cap {\bf Q}$. The same argument also shows that the theorem is true for $Q_1=N_1(-1,-1)$.
By Theorems 8.1.4 and 8.1.7 of \cite{Lon4}, we have \begin{eqnarray} i(y_1,m)&=& m(i(y_1,1)-1) + 2E\left(\frac{m\theta_1}{2\pi}\right)
+ 2E\left(\frac{m\theta_2}{2\pi}\right)-3,\label{3.23}\\ \nu(y_1,m)&=&3-2\varphi\left(\frac{m\theta_2}{2\pi}\right),
\label{3.24}\end{eqnarray} with $i(y_1,\,1)\in 2{\bf Z}+1$ and all $m\in{\bf N}$. By Proposition 2.5, we have $K(y_1)=N$. Note that because $y_1^m$ is non-degenerate for $1\le m\le N-1$, $k_l(y_1^m)={\delta}_0^l$ holds for $1\le m\le N-1$ by (\ref{2.11}), (\ref{2.13}) and Definition 2.4. By Theorem 3.2, we have $i(y_1)=i(y_1,\,1)-3\in 2{\bf Z}$. Then (\ref{2.14}) implies \begin{equation} \hat\chi(y_1)=\frac{N-1+k_0(y_1^N)-k_1(y_1^N)+k_2(y_1^N)}{N}. \label{3.25}\end{equation} This follows from $\nu(y_1^m)\le 3$ for all $m\in{\bf N}$.
We have two subcases according to the value of $\hat\chi(y_1)$.
{\bf Subcase 4.1.} $\hat\chi(y_1)\neq 0$.
In this subcase, (\ref{3.20}) implies that at least one of the $y_j$'s for $2\le j\le k$ must have irrational mean index. Hence the theorem holds.
{\bf Subcase 4.2.} $\hat\chi(y_1)=0$.
In this subcase, it follows from (\ref{3.25}) and (iv) of Proposition 2.6 that \begin{equation} k_1(y_1^N)=N-1>0. \label{3.26}\end{equation}
Using the common index jump theorem (Theorems 4.3 and 4.4 of \cite{LoZ1}, Theorems 11.2.1 and 11.2.2 of \cite{Lon4}), we obtain some $(T, m_1,\ldots,m_k)\in{\bf N}^{k+1}$ such that $\frac{m_1\theta_2}{\pi}\in{\bf Z}$ (cf. (11.2.18) of \cite{Lon4}) and the following hold by (11.2.6), (11.2.7) and (11.2.26) of \cite{Lon4}: \begin{eqnarray} i(y_j,\, 2m_j) &\ge& 2T-\frac{e(\gamma_j(\tau_j))}{2}, \label{3.27}\\ i(y_j,\, 2m_j)+\nu(y_j,\, 2m_j) &\le& 2T+\frac{e(\gamma_j(\tau_j))}{2}-1, \label{3.28}\\ i(y_j,\, 2m_j+1) &=& 2T+i(y_j,\,1). \label{3.29}\\ i(y_j,\, 2m_j-1)+\nu(y_j,\, 2m_j-1)
&=& 2T-(i(y_j,\,1)+2S^+_{\gamma_j(\tau_j)}(1)-\nu(y_j, 1)). \label{3.30} \end{eqnarray}
By P. 340 of \cite{Lon4}, we have \begin{eqnarray} && 2S^+_{\gamma_j(\tau_j)}(1)-\nu(y_j,\,1) \nonumber\\ &&\qquad = 2S^+_{N_1(1,\,1)}(1)-\nu_1(N_1(1,\,1))
+2S^+_{M_j}(1)-\nu_1(M_j)\nonumber\\ &&\qquad = 1 + 2S^+_{M_j}(1) - \nu_1(M_j)\nonumber\\ &&\qquad \ge -1, \qquad 1\le j\le k.\label{3.31}\end{eqnarray} In the last inequality, we have used the fact that the worst case for $2S^+_{M_j}(1) - \nu_1(M_j)$ happens when $M_j=N_1(1,\,-1)^{\diamond 2}$ which gives the lower bound $-2$.
By Corollary 15.1.4 of \cite{Lon4}, we have $i(y_j,\,1)\ge 3$ for $1\le j\le k$. Note that $e(\gamma_j(\tau_j))\le6$ for $1\le j\le k$. Hence Theorem 10.2.4 of \cite{Lon4} yields \begin{eqnarray} i(y_j,\, m)+\nu(y_j,\, m) &\le& i(y_j, m+1)-i(y_j, 1)+\frac{e(\gamma_j(\tau_j))}{2}-1\nonumber\\ &\le& i(y_j, m+1)-1. \quad \forall m\in{\bf N},\;1\le j\le k.\label{3.32}\end{eqnarray} Specially, we have $$ i(y_j,\, m)<i(y_j,\, m+1),\qquad \forall m\in{\bf N},\;1\le j\le k. $$ Now (\ref{3.27})-(\ref{3.30}) become \begin{eqnarray} i(y_j,\, 2m_j) &\ge& 2T-3, \label{3.33}\\ i(y_j,\, 2m_j)+\nu(y_j,\, 2m_j)-1 &\le& 2T+1, \label{3.34}\\ i(y_j,\, 2m_j+m) &\ge& 2T+3, \quad\forall\; m\ge 1, \label{3.35}\\ i(y_j,\, 2m_j-m)+\nu(y_j,\, 2m_j-m)-1 &\le& 2T-3,\quad\forall\; m\ge 1,\label{3.36} \end{eqnarray} where $1\le j\le k$. By Proposition 2.3, we have \begin{eqnarray} C_{S^1,\; q}(\Psi_a, \;S^1\cdot u_1^{2m_1})= {\delta}_{i(u_1^{2m_1})+1}^q{\bf Q}^{k_1(y_1^N)} ={\delta}_{i(u_1^{2m_1})+1}^q{\bf Q}^{N-1},\label{3.37} \end{eqnarray} Note that by Theorem 3.2 \begin{equation} i(y_j^m)=i(y_j, m)-3,\qquad \forall m\in{\bf N},\quad 1\le j\le k.\label{3.38}\end{equation} Hence (\ref{3.23}) implies that $i(y_1^m)$ is even for all $m\in{\bf N}$. This together with (\ref{3.35})-(\ref{3.38}) and Proposition 2.3 yield \begin{eqnarray} &&C_{S^1,\; 2T-2}(\Psi_a, \;S^1\cdot u_1^{m})=0,\quad \forall m\in{\bf N},\label{3.39}\\ &&C_{S^1,\; 2T-4}(\Psi_a, \;S^1\cdot u_1^{m})=0,\quad \forall m\in{\bf N},\label{3.40}\\ &&C_{S^1,\; 2T-2}(\Psi_a, \;S^1\cdot u_j^{m})=0,\quad \forall m\neq 2m_j,\;2\le j\le k.\label{3.41}\\ &&C_{S^1,\; 2T-4}(\Psi_a, \;S^1\cdot u_j^{m})=0,\quad \forall m\neq 2m_j,\;2\le j\le k.\label{3.42} \end{eqnarray} In fact, by (\ref{3.35}), (\ref{3.36}) and (\ref{3.38}) for $1\le j\le k$, we have $i(u_j^m)=i(y_j^m)\ge 2T$ for all $m>2m_j$ and $i(u_j^m)+\nu(u_j^m)-1=i(y_j^m)+\nu(y_j^m)-1\le 2T-6$ for all $m<2m_j$. Thus (\ref{3.41})-(\ref{3.42}) hold and (\ref{3.39})-(\ref{3.40}) hold for $m\neq 2m_1$ by Proposition 2.3. Since $i(y_1^{2m_1})$ is even, by (\ref{3.37}), (\ref{3.39})-(\ref{3.40}) also hold for $m = 2m_1$.
Thus by Propositions 3.5 and 3.6 we can find $p,q\in \{2,\ldots,k\}$ such that \begin{eqnarray} &&\Phi^\prime(u_p^{2m_p})=0,\quad \Phi(u_p^{2m_p})=c_{T-1}, \qquad C_{S^1,\; 2T-4}(\Psi_a, \;S^1\cdot u_p^{2m_p})\neq 0,\label{3.43}\\ &&\Phi^\prime(u_q^{2m_q})=0,\quad \Phi(u_q^{2m_q})=c_{T}, \quad\qquad C_{S^1,\; 2T-2}(\Psi_a, \;S^1\cdot u_q^{2m_q})\neq 0,\label{3.44} \end{eqnarray} where we denote also by $u_p^{2m_p}$ and $u_q^{2m_q}$ the corresponding critical points of $\Phi$ and which will not be confused.
Note that by assumption (F) and Proposition 3.3, we have $c_{T-1}<c_{T}$. Hence $p\neq q$ by (\ref{3.43}) and (\ref{3.44}). Then the proof of Lemma 3.1 in \cite{LoZ1}(cf. lemma 15.3.5 of \cite{Lon4}) yields \begin{equation} \hat i(y_p, 2m_p)<\hat i(y_q, 2m_q).\label{3.45}\end{equation} Now if both $\hat i(y_p)\in{\bf Q}$ and $\hat i(y_q)\in{\bf Q}$ hold, then the proof of Theorem 5.3 in \cite{LoZ1}(cf. Theorem 15.5.2 of \cite{Lon4}) yields $$ \hat i(y_p, 2m_p)=\hat{i}(y_q, 2m_q). $$ Note that we may choose $T$ firstly such that $\frac{T}{M\hat i(y_j)}\in{\bf N}$ hold for all $\hat i(y_j)\in{\bf Q}$ then use the proof of Theorem 5.3 in \cite{LoZ1}. Here $M$ is the least integer in ${\bf N}$ that satisfies $\frac{M\theta}{\pi}\in{\bf Z}$, whenever $e^{\sqrt{-1}\theta}\in\sigma(\gamma_j(\tau_j))$ and $\frac{\theta}{\pi}\in{\bf Q}$ for some $1\le j\le k$. Hence either $\hat i(y_p)\notin{\bf Q}$ or $\hat i(y_q)\notin{\bf Q}$ holds. This together with $\hat i(y_1)\notin{\bf Q}$ and $p, q\neq 1$ proves the theorem.
\vrule height0.18cm width0.14cm $\,$
{\bf Proof of Theorem 1.2.} We denote by $\{(\tau_j, y_j)\}_{1\le j\le 3}$ the three geometrically distinct closed characteristics on ${\Sigma}$, and by ${\gamma}_j\equiv \gamma_{y_j}$ the associated symplectic path of $(\tau_j,\,y_j)$ on ${\Sigma}$ for $1\le j\le 3$. Then as in the proof of Theorem 1.1, there exist $P_j\in {\rm Sp}(6)$ and $M_j\in {\rm Sp}(4)$ such that \begin{equation} {\gamma}_j(\tau_j)=P_j^{-1}(N_1(1,\,1){\rm \diamond} M_j)P_j, \quad\forall\; 1\le j\le 3.
\label{3.46}\end{equation}
As in P.356 of \cite{LoZ1}, if there is no $(\tau_j, y_j)$ with $M_j=N_1(1,\,-1)^{\diamond 2}$ and $i(y_j, 1)=3$ in $\mathcal{V}_\infty(\Sigma, \alpha)$, then $\varrho_n({\Sigma})=3$. Hence we can use Theorem 1.4 of \cite{LoZ1} (Theorem 15.5.2 of \cite{Lon4}) to obtain the existence of at least two elliptic closed characteristics. This proves the theorem.
It remains to show that if there exists a $(\tau_j, y_j)$ with $M_j=N_1(1,\,-1)^{\diamond 2}$ and $i(y_j, 1)=3$ in $\mathcal{V}_\infty(\Sigma, \alpha)$, we have at least two elliptic closed characteristics. We may assume $M_1=N_1(1,\,-1)^{\diamond 2}$ and $i(y_1,1)=3$ without loss of generality. Note that $(\tau_1,y_1)$ has rational mean index by Theorem 8.3.1 of {\cite{Lon4} and Theorem 3.2.
By Theorem 1.3 of \cite{LoZ1}, we may assume that $(\tau_2,y_2)$ has irrational mean index. Hence by Theorem 8.3.1 and Corollary 8.3.2 of \cite{Lon4}, $M_2\in {\rm Sp}(4)$ in (\ref{3.46}) can be connected to $R({\theta}_2){\rm \diamond} Q_2$ within ${\Omega}^0(M_2)$ for some $\frac{{\theta}_2}{\pi}\in{\bf R}\setminus{\bf Q}$ and $Q_2\in {\rm Sp}(2)$, where $R({\theta})=\left(\matrix{\cos{\theta} & -\sin{\theta}\cr
\sin{\theta} & \cos{\theta}\cr}\right)$ for ${\theta}\in{\bf R}$. Here we use notations from Definition 1.8.5 and Theorem 1.8.10 of \cite{Lon4}. By Theorem 2.7, the following identity holds \begin{equation} \frac{\hat{\chi}(y_1)}{\hat{i}(y_1)} +\frac{\hat{\chi}(y_2)}{\hat{i}(y_2)} +\frac{\hat{\chi}(y_3)}{\hat{i}(y_3)}=\frac{1}{2}. \label{3.47}\end{equation}
Now if $Q_2$ is not hyperbolic, then both $(\tau_1, y_1)$ and $(\tau_2, y_2)$ are elliptic, so the theorem holds.
Hence it remains to consider the case that $Q_2$ is hyperbolic. Clearly $(\tau_2, y_2)$ is non-degenerate, then it follows from (\ref{2.15}) that $\hat\chi(y_2)\neq 0$. Hence (\ref{3.47}) implies that $\hat i(y_3)\in{\bf R}\setminus{\bf Q}$. Now by Theorem 8.3.1 and Corollary 8.3.2 of \cite{Lon4}, $M_3\in {\rm Sp}(4)$ in (\ref{3.46}) can be connected to $R({\theta}_3){\rm \diamond} Q_3$ within ${\Omega}^0(M_3)$ for some $\frac{{\theta}_3}{\pi}\in{\bf R}\setminus{\bf Q}$ and $Q_3\in {\rm Sp}(2)$. By the same reason as above, it suffices to consider the case that $Q_3$ is hyperbolic.
Combining all the above, the only case we need to kick off is that \begin{equation} M_1=N_1(1,\,-1)^{\diamond 2},\quad i(y_1, 1)=3,\quad M_2=R(\theta_2)\diamond Q_2,\quad M_3=R(\theta_3)\diamond Q_3, \label{3.48}\end{equation} where both $Q_2$ and $Q_3$ are hyperbolic. Hence by Theorem 8.3.1 of \cite{Lon4} and Theorem 3.2, we have \begin{eqnarray} i(y_1^m)&=&m(i(y_1, 1)+1)-4=4m-4,\;\nu(y_1^m)=3,\quad \forall m\in{\bf N},\label{3.49}\\ i(y_j^m)&=&m(i(y_j)+3)+2E\left(\frac{m\theta_j}{2\pi}\right)-5,\;\nu(y_j^m)=1,
\quad \forall m\in{\bf N},\;j=2,3.\label{3.50} \end{eqnarray} By Proposition 2.5, we have $K(y_1)=1$. Note that $i(y_1)=i(y_1, 1)-3=0$ by Theorem 3.2. Hence Proposition 2.6, (\ref{2.14}) and (\ref{2.15}) imply \begin{eqnarray} \hat\chi (y_1)&\le& 1,\qquad \hat\chi (y_1)\in{\bf Z},\label{3.51}\\ \hat\chi(y_j)
&=& \left\{\matrix{-1, & {\rm if\;\;} i(y_j)\in 2{\bf N}_0+1, \cr
\frac{1}{2}, & {\rm if\;\;} i(y_j)\in 2{\bf N}_0,\cr}\right.\quad j=1,2.\label{3.52} \end{eqnarray} By (\ref{3.49}) and (\ref{3.50}), we have \begin{eqnarray} \hat i(y_1)&=&4,\label{3.53}\\ \hat i(y_j)&=&i(y_j)+3+\frac{\theta_j}{\pi}>3,\quad j=2,3.\label{3.54} \end{eqnarray} By (\ref{3.51})-(\ref{3.54}), in order to make (\ref{3.47}) hold, we must have \begin{eqnarray} \hat\chi (y_1)&=&1,\label{3.55}\\ i(y_j)&\in&2{\bf N}_0,\quad j=2,3.\label{3.56} \end{eqnarray} In fact, by (\ref{3.52}) and (\ref{3.54}), we have $$\frac{\hat{\chi}(y_2)}{\hat{i}(y_2)} +\frac{\hat{\chi}(y_3)}{\hat{i}(y_3)}<\frac{1}{6}+\frac{1}{6}<\frac{1}{2}.$$ Thus to make (\ref{3.47}) hold, we must have $\frac{\hat{\chi}(y_1)}{\hat{i}(y_1)}>0$. Hence (\ref{3.55}) follows from (\ref{3.51}). Now if $i(y_2)\in2{\bf N}_0+1$ or $i(y_3)\in2{\bf N}_0+1$ holds, then by (\ref{3.52}), we have $$ \frac{\hat{\chi}(y_1)}{\hat{i}(y_1)} +\frac{\hat{\chi}(y_2)}{\hat{i}(y_2)} +\frac{\hat{\chi}(y_3)}{\hat{i}(y_3)}<\frac{1}{4}+\frac{1}{6}<\frac{1}{2}. $$ Hence (\ref{3.56}) must hold.
By (\ref{2.14}), (\ref{3.49}) and (\ref{3.55}), we have $1=\hat\chi (y_1)=k_0(y_1)-k_1(y_1)+k_2(y_1)$. Since $\nu(y_1)=3$, by Proposition 2.6, only one of $k_0(y_1),\,k_1(y_1),\,k_2(y_1)$ can be nonzero. Hence we obtain \begin{equation} k_1(y_1)=0,\quad k_0(y_1)+k_2(y_1)=1,\label{3.57}\end{equation} By Proposition 2.3, we have \begin{equation} C_{S^1,\; q}(\Psi_a, \;S^1\cdot u_j^p)=0,\quad \forall p\in{\bf N},\;q\in2{\bf Z}+1,
\;1\le j\le 3.\label{3.58}\end{equation} In fact, by (\ref{3.49}), we have $i(y_1^m)\in2{\bf N}$ for all $m\in{\bf N}$. Thus (\ref{3.58}) holds for $j=1$ by (\ref{2.11}), (\ref{3.57}) and Definition 2.4. By (\ref{3.50}) and (\ref{3.56}), for $j=2, 3$, we have $i(y_j^m)\in2{\bf N}$ when $m\in2{\bf N}_0+1$ and $i(y_j^m)\in2{\bf N}_0+1$ when $m\in2{\bf N}$. In particular, all $y_j^m$ are non-degenerate for $m\in{\bf N}$ and $j=2, 3$. Thus (\ref{3.58}) holds for $j=2, 3$ by (\ref{2.13}).
Note that (\ref{3.58}) implies \begin{equation} M_q=0,\quad \forall q\in2{\bf Z}+1.\label{3.59}\end{equation} Together with the Morse inequality Theorem 2.8, it yields $$ -M_{2k}-\cdots -M_2 -M_0 \ge -b_{2k}-\cdots - b_2 -b_0. $$ Thus together with the Morse inequality again, it yields $$ b_{2k}+\cdots + b_2 + b_0\ge M_{2k}+\cdots +M_2 +M_0
\ge b_{2k}+\cdots + b_2 + b_0, $$ for all $k\ge 0$. Therefore we obtain \begin{equation} M_q=b_q, \quad \forall q\in{\bf Z}. \label{3.60}\end{equation}
By (\ref{3.57}), we have two cases according to the values of $k_l(y_1)$s.
{\bf Case 1.} $k_0(y_1)=1$ and $k_2(y_1)=0$.
In this case, by Propositions 2.3, 2.5 and Definition 2.4, we have \begin{equation} \dim C_{S^1,\; q}(\Psi_a, \;S^1\cdot u_1^m)=\delta_{4m-4}^q,\quad
\forall m\in{\bf N},\;q\in{\bf Z}.\label{3.61}\end{equation} Then by (\ref{3.60}) and (\ref{2.21}), we must have \begin{equation} C_{S^1,\; 4m-4}(\Psi_a, \;S^1\cdot u_j^p)=0,\quad \forall p,\,m\in{\bf N},\; \;j=2,3.\label{3.62} \end{equation} By (\ref{3.60}) and (\ref{2.21}) again, $M_2=b_2=1$ implies \begin{equation} C\equiv C_{S^1,\; 2}(\Psi_a, \;S^1\cdot u_j^p)={\bf Q},\label{3.63}\end{equation} for some $p\in{\bf N}$ and $j=2$ or $3$. If $p\ge 2$, by (\ref{3.50}), we have \begin{equation} i(y_j^p)\ge 3p+2E\left(\frac{p\theta_j}{2\pi}\right)-5\ge 3.\label{3.64}\end{equation} Thus $C=0$ by Proposition 2.3. Hence $p=1$. Without loss of generality, we assume $j=2$. Then by Proposition 2.3 and (\ref{3.63}), we have \begin{equation} i(y_2)=2.\label{3.65}\end{equation} Then by (\ref{3.50}), we have \begin{equation} i(y_2^m)\ge 7,\quad\forall m\ge 2.\label{3.66}\end{equation} By (\ref{3.60}) and (\ref{2.21}), $M_6=b_6=1$ implies \begin{equation} C_{S^1,\; 6}(\Psi_a, \;S^1\cdot u_j^p)={\bf Q},\label{3.67}\end{equation} for some $p\in{\bf N}$ and $j=2$ or $3$. By (\ref{3.65}) and (\ref{3.66}), we have $j\neq 2$, i.e., $j=3$. We must have $p=1$. In fact, by (\ref{3.61}) and (\ref{3.63}), $y_1^m$ and $y_2^n$ already contribute a $1$ to $M_q$ for $q=0,\,2,\,4$. Hence by (\ref{2.21}), (\ref{3.60}) and (\ref{3.56}), we have $i(y_3)\ge 6$, and then $i(y_3^m)\ge 15$ by (\ref{3.50}) for $m\ge 2$. Thus $p=1$ follows from Proposition 2.3. Now we have \begin{equation} i(y_3)=6.\label{3.68}\end{equation} Hence by (\ref{3.53}) and (\ref{3.55}) for $y_1$, (\ref{3.50}), (\ref{3.52}), (\ref{3.65}) and (\ref{3.68}) for $y_2$ and $y_3$, we have $$ \frac{\hat{\chi}(y_1)}{\hat{i}(y_1)} +\frac{\hat{\chi}(y_2)}{\hat{i}(y_2)} +\frac{\hat{\chi}(y_3)}{\hat{i}(y_3)} =\frac{1}{4}+\frac{1}{2(5+\frac{\theta_2}{\pi})}+\frac{1}{2(9+\frac{\theta_3}{\pi})} <\frac{1}{2}. $$ This contradicts (\ref{3.47}) and proves Case 1.
{\bf Case 2.} $k_0(y_1)=0$ and $k_2(y_1)=1$.
The study for this case is similar to that of Case 1. Thus we are rather sketch here.
In this case, by Proposition 2.3 and Definition 2.4, we have \begin{equation} \dim C_{S^1,\; q}(\Psi_a, \;S^1\cdot u_1^m)=\delta_{4m-2}^q,\quad
\forall m\in{\bf N},\;q\in{\bf Z}.\label{3.69}\end{equation} Then by (\ref{3.60}) and (\ref{2.21}), we must have \begin{eqnarray} C_{S^1,\; 4m-2}(\Psi_a, \;S^1\cdot u_j^p)=0,\quad \forall p,\,m\in{\bf N},\; \;j=2,3.\label{3.70} \end{eqnarray} By (\ref{3.69}), (\ref{3.60}) and (\ref{2.21}), $M_0=b_0=1$ implies \begin{eqnarray} C_{S^1,\; 0}(\Psi_a, \;S^1\cdot u_j^p)={\bf Q},\label{3.71} \end{eqnarray} for some $p\in{\bf N}$ and $j=2$ or $3$. By (\ref{3.64}), we have $p=1$. Without loss of generality, we assume $j=2$. Then by Proposition 2.3 and (\ref{3.50}), we have \begin{equation} i(y_2)=0,\qquad i(y_2^m)\ge 6,\quad\forall m\ge 3.\label{3.72}\end{equation} By (\ref{3.60}) and (\ref{2.21}), $M_4=b_4=1$ implies \begin{equation} C_{S^1,\; 4}(\Psi_a, \;S^1\cdot u_j^p)={\bf Q},\label{3.73}\end{equation} for some $p\in{\bf N}$ and $j=2$ or $3$. By (\ref{3.69}) and (\ref{3.72}), as in the verification of (\ref{3.68}), we have $j=3$ and $p=1$. Then by Proposition 2.3, we have \begin{equation} i(y_3)=4.\label{3.74}\end{equation} Hence by (\ref{3.53}) and (\ref{3.55}) for $y_1$, (\ref{3.50}), (\ref{3.52}), (\ref{3.72}) and (\ref{3.74}) for $y_2$ and $y_3$, we have $$ \frac{\hat{\chi}(y_1)}{\hat{i}(y_1)} +\frac{\hat{\chi}(y_2)}{\hat{i}(y_2)} +\frac{\hat{\chi}(y_3)}{\hat{i}(y_3)} =\frac{1}{4}+\frac{1}{2(3+\frac{\theta_2}{\pi})}+\frac{1}{2(7+\frac{\theta_3}{\pi})} <\frac{1}{2}. $$ This contradicts (\ref{3.47}) and proves Case 2 and then the whole theorem.
\vrule height0.18cm width0.14cm $\,$
\noindent {\bf Acknowledgements.} I would like to sincerely thank my Ph. D. thesis advisor, Professor Yiming Long, for introducing me to Hamiltonian dynamics and for his valuable help and encouragement during the writing of this paper. I would like to say that how enjoyable it is to work with him. I would like to sincerely thank the referee for his/her careful reading and valuable comments and suggestions.
\end{document} | arXiv |
Solving recurrence relation with minimum and factorial
I need to solve the following recurrence relation, where $T(n,m)$ is defined over $\Bbb N_+\times\Bbb N_+$.
$T(n,m)=\begin{cases} 1, & n=1\text{ or }m\leq 2(n-1)!\\ \min\limits_{a,b,c\geq 1,\ c\le n-1\\a\leq c!,\ b\leq(n-c)!}{T(c,a)+T(n-c,b)+T(n,m-ab)}, & \text{else.} \end{cases}$
Note: This question is highly related to my previous question here, since $ab\leq\max\limits_{1\leq c\leq n-1}{c!(n-c)!}=(n-1)!$
I guess that the minimum is obtained at $c=\lceil n/2\rceil,a=c!,b=(n-c)!$, but I don't know how to prove it.
The first 10 values of the first n's are:
$T(1,*)=1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\dots\\ T(2,*)=1, 1, 3, 5, 7, 9, 11, 13, 15, 17,\dots\\ T(3,*)=1, 1, 1, 1, 3, 3, 5, 5, 7, 7,\dots\\ T(4,*)=1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\dots$
Experiments show that
$$T(n,m)=\begin{cases}1 & n=1 \text{ or }f(n,m)\leq 0,\\ 3+2\Big\lfloor\frac {f(n,m)-1}{g(n)}\Big\rfloor & \text{otherwise},\end{cases}$$
for $f(n,m)=m-2(n-1)!$, and for some $g$ whose first values are: 1, 1, 2, 4, 12, 48, 240. I guess that $$g(n)=\begin{cases}1 & \text{if }n<3,\\ 2(n-2)! & \text{otherwise.}\end{cases}$$
recurrence-relation factorial
Dudi FridDudi Frid
$\begingroup$ Can you explain where all these questions are coming from? Are we solving some exercise sheet? Writing your thesis? $\endgroup$ – Yuval Filmus Jun 9 '19 at 8:23
$\begingroup$ It's neither an exercise sheet, nor my thesis: I am working on a rather complicated article in combinatorics for a while, since I don't have co-authors in my article I post here some stuff to get some help and to assure that my conclusions are correct $\endgroup$ – Dudi Frid Jun 9 '19 at 8:30
$\begingroup$ After getting all this help, you will be having co-authors, namely people who helped you write the article. $\endgroup$ – Yuval Filmus Jun 9 '19 at 9:24
$\begingroup$ Assume that $T(n,m)$ is defined on $\Bbb Z_{\ge1}\times\Bbb Z_{\ge1}$ as before. What is the value of $T(1,3)$? Since $3\not\le2(1-1)!$, we cannot apply the first rule. Since there is no $c$ such that $c\ge1$ and $c<1-1$, so $T(1,3)$ is the min of an empty set to be infinity. Is $T(1,3)$ infinity? $\endgroup$ – John L. Jun 12 '19 at 8:55
$\begingroup$ It looks like $T(1,m)$ for $m\ge 3$ can be set to infinity or any value that is no less than 1 without affecting other values. $\endgroup$ – John L. Jun 12 '19 at 10:52
It is not true that the minimum can always be obtained at $c=\lceil n/2\rceil,a=c!,b=(n-c)!$. Here is the smallest counterexample, $$T(5,49) = T(1,1) + T(4,1) + T(5,48) = 3 \not=5=T(3,6)+T(2,2)+T(5,12).$$ Instead, the minimum can always be obtained at $c=1$, $a=1$, $b=2(n-2)!.$
The following neat formula conjectured in the question is correct. $$T(n,m)=\begin{cases} 1 & n=1 \text{ or }f(n,m)\leq 0,\\ 3+2\Big\lfloor\dfrac {f(n,m)-1}{g(n)}\Big\rfloor &\text{otherwise}, \end{cases}$$ where $f(n,m)=m-2(n-1)!$ and $g(n)=\begin{cases}1 &\text{if }n<3,\\ 2(n-2)! &\text{otherwise.}\end{cases}$
$T(n,m)=1,$ if $n=1$ or $m\le 2(n-1)!$.
$T(n,m)$ is nondecreasing with respect to $m$.
$T(n,m)\ge3$ if $m\gt 2(n-1)!$.
$T(2,m)=\begin{cases} 1 & m\le2,\\ 2m-3 & \text{otherwise}.\end{cases}$
If $n=3,4,5$, then the conjectured formula is correct.
The following proposition $p(n,j)$ is true for all $n\ge3$ and $j\ge0$. $$\text{If $n\ge3$ and $j=\lfloor\frac{f(n,m)-1}{2(n-2)!}\rfloor$ for some $j\ge0$ and $m$, then $T(n,m)=3+2j.$}$$
The conjectured formula is the same as the combination of observations 1, 4, and 6.
All observations above except observation 6 can be proved easily, although observation 5 might take a while to sort out case by case.
Let $S(n,m,a,b,c)=T(c,a)+T(n-c,b)+T(n, m-ab)$. Then for $m\gt 2(n-1)!$, $T(n,m)= \min\limits_{a,b,c\geq 1,\ c\le\frac n2\\a\leq c!,\ b\leq(n-c)!}S(n,m,a,b,c).$ The reason why we can replace the condition $c\le n-1$ by $c\le \frac n2$ is that $(n,m,a,b,c)=(n,m,b, a, n-c)$.
Proof of observation 6 by well-founded induction
Here are the steps. Steps 1 and 2 are the induction bases while step 3 is the induction step.
Suppose $j=\lfloor\frac{f(n,m)-1}{2(n-2)!}\rfloor=0$, i.e., $2(n-1)!\lt m\le2(n-1)!+2(n-2)!$. Since $m\gt 2(n-1)$, $$T(n,m)\ge3.$$ On the other hand, $$T(n,m)\le S(n,m,1,2(n-2)!,1)=1+1+1=3.$$ So $T(n,m)=3$, i.e., $p(n,j)$ is true when $j=0$.
Observation 5 says that $p(n,j)$ is true for $n=3,4,5$.
Let $n\ge6$ and $j\ge1$. As induction hypothesis, suppose $p(x,y)$ is true for all $x\le n$ or $x=n$ and $y\lt j$, i.e., $T(x,y)=3+2\lfloor\frac{f(x,y)-1}{2(x-2)!}\rfloor$, which implies, by the definition of $\lfloor\cdot\rfloor$, $$(x-2)!(T(x,y)+2x-5)<y\le (x-2)!(T(x,y)+2x-3).$$
We will prove that $p(n,j)$ is true, i.e., $T(n,j)\le 3+2j$.
Let $j=\lfloor\frac{f(n,m)-1}{2(n-1)!}\rfloor$ for some $m$.
Proof for $T(n,m)\le 3+2j$
By induction hypothesis, we know that $T(n, m-2(n-2)!)=3+2(j-1)$. Hence, $$T(n,m)\le S(n,m,1,2(n-2)!,1)=1 + 1 + T(n, m-2(n-2)!)=3+2j.$$
Proof for $T(n,m)\ge 3+2j$
Because $T(n,m)$ is nondecreasing with respect to $m$ (observation 2), we will assume $m=2(n-1)!+2(n-2)!j+1$, the smallest value possible such that $j=\lfloor\frac{f(n,m)-1}{2(n-2)!}\rfloor.$
We will prove that $S(n,m,a,b,c)\ge 3+2j$ for all valid choices of $(a,b,c)$. The case when $c=1$ or $c=2$ is relatively easy. From now on assume $3\le c\le \frac n2$.
Let $A=T(c,a)$ and $B=T(n-c,b)$. The case when $A=1$ or $B=1$ is much easier to prove. Now assume $A,B\ge2$.
Since $c<n$, we have $a\le(c-2)!(A+2c-3)$.
Since $n-c<n$, we have $b\le(n-c-2)!(B+2n-2c-3)$.
Since $ab\ge1$, we have $\lfloor\frac{f(n,m-ab)-1}{2(n-2)!}\rfloor<j$, so we can apply induction hypothesis to yield the first equality below.
Since $T(n,m)$ is nondecreasing w.r.t $m$, $$\begin{aligned} &S(n,m,a,b,c)\\ &\ge A+B+T(n,m-(c-2)!(A+2c-3)\,(n-c-2)!(B+2n-2c-3))\\ &= A+B+3+ 2\lfloor\frac{f(n,m-(c-2)!(A+2c-3)(n-c-2)!(B+2n-2c-3))-1}{2(n-2)!}\rfloor\\ &= 3+2j+ A+B +2\lfloor\frac{-(c-2)!(A+2c-3)(n-c-2)!(B+2n-2c-3))}{2(n-2)!}\rfloor\\ &\gt 3+2j+ \frac{(c-2)!(A+2c-3)(n-c-2)!(B+2n-2c-3)}{(n-2)!}(h(c,A,B)-1) \\ \end{aligned}$$
where $$h(n,A,B,c)=\frac{(A+B-2)(n-2)!} {(c-2)!(A+2c-3)(n-c-2)!(B+2n-2c-3)}.$$ Since $n-c<n$, induction hypothesis yields the second equality below. $$\begin{aligned} B&=T(n-c,b)\le T(n-c, (n-c)!)\\ &=3+2(\frac{(n-c)(n-c-1)}2-(n-c-1)-1)\\ &=(n-c)(n-c-3)+3. \end{aligned}$$ Since $n\ge6$ and $c\ge3$, $(n-2)!\ge (n-2)(n-3)(n-4)(c-2)!(n-c-2)!.$
Since $n\ge6$, $c\le \frac n2$ and $A,B\ge2$, $(n-2)(A+B-2)\gt A+2c-3.$
$$\begin{aligned}h(n,A,B,c) &\ge\frac{(A+B-2)(n-2)(n-3)(n-4)}{(A+2c-3)(B+2n-2c-3)}\\ &\ge\frac{(n-3)(n-4)}{B+2n-2c-3}\frac{(n-2)(A+B-2)}{A+2c-3}\\ &\ge\frac{(n-3)(n-4)}{(n-c)(n-c-1)}\frac{(n-2)(A+B-2)}{A+2c-3}\\ &\gt1 \end{aligned}$$
So $S(n,m, a,b,c) \gt 3+2j.$
The proof is complete. By the way, the proof for $T(n,j)\le 3+2j$ shows that the minimum can always be obtained at $c=1,$ $a=1,$ $b=2(n-2)!.$
Exercise 1. Prove the formula for $T(2,m)$.
Exercise 2. (Observation 5) Prove the formula for $T(3,m)$, $T(4,m)$, and $T(5,m)$. Hint, the proof of observation 6 above might be helpful.
Exercise 3. Let $T_1$ be defined over $\Bbb N_{+}\times\Bbb N_{+}$. $$T_1(n,m)=\begin{cases} 1, & n=1\text{ or }m\leq (n-1)!\\ \min\limits_{a,b,c\geq 1,\ c\le n-1\\a\leq c!,\ b\leq(n-c)!}T_1(c,a)+T_1(n-c,b)+T_1(n,m-ab), & \text{else} \end{cases}$$ Show that $$T_1(n,m)=\begin{cases} 1 & n=1 \text{ or }m\le (n-1)!,\\ 3+2\Big\lfloor\dfrac {m-(n-1)!-1}{(n-2)!}\Big\rfloor & \text{otherwise}.\end{cases}$$
John L.John L.
$\begingroup$ "$T(n,j)\le 3+2j$" should have been "$T(n,m)\le 3+2j$" $\endgroup$ – John L. Jul 6 '19 at 10:17
Not the answer you're looking for? Browse other questions tagged recurrence-relation factorial or ask your own question.
Recurrence with Minimum
Solving the recurrence relation $T(n) = 2T(\lfloor n/2 \rfloor) + n$
Solving a complicated recurrence relation
Solving recurrence relation with two recursive calls
Trying to find a substitution to solve a recurrence
What exactly is going on in a proof by induction of a recurrence relation?
How do we guess the recurrence relation from the given equation | CommonCrawl |
\begin{document}
\title{Extensions of BV compactness criteria}
\author{Helge Kristian Jenssen} \address{ H.\ K.\ Jenssen, Department of Mathematics, Penn State University, University Park, State College, PA 16802, USA ({\tt [email protected]}).}
\thanks{This work was partially supported by the National Science Foundation [grant DMS-1813283].}
\date{\today} \begin{abstract}
Helly's selection theorem provides a criterion for compactness of sets of
single-variable functions with bounded pointwise variation. Fra{\v{n}}kov{\'a} has
given a proper extension of Helly's theorem to the setting of single-variable regulated
functions.
We show how a similar approach yields extensions of the standard compactness
criterion for multi-variable functions of bounded variation. \end{abstract}
\maketitle
Keywords: compactness; functions of bounded variation; regulated functions.
MSC2020: 26A45, 26B30, 26B99.
\tableofcontents
\section{Introduction}\label{intro}
Helly's selection theorem (Theorem \ref{helly} below) provides a compactness criterion for sequences of one-variable functions with uniformly bounded pointwise variation. In her work on regulated functions, i.e., one-variable functions admitting finite left and right limits at all points, Fra{\v{n}}kov{\'a} \cite{fr} provided an extension of Helly's theorem. Simple examples demonstrate that Fra{\v{n}}kov{\'a}'s theorem (Theorem \ref{fr_thm} below) provides a genuine extension. In particular, it guarantees pointwise everywhere convergence of a subsequence in some cases where the sequence is not bounded in variation.
The main objective of the present work is to provide a generalization of Fra{\v{n}}kov{\'a}'s theorem to the multi-variable setting. For a ``nice'' open set $\Omega\subset\mathbb{R}^N$, with $N\geq 1$, the space $BV(\Omega)$ of functions of bounded variation admits a compactness result \`a la Helly's (see Theorem \ref{multi_var_compact} below): any sequence which is bounded in $BV$-norm contains a subsequence converging in $L^1$-norm to a $BV$-function. We shall see how this compactness criterion can play the same role that Helly's theorem plays in Fra{\v{n}}kov{\'a}'s approach in \cite{fr}. As a consequence we obtain a compactness result (Theorem \ref{multi_var_frankova_any_p}) which guarantees $L^1$-convergence of a subsequence in certain cases where the original sequence is unbounded in $BV$.
A key ingredient in Fra{\v{n}}kov{\'a}'s work \cite{fr} is the notion of {\em $\varepsilon$-variation} of a bounded function $u$ of one variable: for $\varepsilon>0$, $\evar u$ is defined as the smallest amount of pointwise variation a function uniformly $\varepsilon$-close to $u$ can have. We shall introduce generalizations of this notion to functions of any number of variables (including 1-variable functions). For the purpose of extending the multi-d criterion for $BV$-compactness, it is natural to define $\varepsilon$-variation for general $L^1$-functions. In addition, we want to consider more general norms than the uniform norm in measuring distance between a function and its $BV$-approximants. The standard $L^p$-norms, with $1\leq p\leq\infty$, provide a natural choice, and we therefore introduce the notion of {\em $(\varepsilon,p)$-variation} of general $L^1$-functions (Definition \ref{eps_p_varn_defn}). Our main result is that the $BV$-compactness criterion admits an extension to this setting (Theorem \ref{multi_var_frankova_any_p}). Just as for Fra{\v{n}}kov{\'a}'s extension of Helly's theorem, simple examples demonstrate that this is a proper extension.
We note that even in the case of one-variable functions and with $p=\infty$, which is the setting closer to that of Fra{\v{n}}kov{\'a}'s, our result is not identical to hers. The latter concerns pointwise variation and everywhere pointwise convergence, i.e., the setting of Helly's theorem. In contrast, our Theorem \ref{multi_var_frankova_any_p} is formulated in terms of variation and $L^p$-convergence of (strictly speaking) equivalence classes of functions agreeing up to null-sets. (For the relation between pointwise variation and variation of a one-variable function, see Remark \ref{notns_of_varn}.) In particular, for $p=\infty$, we employ $L^\infty$-norm rather than uniform norm. This distinction is of relevance when we seek to generalize the notion of regulated function to the multi-variable case (see below).
\begin{remark} While our main result applies with any choice of $L^p$-norm, the values $p=1$ and $p=\infty$ are the more relevant ones. The case $p=1$ is natural since the original criterion for $BV$-compactness in several dimensions guarantees $L^1$-convergence of a subsequence. On the other hand, $p=\infty$ provides a setting closer to that of Fra{\v{n}}kov{\'a} \cite{fr}. Also, we have been able to establish certain useful properties only when $p=1$ or $p=\infty$. These concern attainment of $(\varepsilon,p)$-variation, continuity with respect to $\varepsilon$, and lower semi-continuity with respect to $L^1$-convergence; see Sections \ref{multi_var_case_p=1}-\ref{multi_var_case_p=infty}. \end{remark}
It turns out that, besides providing a means for extending Helly's theorem, Fra{\v{n}}kov{\'a}'s notion of $\varepsilon$-variation also yields a characterization of regulated functions. Indeed, a function is regulated if and only if its $\varepsilon$-variation is finite for each $\varepsilon>0$, \cite{fr}. Neither the definition of regulated functions (Definition \ref{reg}), nor various characterizations (see Theorem 2.1, pp.\ 213-214 in \cite{dn}), admit a straightforward generalization to higher dimensions (see also \cite{da}). In contrast, Fra{\v{n}}kov{\'a}'s characterization may appear to offer an obvious way of defining regulated functions of several variables. However, there is a catch: $\varepsilon$-variation for a one-variable function is defined in terms of pointwise variation, a notion lacking in higher dimensions. For functions of several variables one could of course replace it by variation; however, the resulting definition, when restricted to the one-variable case, will not reproduce the class of regulated functions of one variable.
Instead, having introduced the notion of $(\varepsilon,p)$-variation, we use it to define {\em $p$-regulated} functions as those functions whose $(\varepsilon,p)$-variation is finite for all $\varepsilon>0$. The resulting function space is introduced in Section \ref{eps_p_varns_p_reg_fncs}, and this provides the setting for our main result on extensions of the standard $BV$ compactness criterion for multi-variable functions. For the particular case of $\infty$-regulated functions of one variable, we show that these are precisely the functions that are ``essentially regulated,'' i.e., possessing a regulated version (see Proposition \ref{1_d_reg_vs_infty_reg}).
Finally, we comment on the possibility of applying Fra{\v{n}}kov{\'a}'s strategy to other compactness criteria in function spaces. Such criteria invariably involve a requirement that the given sequence (or set) of functions satisfy some uniform requirement. In essence, Fra{\v{n}}kov{\'a}'s strategy amounts to replacing a uniform requirement on the given functions by a (weaker) uniform requirement on nearby functions. Specifically, in Fra{\v{n}}kov{\'a}'s extension of Helly's theorem, this is done by replacing the requirement of a uniform variation bound by the weaker requirement of uniformly bounded $\varepsilon$-variations (see Definitions \ref{unif_eps_varns} and \ref{multi_var_unif_eps_varns}).
It is natural to ask if a similar approach can be applied to extend other compactness theorems for function spaces, including Fra{\v{n}}kov{\'a}'s own theorem. This is a somewhat open-ended question as there is freedom in how to set up an extension. However, we have not been able to extend Fra{\v{n}}kov{\'a}'s theorem, the Ascoli-Arzel\`a theorem, or the Kolmogorov-Riesz theorem. In each case we find that a natural implementation of Fra{\v{n}}kov{\'a}'s strategy fails: the assumptions on the given function sequence are so strong that the original compactness criterion directly applies to it. One might say that these compactness results are saturated with respect to Fra{\v{n}}kov{\'a}'s strategy.
The rest of the article is organized as follows. Notation and conventions are recorded below. Section \ref{frankova_helly_extn} recalls Helly's theorem for sequences of one-variable functions with bounded (pointwise) variation, together with Fra{\v{n}}kov{\'a}'s extension to the space of regulated functions. We include the key definitions introduced in \cite{fr} and recall without proof some of the results from \cite{fr}, including the characterization of regulated functions in terms of $\varepsilon$-variation. For completeness we include a proof of Fra{\v{n}}kov{\'a}'s theorem. Section \ref{multi_var_case} concerns the generalization of Fra{\v{n}}kov{\'a}'s strategy to the case of functions of several variables. We first recall the standard compactness result for bounded sequences in $BV(\Omega)$, and then define $(\varepsilon,p)$-variation of $L^1(\Omega)$-functions. The space $\mathcal R_p(\Omega)$ of $p$-regulated functions is introduced in Definition \ref{p_reg}. The main result, Theorem \ref{multi_var_frankova_any_p}, is then formulated and proved by mimicking Fra{\v{n}}kov{\'a}'s proof. Further properties of $(\varepsilon,p)$-variation for $p=1$ and $p=\infty$ are established in Sections \ref{multi_var_case_p=1}-\ref{multi_var_case_p=infty}. Section \ref{reg_vs_infty_reg_1_d} provides the relationship between $\infty$-regulated and standard regulated functions of one variable. Finally, in Section \ref{aa} we describe our negative findings about the possibility of applying Fra{\v{n}}kov{\'a}'s strategy to other compactness criteria.
\noindent{\bf Notations and conventions:} For sequences of functions we write $(u_n)$ or $(u_n)_n$ for $(u_n)_{n=1}^\infty$. Given a set of functions $X$, we write $(u_n)\subset X$ to mean that $u_n\in X$ for all $n\geq 1$. We write $(u_{n(k)})\subset (u_n)$ to mean that $(u_{n(k)})_k$ is a subsequence of $(u_n)_n$. For $m\in\mathbb{N}$ we fix a norm $|\cdot|$ on $\mathbb{R}^m$. For any set $U$ and any function $u:U\to\mathbb{R}^m$ we define its uniform norm by
\[\|u\|:=\sup_{x\in U} |u(x)|.\] The set of bounded functions on $U$ is denoted
\[\mathcal B(U):=\{u:U\to\mathbb{R}^m\,|\, \|u\|<\infty\};\]
it is a standard result that $(\mathcal B(U),\|\cdot\|)$ is a Banach space.
For a normed space $(\mathcal X,\vertiii{\cdot})$, a sequence $(u_n)\subset \mathcal X$ is {\em bounded} provided $\sup_n\vertiii{u_n}<\infty$. For a Lebesgue measurable set
$\Omega\subset\mathbb{R}^N$ ($N\geq 1$), $|\Omega|$ denotes its Lebesgue measure. The open ball of radius $r$ about $x\in\mathbb{R}^N$ is denoted $B_r(x)$. For an open set $\Omega\subset\mathbb{R}^N$, $L^p(\Omega)$, $1\leq p\leq \infty$, denotes the set of Lebesgue measurable functions with finite $L^p(\Omega)$-norm
\[\|u\|_{L^p(\Omega)}\equiv\|u\|_{p}=\Big(\int_\Omega |u(x)|^p\, dx\Big)^\frac{1}{p}\qquad \text{for $1\leq p< \infty$,}\] and
\[\|u\|_{L^\infty(\Omega)}\equiv\|u\|_\infty=\ess_{x\in\Omega} |u(x)|. \] Throughout, the terms ``measurable'', ``almost everywhere'' and ``for almost all'' are understood with respect to Lebesgue measure. A {\em version} of a measurable function $u:\Omega\to\mathbb{R}$ refers to any function $\bar u:\Omega\to\mathbb{R}$ agreeing almost everywhere with $u$.
We use the notation ``$\var u$'' for the {\em pointwise variation} of a one-variable function $u$, and the notation ``$\V u$'' for the {\em variation} of a (one- or multi-variable) function $u$. These are defined in \eq{var} and \eq{Var}, respectively. Our notation differs from that of \cite{afp} and \cite{le} which use pV$(u,\Omega)$ and $\V u$, respectively, for
the pointwise variation of a one-variable function, and $V(u,\Omega)$ and $|Du|(\Omega)$, respectively, for the variation. (We avoid the notation pV$(u,\Omega)$ since we shall later define the notions of $(\varepsilon,p)$-variation, where $p$ denotes the exponent of an $L^p$-space.) Remark \ref{notns_of_varn} recalls the relationship between pointwise variation and variation of one-variable functions.
Finally, for two sequences of real numbers $(A_n)$ and $(B_n)$, we write $A_n\lesssim B_n$ to mean that there is a finite number $C$, independent of $n$, such that $A_n\leq CB_n$ for all $n\geq 1$. $A_n\sim B_n$ means that both $A_n\lesssim B_n$ and $B_n\lesssim A_n$ hold.
\section{Fra{\v{n}}kov{\'a}'s extension of Helly's theorem}\label{frankova_helly_extn}
In this section we fix an open and bounded interval $I=(a,b)\subset\mathbb{R}$. A function $u:I\to\mathbb{R}^m$ is of {\em bounded pointwise variation} provided \begin{equation}\label{var}
\var u:=\sup\, \sum_{i=1}^k |u(x_i)-u(x_{i-1})|<\infty, \end{equation} where the supremum is over all $k\in\mathbb{N}$ and all finite selections of points $x_0<x_1<\cdots<x_k$ in $I$. We follow \cite{le} and denote the set of such functions by
$BPV(I)$. We recall Helly's selection theorem (see e.g.\ \cite{le}; recall that $\|\cdot\|$ denotes uniform norm):
\begin{theorem}[Helly]\label{helly}
Assume $(u_n)\subset \mathcal B(I)$ satsfies $\sup_n\|u_n\|<\infty$ and
$\sup_n\var u_n<\infty$. Then there is a subsequence $(u_{n(k)})\subset(u_n)$
and a function $u\in BPV(I)$ such that $u(x)=\lim_{k} u_{n(k)}(x)$ for every $x\in I$.
Furthermore, $\var u\leq\liminf_{k}\var u_{n(k)}$. \end{theorem}
Next we describe Fra{\v{n}}kov{\'a}'s extension of Helly's theorem. This requires a few definitions.
\begin{definition}\label{reg}
A function $u\in\mathcal B(I)$ is {\em regulated}
provided its right and left
limits exist (as finite numbers) at all points of $I$, it has a finite
right limit at the left endpoint, and a finite left limit at the right endpoint. The class of
regulated functions on $I$ is denoted $\mathcal R(I)$. \end{definition}
\begin{remark}
We have opted to work on a bounded and open interval $I$ so that the setting
is a special case of the multi-variable setting in Section \ref{multi_var_case}.
In contrast, Fra{\v{n}}kov{\'a} \cite{fr} considers regulated functions on closed
intervals $[a,b]$. However, since Definition \ref{reg} requires finite one-sided
limits at the endpoints, any regulated function in the sense above extends trivially
to a regulated function on $[a,b]$ in the sense of \cite{fr}. \end{remark}
It is immediate that $\mathcal R(I)$ is a proper subspace of $\mathcal B(I)$. Fra{\v{n}}kov{\'a} \cite{fr} introduced the following definitions.
\begin{definition}\label{epsilon_varn}
For $u\in \mathcal B(I)$ and $\varepsilon>0$, we define the {\em $\varepsilon$-variation}
of $u$ by
\begin{equation}\label{eps_varn}
\evar u:=\inf_{v\in \mathcal U(u;\varepsilon)} \var v,
\end{equation}
where
\begin{equation}\label{V}
\mathcal U(u;\varepsilon):=\{v\inBPV(I)\,|\, \|u-v\|\leq \varepsilon\},
\end{equation}
with the convention that infimum over the empty set is $\infty$. \end{definition}
\begin{definition}\label{unif_eps_varns}
A set of functions $\mathcal F\subset \mathcal R(I)$ has {\em uniformly
bounded $\varepsilon$-variations} provided
\[\sup_{u\in\mathcal F}\,\,\evar u<\infty\qquad\text{for every $\varepsilon>0$}.\] \end{definition}
For $u\in\mathcal B(I)$, let $J(u)$ denote the jump set of $u$, i.e.,
\[J(u):=\{x\in I\,|\, \text{at least one of $u(x+)$ or $u(x-)$ differs from $u(x)$}\}.\] (Here $u(x\pm)$ denotes $\lim_{y\to x\pm}u(y)$, respectively.) A function $u\in \mathcal B(I)$ is a {\em step function} provided there is a finite, increasing sequence of points $x_0=a<x_1<\dots<x_{m-1}<x_m=b$ such that $u$ is constant on each of the open intervals $(x_i,x_{i+1})$, $i=0,\dots,m$. The following results about regulated functions are known (for proofs see \cite{di,fr,dn}; (R3) and (R4) are Propositions 3.4 and 3.6 in \cite{fr}, respectively):
\begin{enumerate}
\item[(R1)] If $u\in\mathcal R(I)$, then $J(u)$ is countable.\\
\item[(R2)] For $u\in \mathcal B(I)$, $u\in\mathcal R(I)$ if and only if
it is the uniform limit of step functions on $I$. It follows that
$(\mathcal R(I),\|\cdot\|)$ is a proper, closed subspace of $(\mathcal B(I),\|\cdot\|)$.\\
\item[(R3)] For $u\in \mathcal B(I)$, $u\in\mathcal R(I)$ if and only if
$\evar u<\infty$ for every $\varepsilon>0$.\\
\item[(R4)] Assume $(u_n)\subset \mathcal R(I)$ and $u_n(x)\to u(x)$ for every $x\in I$.
Then
\[\evar u\leq\liminf_n\, \evar u_n\qquad\text{for every $\varepsilon>0$.}\]
If, in addition, $(u_n)$ has uniformly bounded $\varepsilon$-variations, then
$u\in\mathcal R(I)$. \end{enumerate}
\noindent Note that (R3) provides a characterization of regulated functions. In Section \ref{multi_var_case} we will use this to motivate the notion of $p$-regulated functions of any number of variables.
Before stating and proving Fra{\v{n}}kov{\'a}'s extension of Helly's theorem, we make the following observation.
\begin{observation}\label{obs1}
Assume $(z_n)\subset\mathcal R(I)$ is bounded and has uniformly bounded $\varepsilon$-variations.
Then, for each $\varepsilon>0$ there is a finite number $K_\varepsilon$ and a sequence $(z_n^\varepsilon)\subsetBPV(I)$
satisfying
\[\var z_n^\varepsilon\leq K_\varepsilon,\qquad \|z_n-z_n^\varepsilon\|\leq\varepsilon \qquad\text{for all $n\geq1$}.\]
It follows that $(z_n^\varepsilon)$ satisfies the assumptions in Helly's theorem; there is
therefore a subsequence $(z_{n(k)}^\varepsilon)\subset(z_n^\varepsilon)$ and a $z^\varepsilon\inBPV(I)$ so that
$z^\varepsilon(x)=\lim_k z_{n(k)}^\varepsilon(x)$ for every $x\in I$. \end{observation}
Fra{\v{n}}kov{\'a}'s extension of Helly's theorem is the following result:
\begin{theorem}[Fra{\v{n}}kov{\'a}]\label{fr_thm}
Assume $(u_n)\subset\mathcal R(I)$ is bounded and has uniformly bounded
$\varepsilon$-variations. Then there is a subsequence $(u_{n(k)})\subset(u_n)$
and a function $u\in\mathcal R(I)$ such that
$u(x)=\lim_k u_{n(k)}(x)$ for every $x\in I$. \end{theorem}
\begin{remark}\label{f_thm_rmk}
This is Theorem 3.8 in \cite{fr}, for which Fra{\v{n}}kov{\'a} provided two different proofs.
The first of these (outlined on pp.\ 48-49 in \cite{fr}) is relevant to us, and we
therefore provide the details of the argument. We also note that Fra{\v{n}}kov{\'a}'s
theorem provides a genuine extension of Helly's theorem. I.e., it specializes to
Helly's theorem when the conditions in Theorem \ref{helly} are met, and
it also provides convergence of a subsequence in cases where the original sequence $(u_n)$ is
unbounded in $BPV(I)$; see Example \ref{ex} and Example \ref{2nd_ex}. \end{remark}
\noindent {\it Proof of Theorem \ref{fr_thm}.} Fix a strictly decreasing sequence $(\varepsilon_l)$ converging to zero. We shall use Observation \ref{obs1} repeatedly, and then employ a diagonal argument.
For $l=1$ apply Observation \ref{obs1} to the original sequence $(u_n)$ with $\varepsilon=\varepsilon_1$ to get a sequence-subsequence pair $(v_n^{\varepsilon_1})\supset(v_{n_1(k)}^{\varepsilon_1})$ in $BPV(I)$ and a $v^1\inBPV(I)$ satisfying
\[\|u_{n_1(k)}-v_{n_1(k)}^{\varepsilon_1}\|\leq \varepsilon_1\quad \text{for all $k\geq 1$, and}\qquad v^1(x)=\lim_k v_{n_1(k)}^{\varepsilon_1}(x)\quad \text{for every $x\in I$.}\] For $l=2$ apply Observation \ref{obs1} to the sequence $(u_{n_1(k)})$ with $\varepsilon=\varepsilon_2$ to get a sequence-subsequence pair $(v_{n_1(k)}^{\varepsilon_2})\supset(v_{n_2(k)}^{\varepsilon_2})$ in $BPV(I)$ and a $v^2\inBPV(I)$ satisfying
\[\|u_{n_2(k)}-v_{n_2(k)}^{\varepsilon_2}\|\leq \varepsilon_2\quad \text{for all $k\geq 1$, and}\qquad v^2(x)=\lim_k v_{n_2(k)}^{\varepsilon_2}(x)\quad \text{for every $x\in I$.}\] Continuing in this manner we obtain for each index $l$ a sequence-subsequence pair $(v_{n_{l-1}(k)}^{\varepsilon_l})\supset(v_{n_l(k)}^{\varepsilon_l})$ in $BPV(I)$ and a $v^l\inBPV(I)$ satisfying
\[\|u_{n_l(k)}-v_{n_l(k)}^{\varepsilon_l}\|\leq\varepsilon_l\quad \text{for all $k\geq 1$, and}\qquad v^l(x)=\lim_k v_{n_l(k)}^{\varepsilon_l}(x)\quad \text{for every $x\in I$.}\] Next, for $l\geq 1$ fixed, consider the diagonal index sequence $(n_k(k))_{k\geq l}$, which is a subsequence of $(n_l(j))_{j\geq 1}$. With $ n(k):=n_k(k)$ we therefore get that: for each $l\geq 1$, there holds \begin{equation}\label{key1}
\|u_{ n(k)}-v_{ n(k)}^{\varepsilon_l}\|\leq\varepsilon_l\quad \text{for all $k\geq l$, and}
\quad \lim_k v_{ n(k)}^{\varepsilon_l}(x)= v^l(x)\quad \text{for every $x\in I$.}
\end{equation} We claim that $(v^l)$ is a Cauchy sequence in $(\mathcal B(I),\|\cdot\|)$. Indeed, given $\delta>0$ we first choose an index $l(\delta)$ so that $\varepsilon_l\leq\frac{\delta}{2}$ for $l\geq l(\delta)$. Then, for any $x\in I$, if $l,q\geq l(\delta)$ and $k\geq \max(l,q)$ we get from \eq{key1}${}_1$ that \begin{align*}
|v^l(x)-v^q(x)|
&\leq |v^l(x)-v_{n(k)}^{\varepsilon_l}(x)|+|v_{n(k)}^{\varepsilon_l}(x)-u_{n(k)}(x)|\\
&\quad+|u_{n(k)}(x)-v_{n(k)}^{\varepsilon_q}(x)|+|v_{n(k)}^{\varepsilon_q}(x)-v^q(x)|\\
&\leq |v^l(x)-v_{ n(k)}^{\varepsilon_l}(x)|+\|v_{n(k)}^{\varepsilon_l}-u_{n(k)}\|\\
&\quad+\|u_{n(k)}-v_{n(k)}^{\varepsilon_q}\|+|v_{n(k)}^{\varepsilon_q}(x)-v^q(x)|\\
&\leq |v^l(x)-v_{n(k)}^{\varepsilon_l}(x)|+\varepsilon_l+\varepsilon_q+|v_{n(k)}^{\varepsilon_q}(x)-v^q(x)|\\
&\leq |v^l(x)-v_{n(k)}^{\varepsilon_l}(x)|+\delta +|v_{n(k)}^{\varepsilon_q}(x)-v^q(x)|. \end{align*}
Sending $k\to\infty$ we get from \eq{key1}${}_2$ that $|v^l(x)-v^q(x)|\leq \delta$. As $x\in I$ is arbitrary, we conclude that $\|v^l-v^q\|\leq \delta$ whenever $l,q\geq l(\delta)$, establishing the claim.
By completeness of $(\mathcal B(I),\|\cdot\|)$ we thus obtain the existence of a function $u\in\mathcal B(I)$ such that $v^l\to u$ uniformly on $I$.
We claim that $u_{ n(k)}(x)\to u(x)$ for each $x\in I$. To verify this, fix any $x\in I$ and any $\delta>0$. Then choose $l$ so large that $\varepsilon_l\leq\frac{\delta}{3}$ and
$\|u-v^l\|\leq\frac{\delta}{3}$. Finally choose $k\geq l$ so large that $|v^l(x)-v_{ n(k)}^{\varepsilon_l}(x)|<\frac{\delta}{3}$ (which is possible according to \eq{key1}${}_2$). Using this together with \eq{key1}${}_1$ we obtain \begin{align*}
|u(x)-u_{n(k)}(x)|
&\leq |u(x)-v^l(x)|+|v^l(x)-v_{n(k)}^{\varepsilon_l}(x)|+|v_{n(k)}^{\varepsilon_l}(x)-u_{n(k)}(x)|\\
&\leq \|u-v^l\|+|v^l(x)-v_{n(k)}^{\varepsilon_l}(x)|+\varepsilon_l<\delta, \end{align*} establishing the claim. Finally, since $(u_{ n(k)})_k$ has uniformly bounded $\varepsilon$-variations, we get from the property (R4) above that $u\in\mathcal R(I)$. \qed
\section{Fra{\v{n}}kov{\'a}'s strategy applied in the multi-variable case} \label{multi_var_case}
\subsection{Preliminaries}\label{prelims}
For the following background material we refer to \cite{afp,le}. We fix an open and bounded subset $\Omega\subset\mathbb{R}^N$, $N\geq 1$. For convenience, when $N=1$, we assume $\Omega$ is an interval. In stating compactness results it will be further assumed that $\Omega$ is a bounded $BV$ extension domain (cf.\ Definition 3.20 in \cite{afp}). To simplify the notation we restrict attention to scalar-valued functions $u:\Omega\to \mathbb{R}$; all results generalize routinely to the vector valued case $u:\Omega\to \mathbb{R}^m$.
A function $u\in L^1(\Omega)$ is of {\em bounded variation} provided \begin{equation}\label{Var}
\V u:=\sup\Big\{\int_\Omega u\dv \varphi\, dx\,|\,
\varphi\in[C_c^1(\Omega)]^{N}, \|\varphi\|_\infty\leq 1 \Big\}<\infty. \end{equation} We shall make repeated use of the fact that $\V$ is lower semi-continuous with respect to $L^1$-convergence: if $u_n\to u$ in $L^1(\Omega)$, then $\V u\leq\liminf_n \V u_n$; cf.\ Remark 3.5 in \cite{afp}.
\begin{remark}\label{notns_of_varn}
For the one-variable case $N=1$ with $\Omega=(a,b)$, the variation $\V$ relates to the pointwise variation
$\var$ in \eq{var} as follows. Defining the {\em essential variation} of $u\in L^1(a,b)$ by
\[\essen\var u:=\inf\{\var w\,|\, \text{$w$ is a version of $u$ on $(a,b)$}\},\]
we have (see Section 3.2 in \cite{afp}): every $u\in BV(a,b)$ has
a version $\bar u\inBPV(a,b)$ satisfying
\begin{equation}\label{essvar}
\var \bar u=\essen\var u=\V u.
\end{equation}
\end{remark}
The set $BV(\Omega)$ of functions of bounded variation is a Banach space when equipped with the norm
\[\|u\|_{BV}:=\|u\|_1+\V u.\] Furthermore, $BV(\Omega)$ enjoys the following compactness criterion (cf.\ Theorem 3.23 in \cite{afp}):
\begin{theorem}\label{multi_var_compact}
Let $\Omega$ be an open and bounded $BV$ extension domain in $\mathbb{R}^N$ ($N\geq1$),
and assume $(u_n)$ is bounded in $(BV(\Omega),\|\cdot\|_{BV})$. Then there is a subsequence
$(u_{n(k)})\subset(u_n)$ and a function $u\inBV(\Omega)$ such that $u_{n(k)}\to u$ in $L^1(\Omega)$. \end{theorem}
We shall provide extensions of this criterion by applying Fra{\v{n}}kov{\'a}'s strategy for the proof of Theorem \ref{fr_thm}. The first step is to generalize the notion of $\varepsilon$-variation to functions of several variables.
\subsection{$(\varepsilon,p)$-variation and $p$-regulated functions}\label{eps_p_varns_p_reg_fncs}
Our main objective is to provide extensions of Theorem \ref{multi_var_compact} which guarantees $L^1$-convergence of a subsequence. It is therefore natural to seek a notion of $\varepsilon$-variation for general $L^1$-function. At the same time we want to add flexibility in how the distance between a given function and its $BV$-approximants is measured. In extending Fra{\v{n}}kov{\'a}'s setup to multi-variable functions we therefore replace the uniform norm by any $L^p$-norm, and generalize Definition \ref{epsilon_varn} as follows:
\begin{definition}\label{eps_p_varn_defn}
Let $u\in L^1(\Omega)$. For $p\in[1,\infty]$ and $\varepsilon>0$ we define
the {\em $(\varepsilon,p)$-variation} of $u$ by
\begin{equation}\label{eps_p_varn}
(\varepsilon,p)\text{-}\mathrm{Var\,} u:=\inf_{v\in \mathcal V_p(u;\varepsilon)} \V v,
\end{equation}
where
\begin{equation}\label{multi_var_V}
\mathcal V_p(u;\varepsilon):=\{v\inBV(\Omega)\,|\, \|u-v\|_p\leq \varepsilon\},
\end{equation}
with the convention that infimum over the empty set is $\infty$. \end{definition}
\begin{remark}
It might seem more natural to restrict the definition of $(\varepsilon,p)\text{-}\mathrm{Var\,} u$ to functions $u\in L^p(\Omega)$.
However, our primary goal is to provide extensions of Theorem \ref{multi_var_compact},
and this works out with Definition \ref{eps_p_varn_defn} as stated.
We note that already in the case $N=1$, $\Omega=(a,b)$, and $p=\infty$, i.e.,
the setting closer to that of Fra{\v{n}}kov{\'a} \cite{fr}, our setup is different
from hers. Specifically, we consider $L^1$-functions which are
really equivalence classes of functions agreeing almost everywhere, the distance
between a function and its $BV$-approximants is measured in $L^\infty$, and
everywhere pointwise convergence plays no role in the present analysis.
\end{remark}
We record some immediate consequences of Definition \ref{eps_p_varn_defn}.
\begin{lemma}\label{monotonicity}
Let $u\in L^1(\Omega)$. Then, for all $p\in[1,\infty]$ and $\varepsilon>0$ we have:
\begin{enumerate}
\item If $\bar u$ is a version of $u$, then
\[(\varepsilon,p)\text{-}\mathrm{Var\,} u=(\varepsilon,p)\text{-}\mathrm{Var\,} \bar u.\]
\item If $u\inBV(\Omega)$, then
\[(\varepsilon,p)\text{-}\mathrm{Var\,} u\leq \V u<\infty.\]
\item If $0<\varepsilon_0<\varepsilon$, then
\[(\varepsilon_0,p)\text{-}\mathrm{Var\,} u\geq (\varepsilon,p)\text{-}\mathrm{Var\,} u.\] \end{enumerate}
\end{lemma}
Next, motivated by the characterization (R3) of regulated functions of one variable, we introduce the class of {\em $p$-regulated} functions:
\begin{definition}\label{p_reg}
For $u\in L^1(\Omega)$ and $p\in[1,\infty]$, we say that $u$ is {\em $p$-regulated}
provided $(\varepsilon,p)\text{-}\mathrm{Var\,} u<\infty$ for every $\varepsilon>0$. We set
\[\mathcal R_p(\Omega):=\{u\in L^1(\Omega)\,|\, \text{$u$ is $p$-regulated}\,\}.\] \end{definition}
Since $\Omega$ is assumed bounded, we have $\mathcal R_p(\Omega)\subset\mathcal R_q(\Omega)$ whenever
$1\leq q< p\leq \infty$. (This follows since $\|f\|_q\leq C\|f\|_p$, where $C=C(p,q,\Omega)$.) Furthermore, we have:
\begin{lemma}\label{basic_props}
For any $p\in[1,\infty]$, $\mathcal R_p(\Omega)$ is a subspace of $L^1(\Omega)$
which is closed under $L^p$-convergence. For $p\in[1,\infty)$ we have
$L^p(\Omega)\subset\mathcal R_p(\Omega)$; in particular,
$\mathcal R_1(\Omega)\equiv L^1(\Omega)$. \end{lemma}
\begin{proof}
Using Definition \ref{p_reg} it is routine to verify that $\mathcal R_p(\Omega)$ is closed
under addition and scalar multiplication. Next, for $p\in[1,\infty]$,
assume $(u_n)\subset\mathcal R_p(\Omega)$
and $\|u_n- u\|_p\to0$. Since $\Omega$ is bounded we then have $\|u_n-u\|_1\to0$; and
since $(u_n)\subset L^1(\Omega)$ by assumption, we get that $u\in L^1(\Omega)$.
Fix any $\varepsilon>0$. Choose $n$ so that $\|u-u_n\|_p\leq\frac{\varepsilon}{2}$.
Since $u_n\in\mathcal R_p(\Omega)$ there is a $v\inBV(\Omega)$ with
$\|u_n-v\|_p\leq\frac{\varepsilon}{2}$, so that $\|u-v\|_p\leq\varepsilon$.
As $\varepsilon>0$ is arbitrary, this shows that $u\in\mathcal R_p(\Omega)$, establishing
closure of $\mathcal R_p(\Omega)$ under $L^p$-convergence.
Next, if $p\in[1,\infty)$, then $C_c^\infty(\Omega)$ is dense in $L^p(\Omega)$.
Therefore, given $u\in L^p(\Omega)$ and $\varepsilon>0$ there is a
$v\in C_c^\infty(\Omega)\subset BV(\Omega)$ with $\|u-v\|_p\leq\varepsilon$.
Thus, $u\in L^1(\Omega)$ and $(\varepsilon,p)\text{-}\mathrm{Var\,} u<\infty$ for each $\varepsilon>0$, so that
$u\in\mathcal R_p(\Omega)$. This establishes $L^p(\Omega)\subset \mathcal R_p(\Omega)$
for $p\in[1,\infty)$.
Finally, since $\mathcal R_1(\Omega)\subset L^1(\Omega)$ by definition, this gives
$\mathcal R_1(\Omega)\equiv L^1(\Omega)$. \end{proof}
It follows from part (2) of Lemma \ref{monotonicity} that $BV(\Omega)\subset\mathcal R_p(\Omega)$ for all $p\in[1,\infty]$. The following example shows that this inclusion is strict in all cases.
\begin{example}\label{ex}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be open and bounded. By translation it is not
restrictive to assume that $\Omega$ contains the origin. Fix $r_0>0$ such that
$B_{r_0}(0)\subset\subset \Omega$, and a strictly decreasing sequence of radii
$(r_k)_{k\geq 1}$ with $r_1<r_0$ and $r_k\downarrow 0$. Also, fix a strictly
decreasing sequence $(\beta)_{k\geq0}$ of positive numbers with $\beta_k\downarrow 0$,
and define the radial function $u:\Omega\to\mathbb{R}$ by
\begin{equation}\label{u}
u(x):=\sum_{k=0}^\infty \beta_k \chi_{I_k}(|x|),
\end{equation}
where $I_k:=[r_{2k+1},r_{2k}]$ and $\chi_A$ denotes the indicator function of a set $A$.
Then $u\in L^\infty(\Omega)\subset L^1(\Omega)$ and we have
(with $\omega_N=$area of unit ball in $\mathbb{R}^N$ for $N\geq2$, and $\omega_1=1$)
\[\V u=\omega_N\sum_{k=0}^\infty\beta_k(r_{2k}^{N-1}+r_{2k+1}^{N-1}).\]
Choosing, for $k\geq 0$,
\begin{equation}\label{r_beta_choices}
r_k=(\textstyle\frac{1}{k+1})^\frac{1}{2N}r_0 \qquad\text{and}
\qquad \beta_k=(\textstyle\frac{1}{k+1})^\frac{1}{2},
\end{equation}
gives
\[\V u\gtrsim\sum_{k=0}^\infty\beta_k r_{2k+1}^{N-1}
\gtrsim\sum_{k=0}^\infty\textstyle\frac{1}{k+1}=\infty,\]
so that $u\notin BV(\Omega)$. To show that $u\in\mathcal R_p(\Omega)$
for all $p\in[1,\infty]$, it suffices to verify that $u\in\mathcal R_\infty(\Omega)$
(see comment before Lemma \ref{basic_props}). For this define
\begin{equation}\label{u_n}
u_n(x):=\sum_{k=0}^n \beta_k \chi_{I_k}(r),
\end{equation}
and let, for $\varepsilon>0$, $n_\varepsilon$ be the smallest integer such that $\beta_{n_\varepsilon}\leq\varepsilon$.
Then
\[u_{n_\varepsilon}\inBV(\Omega)\qquad\text{and}\qquad \|u-u_{n_\varepsilon}\|_\infty\leq\varepsilon,\]
showing that $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u<\infty$. As $\varepsilon>0$ is arbitrary, we have $u\in\mathcal R_\infty(\Omega)$. \end{example}
Next we generalize Definition \ref{unif_eps_varns} to the setting of $(\varepsilon,p)$-variation.
\begin{definition}\label{multi_var_unif_eps_varns}
Let $p\in[1,\infty]$. A set of functions $\mathcal F\subset \mathcal R_p(\Omega)$ has {\em uniformly
bounded $(\varepsilon,p)$-variations} provided
\[\sup_{u\in\mathcal F}\,\,(\varepsilon,p)\text{-}\mathrm{Var\,} u<\infty\qquad\text{for every $\varepsilon>0$}.\] \end{definition}
The following example builds on Example \ref{ex} and shows that for any $p\in[1,\infty]$ there are sequences with uniformly bounded $(\varepsilon,p)$-variations that are unbounded in $BV(\Omega)$. \begin{example}\label{2nd_ex}
Fix $p\in[1,\infty]$ and consider the sequence $(u_n)$ defined in \eq{u_n}.
Note that $u_n\inBV(\Omega)$ for all $n$ and that $\V u_n$ increases monotonically
without bound as $n\uparrow\infty$ (cf.\ Example \ref{ex}). We have
\[\|u-u_n\|_p\lesssim\|u-u_n\|_\infty\to 0,\]
and $\|u-u_n\|_p$ decreases monotonically to $0$ as $n\uparrow\infty$.
For given $\varepsilon>0$ let $m_\varepsilon$ be the smallest integer $m$ such that
$\|u-u_m\|_p\leq\varepsilon$. For $n\leq m_\varepsilon$ we have $\V u_n\leq\V u_{m_\varepsilon}<\infty$,
so that
\[(\varepsilon,p)\text{-}\mathrm{Var\,} u_n\leq \V u_n\leq\V u_{m_\varepsilon},\qquad n\leq m_\varepsilon. \]
On the other hand, for $n> m_\varepsilon$ we have
\[\|u_n-u_{m_\varepsilon}\|_p<\|u-u_{m_\varepsilon}\|_p\leq\varepsilon,\]
showing that $(\varepsilon,p)\text{-}\mathrm{Var\,} u_n\leq\V u_{m_\varepsilon}$ holds also in this case.
It follows that
\[\sup_n\,\,(\varepsilon,p)\text{-}\mathrm{Var\,} u_n\leq \V u_{m_\varepsilon}<\infty. \]
As $\varepsilon>0$ is arbitrary, this shows that $(u_n)$ has uniformly bounded $(\varepsilon,p)$-variations
for any $p\in[1,\infty]$. \end{example}
\subsection{Multi-variable Fra{\v{n}}kov{\'a} theorem}\label{gen'zd_Frankova}
In this subsection we formulate and prove our main result on extensions of the $BV$-compactness criterion in Theorem \ref{multi_var_compact}. We first record the following observation, cf.\ Observation \ref{obs1} in Section \ref{frankova_helly_extn}.
\begin{observation}\label{obs2}
Let $\Omega$ be an open and bounded $BV$ extension domain in $\mathbb{R}^N$ ($N\geq1$)
and $p\in[1,\infty]$. Assume $(z_n)\subset\mathcal R_p(\Omega)$ is bounded in $L^1(\Omega)$
and has uniformly bounded $(\varepsilon,p)$-variations.
Then, for each $\varepsilon>0$ there is a finite number $K_\varepsilon$ and a
sequence $(z_n^\varepsilon)\subsetBV(\Omega)$ satisfying
\[\V z_n^\varepsilon\leq K_\varepsilon,\qquad \|z_n-z_n^\varepsilon\|_p\leq\varepsilon \qquad\text{for all $n\geq1$}.\]
Since $\Omega$ is assumed bounded, there is a constant $C=C(\Omega,p)$ so that
\[\|z_n^\varepsilon\|_1\leq \|z_n^\varepsilon-z_n\|_1+\|z_n\|_1\leq C\|z_n^\varepsilon-z_n\|_p+\|z_n\|_1\leq C\varepsilon+\|z_n\|_1.\]
It follows that, for each fixed $\varepsilon$, the sequence $(z_n^\varepsilon)$
meets the assumptions in Theorem \ref{multi_var_compact}. Thus, for each $\varepsilon>0$, there is
a subsequence $(z_{n(k)}^\varepsilon)\subset(z_n^\varepsilon)$ and a $z^\varepsilon\inBV(\Omega)$ with
$z_{n(k)}^\varepsilon\to z^\varepsilon$ in $L^1(\Omega)$. \end{observation}
We now have:
\begin{theorem}\label{multi_var_frankova_any_p}
Let $\Omega$ be an open and bounded $BV$ extension domain in $\mathbb{R}^N$ ($N\geq1$), and let
$p\in[1,\infty]$. Assume the sequence $(u_n)$
is bounded in $L^1(\Omega)$ and has uniformly bounded $(\varepsilon,p)$-variations.
Then there is a subsequence $(u_{n(k)})\subset (u_n)$ and a function $u\in L^1(\Omega)$
such that $u_{n(k)}\to u$ in $L^1(\Omega)$. \end{theorem}
\begin{remark}
Note that it is not claimed that the limit function $u$ belongs to
$\mathcal R_p(\Omega)$. However, this does hold when $p=1$ or $p=\infty$.
For $p=1$ this is immediate since $u\in L^1(\Omega)\equiv \mathcal R_1(\Omega)$
(Lemma \ref{basic_props}). Furthermore, lower semi-continuity of $(\varepsilon,1)\text{-}\mathrm{Var\,}\!$ with respect to
$L^1$-convergence (Proposition \ref{lsc_p=1} below) yields
\[(\varepsilon,1)\text{-}\mathrm{Var\,} u\leq\liminf_k \,\,(\varepsilon,1)\text{-}\mathrm{Var\,} u_{n(k)}\qquad\text{for each $\varepsilon>0$.}\]
When $p=\infty$ we again have lower semicontinuity of $(\varepsilon,\infty)\text{-}\mathrm{Var\,}\!$ with respect to
$L^1$-convergence (Proposition \ref{lsc_p=infty} below). Thus, also for the case $p=\infty$ we have
\[(\varepsilon,\infty)\text{-}\mathrm{Var\,} u\leq\liminf_k \,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u_{n(k)}\qquad\text{for each $\varepsilon>0$.}\]
As the sequence $(u_n)$ is assumed to have uniformly bounded $(\varepsilon,\infty)$-variations, it follows that
$(\varepsilon,\infty)\text{-}\mathrm{Var\,} u<\infty$ for each $\varepsilon>0$, i.e., $u\in\mathcal R_\infty(\Omega)$. \end{remark}
\noindent {\it Proof of Theorem \ref{multi_var_frankova_any_p}.} Fix a strictly decreasing sequence $(\varepsilon_l)$ converging to zero. For $l=1$ apply Observation \ref{obs2} to the original sequence $(u_n)$ with $\varepsilon=\varepsilon_1$ to get a sequence-subsequence pair $(v_n^{\varepsilon_1})\supset(v_{n_1(k)}^{\varepsilon_1})$ in $BV(\Omega)$ and a $v^1\inBV(\Omega)$ satisfying
\[\|u_{n_1(k)}-v_{n_1(k)}^{\varepsilon_1}\|_p\leq \varepsilon_1\quad \text{for all $k\geq 1$, and}\qquad v_{n_1(k)}^{\varepsilon_1}\overset{k}{\to} v^1\quad \text{in $L^1(\Omega)$.}\] For $l=2$ apply Observation \ref{obs2} to the sequence $(u_{n_1(k)})$ with $\varepsilon=\varepsilon_2$ to get a sequence-subsequence pair $(v_{n_1(k)}^{\varepsilon_2})\supset(v_{n_2(k)}^{\varepsilon_2})$ in $BV(\Omega)$ and a $v^2\inBV(\Omega)$ satisfying
\[\|u_{n_2(k)}-v_{n_2(k)}^{\varepsilon_2}\|_p\leq \varepsilon_2\quad \text{for all $k\geq 1$, and}\qquad v_{n_2(k)}^{\varepsilon_2}\overset{k}{\to}v^2\quad \text{in $L^1(\Omega)$.}\] Continuing in this manner we obtain for each index $l$ a sequence-subsequence pair $(v_{n_{l-1}(k)}^{\varepsilon_l})\supset(v_{n_l(k)}^{\varepsilon_l})$ in $BV(\Omega)$ and a $v^l\inBV(\Omega)$ satisfying
\[\|u_{n_l(k)}-v_{n_l(k)}^{\varepsilon_l}\|_p\leq\varepsilon_l\quad \text{for all $k\geq 1$, and}\qquad v_{n_l(k)}^{\varepsilon_l}\overset{k}{\to}v^l\quad \text{in $L^1(\Omega)$.}\] Next, for $l\geq 1$ fixed, consider the diagonal index sequence $(n_k(k))_{k\geq l}$, which is a subsequence of $(n_l(j))_{j\geq 1}$. With $ n(k):=n_k(k)$ we therefore get that: for each $l\geq 1$, there holds \begin{equation}\label{key2}
\|u_{ n(k)}-v_{ n(k)}^{\varepsilon_l}\|_p\leq\varepsilon_l\quad \text{for all $k\geq l$, and}\quad
v_{ n(k)}^{\varepsilon_l}\overset{k}{\to} v^l\quad \text{in $L^1(\Omega)$.} \end{equation} We claim that $(v^l)$ is a Cauchy sequence in $L^1(\Omega)$. To show this let
$C=C(\Omega,p)$ be a constant such that $\|f\|_1\leq C\|f\|_p$ for all $f\in L^p(\Omega)$. Then, for any $\delta>0$ we first choose an index $l(\delta)$ so that $\varepsilon_l\leq\frac{\delta}{2C}$ for $l\geq l(\delta)$. Then, for $l,q\geq l(\delta)$ and any $k\geq \max(l,q)$, \eq{key2}${}_1$ gives that \begin{align*}
\|v^l-v^q\|_1
&\leq \|v^l-v_{ n(k)}^{\varepsilon_l}\|_1+\|v_{n(k)}^{\varepsilon_l}-u_{n(k)}\|_1
+\|u_{n(k)}-v_{n(k)}^{\varepsilon_q}\|_1+\|v_{n(k)}^{\varepsilon_q}-v^q\|_1\\
&\leq \|v^l-v_{ n(k)}^{\varepsilon_l}\|_1+C\varepsilon_l+C\varepsilon_q+\|v_{n(k)}^{\varepsilon_q}-v^q\|_1\\
&\leq \|v^l-v_{ n(k)}^{\varepsilon_l}\|_1+\delta +\|v_{n(k)}^{\varepsilon_q}-v^q\|_1. \end{align*}
Sending $k\to\infty$ we get from \eq{key2}${}_2$ that $\|v^l-v^q\|_1\leq \delta$ whenever $l,q\geq l(\delta)$, establishing the claim.
By completeness of $L^1(\Omega)$ we thus obtain the existence of a function $u\in L^1(\Omega)$ such that $v^l\to u$ in $L^1(\Omega)$. We claim that $u_{ n(k)}\to u$ in $L^1(\Omega)$. To verify this, fix any $\delta>0$ and choose $l$
so large that $\varepsilon_l\leq\frac{\delta}{3C}$ (with $C$ as above) and $\|u-v^l\|_1\leq\frac{\delta}{3}$. According to \eq{key2}${}_1$ we therefore have, for any $k\geq l$, that \begin{align*}
\|u-u_{n(k)}\|_1
&\leq \|u-v^l\|_1+\|v^l-v_{n(k)}^{\varepsilon_l}\|_1+\|v_{n(k)}^{\varepsilon_l}-u_{n(k)}\|_1\\
&\leq \textstyle\frac{\delta}{3}+\|v^l-v_{n(k)}^{\varepsilon_l}\|_1+C\|v_{n(k)}^{\varepsilon_l}-u_{n(k)}\|_p\\
&
\leq\textstyle\frac{2\delta}{3}+\|v^l-v_{n(k)}^{\varepsilon_l}\|_1. \end{align*}
Finally choose $k\geq l$ so large that $\|v^l-v_{n(k)}^{\varepsilon_l}\|_1<\frac{\delta}{3}$
(possible according to \eq{key2}${}_2$), giving $\|u-u_{n(k)}\|_1\leq\delta$. This establishes the claim, and concludes the proof of Theorem \ref{multi_var_frankova_any_p}. \qed
In the following two subsections we provide additional results for $p=1$ and $p=\infty$.
\subsection{The case $p=1$}\label{multi_var_case_p=1}
We shall establish attainment of $(\varepsilon,1)$-variation, right-continuity with respect to $\varepsilon$, and lower semi-continuity with respect to $L^1$-convergence. The arguments make use of Theorem \ref{multi_var_compact} and thus require that $\Omega$ is a $BV$ extension domain.
\begin{proposition}\label{attained_1}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be an open and bounded $BV$ extension domain
and assume $u\in L^1(\Omega)$ and $\varepsilon>0$. Then there is a function
$v\in BV(\Omega)$ with $\|u-v\|_1\leq\varepsilon$ and $\V v=(\varepsilon,1)\text{-}\mathrm{Var\,} u$. \end{proposition}
\begin{proof}
Recall from Lemma \ref{basic_props} that $\mathcal R_1(\Omega)=L^1(\Omega)$,
so that $(\varepsilon,1)\text{-}\mathrm{Var\,} u<\infty$. According to Definition \ref{eps_p_varn_defn}
there is a sequence $(v_k)\subset BV(\Omega)$ with $\|v_k-u\|_1\leq\varepsilon$
for all $k\geq 1$ and such that
\begin{equation}\label{v_k_2}
(\varepsilon,1)\text{-}\mathrm{Var\,} u=\lim_k \V v_k.
\end{equation}
Also, for all $k\geq1$,
\[\|v_k\|_1\leq \|v_k-u\|_1+\|u\|_1\leq\varepsilon+\|u\|_1.\]
Thus $(v_k)$ is bounded in $BV(\Omega)$, and Theorem \ref{multi_var_compact}
gives a subsequence $(v_{k(j)})\subset (v_k)$ and a $v\inBV(\Omega)$ so that
\begin{equation}\label{v_k_3}
v_{k(j)}\to v\quad\text{in $L^1(\Omega)$.}
\end{equation}
According to lower semi-continuity of $\V$ with respect to $L^1$-convergence
we have $\V v\leq\liminf_j \V v_{k(j)}$.
Together with \eq{v_k_2} this gives
\begin{equation}\label{v_k_4}
\V v\leq\liminf_j \V v_{k(j)}=\lim_j \V v_{k(j)}=(\varepsilon,1)\text{-}\mathrm{Var\,} u.
\end{equation}
On the other hand, we have for any $j$ that
\[\|u-v\|_1\leq \|u-v_{k(j)}\|_1+\|v_{k(j)}-v\|_1\leq\varepsilon+\|v_{k(j)}-v\|_1.\]
By sending $j\to\infty$ and using \eq{v_k_3} we therefore get $\|u-v\|_1\leq\varepsilon$.
Definition \ref{eps_p_varn_defn} therefore gives $(\varepsilon,1)\text{-}\mathrm{Var\,} u\leq \V v,$ showing that
$(\varepsilon,1)\text{-}\mathrm{Var\,} u= \V v$. \end{proof}
We next apply this last result to establish right-continuity with respect to $\varepsilon$.
\begin{proposition}\label{right_cont}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be an open and bounded $BV$ extension
domain and assume $u\in L^1(\Omega)$ and $\varepsilon_0>0$. Then
\begin{equation}\label{r_cont}
\lim_{\varepsilon\downarrow\varepsilon_0} \,\,(\varepsilon,1)\text{-}\mathrm{Var\,} u=(\varepsilon_0,1)\text{-}\mathrm{Var\,} u.
\end{equation} \end{proposition}
\begin{proof}
Since $L^1(\Omega)=\mathcal R_1(\Omega)$ the
right-hand side of \eq{r_cont} is finite. Also, according to part (3) of
Lemma \ref{monotonicity}, we have $(\varepsilon,1)\text{-}\mathrm{Var\,} u\leq (\varepsilon_0,1)\text{-}\mathrm{Var\,} u$
whenever $\varepsilon>\varepsilon_0$. Denoting the limit on the left-hand side of
\eq{r_cont} by $L_0$, it follows that $L_0$ exists as a finite number
satisfying
\begin{equation}\label{one_way}
L_0\leq (\varepsilon_0,1)\text{-}\mathrm{Var\,} u.
\end{equation}
For the opposite inequality we use Proposition \ref{attained_1} to select,
for each $\varepsilon>\varepsilon_0$, a function $v^\varepsilon\inBV(\Omega)$ with
\[\|u-v^\varepsilon\|_1\leq\varepsilon\qquad\text{and}\qquad \V v^\varepsilon=(\varepsilon,1)\text{-}\mathrm{Var\,} u\leq (\varepsilon_0,1)\text{-}\mathrm{Var\,} u<\infty.\]
Since,
\[\|v^\varepsilon\|_1\leq\|v^\varepsilon-u\|_1+\|u\|_1\leq \varepsilon+\|u\|_1<\infty,\]
we have that $\{v^\varepsilon\,|\, \varepsilon>\varepsilon_0\}$ is bounded in $BV(\Omega)$.
It follows from Theorem \ref{multi_var_compact} that there is a sequence $(\varepsilon_n)_{n\geq1}$
with $\varepsilon_n\downarrow \varepsilon_0$ and a function $v\inBV(\Omega)$ so that $v_n:=v^{\varepsilon_n}$
satisfies $v_n\to v$ in $L^1(\Omega)$. This gives
\begin{equation}\label{one_way_1}
L_0=\lim_{\varepsilon\downarrow\varepsilon_0} \,\,(\varepsilon,1)\text{-}\mathrm{Var\,} u=\lim_{n} \V v_n\equiv\liminf_n\V v_n\geq \V v.
\end{equation}
On the other hand,
\[\|u-v\|_1\leq\|u-v_n\|_1+\|v_n-v\|_1\leq \varepsilon_n+\|v_n-v\|_1,\]
so that sending $n\to\infty$ yields $\|u-v\|_1\leq\varepsilon_0$. It follows
from Definition \ref{eps_p_varn_defn} that
\[(\varepsilon_0,1)\text{-}\mathrm{Var\,} u\leq\V v,\]
which together with \eq{one_way_1} gives $(\varepsilon_0,1)\text{-}\mathrm{Var\,} u\leq L_0.$ \end{proof}
We next show how the two previous propositions yield lower semi-continuity of $(\varepsilon,1)\text{-}\mathrm{Var\,} u$ with respect to $L^1$-convergence.
\begin{proposition}\label{lsc_p=1}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be an open and bounded $BV$ extension
domain. If $u_n\to u$ in $L^1(\Omega)$, then
\[(\varepsilon,1)\text{-}\mathrm{Var\,} u\leq\liminf_n\,\,(\varepsilon,1)\text{-}\mathrm{Var\,} u_n \qquad\text{for each $\varepsilon>0$.}\] \end{proposition}
\begin{proof}
Fix $\varepsilon>0$. According to Proposition \ref{attained_1}, for each $n\geq 1$ there is a
function $v_n\inBV(\Omega)$ with
\[\|u_n-v_n\|_1\leq\varepsilon\qquad\text{and}\qquad (\varepsilon,1)\text{-}\mathrm{Var\,} u_n=\V v_n.\]
Let $\delta>0$, and choose $M=M(\delta)\in\mathbb{N}$ so that $\|u_n-u\|_1\leq\delta$
for $n\geq M$. For such $n$ we have
\[\|v_n-u\|_1\leq \|v_n-u_n\|_1+\|u_n-u\|_1\leq \varepsilon+\delta,\]
and therefore
\[(\varepsilon+\delta,1)\text{-}\mathrm{Var\,} u\leq \V v_n=(\varepsilon,1)\text{-}\mathrm{Var\,} u_n\qquad\text{for each $n\geq M$.}\]
It follows that
\[(\varepsilon+\delta,1)\text{-}\mathrm{Var\,} u\leq\liminf_n\,\,(\varepsilon,1)\text{-}\mathrm{Var\,} u_n\qquad\text{whenever $\delta>0$.}\]
Finally, sending $\delta\downarrow 0$ and using Proposition \ref{right_cont} give
\[(\varepsilon,1)\text{-}\mathrm{Var\,} u=\lim_{\delta\downarrow 0}\,\,(\varepsilon+\delta,1)\text{-}\mathrm{Var\,} u\leq \liminf_n\,\,(\varepsilon,1)\text{-}\mathrm{Var\,} u_n,\]
establishing the claim. \end{proof}
\subsection{The case $p=\infty$}\label{multi_var_case_p=infty}
We shall establish the results corresponding to the previous three propositions also for $p=\infty$. The proofs for the two first are similar to those for the case $p=1$, while for the proof of lower semi-continuity we follow Fra{\v{n}}kov{\'a}'s argument for (R4) in Section \ref{frankova_helly_extn} (cf.\ Proposition 3.6 in \cite{fr}). In each case there is the added ingredient that $L^1$-convergence implies almost everywhere convergence along a subsequence. We first verify that the infimum in the definition of $(\varepsilon,\infty)$-variation is attained.
\begin{proposition}\label{attained_infty}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be an open and bounded $BV$ extension domain,
and assume $u\in \mathcal R_\infty(\Omega)$ and $\varepsilon>0$. Then there is a function
$v\in BV(\Omega)$ with $\|u-v\|_\infty\leq\varepsilon$ and $\V v=(\varepsilon,\infty)\text{-}\mathrm{Var\,} u$. \end{proposition}
\begin{proof} Fix $u$ and $\varepsilon$ as stated.
According to Definition \ref{eps_p_varn_defn}
there is a sequence $(v_k)\subset BV(\Omega)$ with $\|v_k-u\|_\infty\leq\varepsilon$
for all $k\geq 1$ and such that
\begin{equation}\label{v_k_5}
\lim_k \V v_k=(\varepsilon,\infty)\text{-}\mathrm{Var\,} u<\infty.
\end{equation}
Also,
\[\|v_k\|_1\leq \|v_k-u\|_1+\|u\|_1\leq|\Omega| \|v_k-u\|_\infty+\|u\|_1
\leq |\Omega|\varepsilon+\|u\|_1<\infty.\]
It follows that $(v_k)$ is bounded in $BV(\Omega)$,
so that Theorem \ref{multi_var_compact} gives a subsequence $(v_{k(j)})\subset (v_k)$
and a $v\inBV(\Omega)$ with
\begin{equation}\label{v_k_6}
v_{k(j)}\to v\quad\text{in $L^1(\Omega)$, and}\quad \V v\leq\liminf_j \V v_{k(j)}.
\end{equation}
Note that \eq{v_k_6}${}_2$ and \eq{v_k_5} give
\begin{equation}\label{v_k_7}
\V v\leq\liminf_j \V v_{k(j)}\equiv\lim_j \V v_{k(j)}=(\varepsilon,\infty)\text{-}\mathrm{Var\,} u.
\end{equation}
For the opposite inequality we use that the $L^1$-convergence in \eq{v_k_6}${}_1$
implies almost everywhere convergence along a further subsequence $v_{k(j(i))}=:w_i$:
\begin{equation}\label{w_i_to_v}
w_i(x)\to v(x)\qquad\text{for all $x\in\Omega_0$,}
\end{equation}
where $\Omega_0\subset\Omega$ has full measure.
Next, as $\|u-w_i\|_\infty\leq \varepsilon$ for each $i\geq 1$, we have that for each $i\geq 1$
there is a set $\Omega_i\subset\Omega$ of full measure with
\begin{equation}\label{u-w_i}
|u(x)-w_i(x)|\leq \varepsilon\qquad\text{for each $x\in\Omega_i$.}
\end{equation}
With
\[\Omega':={\textstyle\bigcap}_{i=0}^\infty \Omega_i,\]
we have that $\Omega'$ has full measure, while \eq{u-w_i} gives
\begin{equation}\label{u-v}
|u(x)-v(x)|\leq |u(x)-w_i(x)|+|w_i(x)-v(x)|\leq\varepsilon +|w_i(x)-v(x)|
\end{equation}
for each $x\in\Omega'$ and all $i\geq 1$.
Sending $i\to\infty$ we obtain from \eq{w_i_to_v} that
\[|u(x)-v(x)|\leq \varepsilon\qquad\text{for each $x\in\Omega'$,}\]
so that
\[\|u-v\|_\infty\leq \varepsilon.\]
As $v\inBV(\Omega)$, Definition \ref{eps_p_varn_defn} gives $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u\leq \V v$.
Together with \eq{v_k_7} this shows that $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u= \V v$. \end{proof}
We next establish right-continuity of $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u$ with respect to $\varepsilon$. \begin{proposition}\label{right_cont_inf}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be an open and bounded $BV$ extension domain,
and let $u\in \mathcal R_\infty(\Omega)$ and $\varepsilon_0>0$. Then
\begin{equation}\label{r_cont_inf}
\lim_{\varepsilon\downarrow\varepsilon_0} \,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u=(\varepsilon_0,\infty)\text{-}\mathrm{Var\,} u.
\end{equation} \end{proposition}
\begin{proof}
Since $u\in \mathcal R_\infty(\Omega)$ the right-hand side of \eq{r_cont_inf} is finite.
According to Lemma \ref{monotonicity}, we have $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u\leq (\varepsilon_0,\infty)\text{-}\mathrm{Var\,} u$
for $\varepsilon>\varepsilon_0$. Denoting the limit on the left-hand side of
\eq{r_cont_inf} by $L_0$, it follows that $L_0$ exists as a finite number
satisfying
\begin{equation}\label{one_way_inf_1}
L_0\leq (\varepsilon_0,\infty)\text{-}\mathrm{Var\,} u.
\end{equation}
For the opposite inequality we use Proposition \ref{attained_infty} to select,
for each $\varepsilon>\varepsilon_0$, a function $v^\varepsilon\inBV(\Omega)$ with
\[\|u-v^\varepsilon\|_\infty\leq\varepsilon\qquad\text{and}\qquad \V v^\varepsilon=(\varepsilon,\infty)\text{-}\mathrm{Var\,} u\leq L_0.\]
Also,
\[\|v^\varepsilon\|_1\leq\|v^\varepsilon-u\|_1+\|u\|_1\leq |\Omega|\|v^\varepsilon-u\|_\infty+\|u\|_1\leq |\Omega|\varepsilon+\|u\|_1<\infty,\]
so that $\{v^\varepsilon\,|\, \varepsilon>\varepsilon_0\}$ is bounded in $BV(\Omega)$.
According to Theorem \ref{multi_var_compact} there is a sequence $(\varepsilon_n)_{n\geq1}$
with $\varepsilon_n\downarrow \varepsilon_0$ and a function $v\inBV(\Omega)$ so that $v_n:=v^{\varepsilon_n}$
satisfies $v_n\to v$ in $L^1(\Omega)$.
This gives
\begin{equation}\label{one_way_inf}
L_0=\lim_{\varepsilon\downarrow\varepsilon_0} \,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u=\lim_{n} \V v_n\equiv\liminf_n\V v_n\geq \V v.
\end{equation}
Next, since $v_n\to v$ in $L^1(\Omega)$, there is a full-measure set $\Omega_0\subset \Omega$
such that
\begin{equation}\label{next}
v_n(x)\to v(x)\qquad\text{for each $x\in\Omega_0$.}
\end{equation}
Also, since $\|u-v_n\|_\infty\leq\varepsilon_n$ for each $n\geq 1$, there are full measure sets $\Omega_n$
such that
\[|u(x)-v_n(x)|\leq \varepsilon_n\qquad\text{for each $x\in\Omega_n$, $n\geq 1$.}\]
Therefore, with
\[\Omega':={\textstyle\bigcap}_{n=0}^\infty \Omega_n,\]
we have that $\Omega'$ is of full measure and such that
\[|u(x)-v(x)|\leq |u(x)-v_n(x)|+|v_n(x)-v(x)|\leq \varepsilon_n+|v_n(x)-v(x)|\]
for each $x\in\Omega'$ and for all $n\geq 1$.
Sending $n\to\infty$ and using \eq{next} yields $|u(x)-v(x)|\leq\varepsilon_0$ for each $x\in\Omega'$,
so that
\[\|u-v\|_\infty\leq\varepsilon_0.\]
As $v\inBV(\Omega)$, Definition \ref{eps_p_varn_defn} gives
\[(\varepsilon_0,\infty)\text{-}\mathrm{Var\,} u\leq\V v,\]
which together with \eq{one_way_inf} gives $(\varepsilon_0,\infty)\text{-}\mathrm{Var\,} u\leq L_0$. Combined with \eq{one_way_inf_1}
this finishes the proof. \end{proof}
We finally establish lower semi-continuity of $(\varepsilon,\infty)\text{-}\mathrm{Var\,}$ with respect to $L^1$-convergence.
\begin{proposition}\label{lsc_p=infty}
Let $\Omega\subset\mathbb{R}^N$ ($N\geq1$) be an open and bounded $BV$ extension domain.
If $(u_n)\subset\mathcal R_\infty(\Omega)$ and $u_n\to u$ in $L^1(\Omega)$, then
\begin{equation}\label{lsc_infty}
(\varepsilon,\infty)\text{-}\mathrm{Var\,} u\leq\liminf_n\,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u_n \qquad\text{for each $\varepsilon>0$.}
\end{equation} \end{proposition}
\begin{proof}
Fix $\varepsilon>0$. If the right-hand side of \eq{lsc_infty} is infinite, there is nothing to prove.
So assume $L:=\liminf_n\,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u_n<\infty$. We select a subsequence $(u_{n(k)})\subset(u_n)$
with
\[\lim_k \,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u_{n(k)}=L.\]
As $u_{n(k)}\to u$ in $L^1(\Omega)$ we can extract a further subsequence
$(u_{n(k(j))})\subset (u_{n(k)})$ so that
\begin{equation}\label{pw1}
u_{n(k(j))}(x)\to u(x) \qquad\text{for each $x\in \Omega_{-1}$,}
\end{equation}
where $\Omega_{-1}\subset \Omega$ has full measure.
Since each $u_{n(k(j))}\in\mathcal R_\infty(\Omega)$, Proposition \ref{attained_infty} gives
a sequence $(v_j)\subset BV(\Omega)$ with
\begin{equation}\label{vj}
\V v_j=(\varepsilon,\infty)\text{-}\mathrm{Var\,} u_{n(k(j))}\qquad\text{and}\qquad \|v_j-u_{n(k(j))}\|_\infty\leq \varepsilon
\qquad\text{for each $j\geq1$.}
\end{equation}
We therefore have
\begin{equation}\label{next_next}
\lim_j \V v_j = \lim_j (\varepsilon,\infty)\text{-}\mathrm{Var\,} u_{n(k(j))} =\lim_k \,\,(\varepsilon,\infty)\text{-}\mathrm{Var\,} u_{n(k)}=L<\infty,
\end{equation}
and
\begin{align*}
\|v_j\|_1&\leq\|v_j-u_{n(k(j))}\|_1+\|u_{n(k(j))}\|_1\\
&\leq |\Omega|\|v_j-u_{n(k(j))}\|_\infty+\|u_{n(k(j))}\|_1
\leq |\Omega|\varepsilon+ \|u_{n(k(j))}\|_1,
\end{align*}
which is uniformly bounded with respect to $j$ since $u_n\to u$ in $L^1(\Omega)$.
The sequence $(v_j)$ is therefore bounded in $BV(\Omega)$, and Theorem
\ref{multi_var_compact} gives a subsequence $(v_{j(i)})\subset(v_j)$ and
a $v\inBV(\Omega)$ such that $v_{j(i)}\to v$ in $L^1(\Omega)$. According to \eq{next_next}
and lower semi-continuity of $\V$ with respect to $L^1$-convergence, we get
\begin{equation}\label{var_v}
\V v\leq \liminf_i \V v_{j(i)}\equiv \lim_j \V v_j=L.
\end{equation}
We then extract a further subsequence $(v_{j(i(h))})\subset (v_{j(i)})$ together with a
full measure set $\Omega_0\subset\Omega$ so that
\begin{equation}\label{pw2}
v_{j(i(h))}(x)\to v(x)\qquad\text{for each $x\in\Omega_0$.}
\end{equation}
Finally, according to \eq{vj}${}_2$ there is, for each $j\geq 1$, a full measure set
$\Omega_j\subset\Omega$ such that
\begin{equation}\label{pw3}
|u_{n(k(j))}-v_j(x)|\leq\varepsilon\qquad\text{for each $x\in\Omega_j$, $j\geq1$.}
\end{equation}
Defining the full measure set
\[\Omega':={\textstyle\bigcap}_{j=-1}^\infty \Omega_j,\]
we therefore get from \eq{pw3} that
\begin{align*}
|u(x)-v(x)|&\leq |u(x)-u_{n(k(j(i(h))))}(x)|+|u_{n(k(j(i(h))))}(x)-v_{j(i(h))}(x)|+|v_{j(i(h))}(x)-v(x)|\\
&\leq |u(x)-u_{n(k(j(i(h))))}(x)|+\varepsilon+|v_{j(i(h))}(x)-v(x)|\qquad\text{for each $x\in\Omega'$.}
\end{align*}
Sending $h\to\infty$ and using \eq{pw1} and \eq{pw2} gives $|u(x)-v(x)|\leq\varepsilon$ for
all $x\in\Omega'$, so that
\begin{equation}\label{inf_eps}
\|u(x)-v(x)\|_\infty\leq\varepsilon.
\end{equation}
As $v\inBV(\Omega)$, Definition \ref{eps_p_varn_defn} together with \eq{var_v} and \eq{inf_eps}
give
\[(\varepsilon,\infty)\text{-}\mathrm{Var\,} u\leq \V v\leq L.\]
As $\varepsilon>0$ is arbitrary, this gives \eq{lsc_infty}. \end{proof}
\subsection{Regulated vs.\ $\infty$-regulated functions of a single variable}\label{reg_vs_infty_reg_1_d}
In this section we clarify the relationship between standard regulated functions (cf.\ Definition \ref{reg}) and $\infty$-regulated functions (cf.\ Definition \ref{p_reg} with $p=\infty$) defined on an open interval $I$. The following result shows that the latter coincide with
``essentially regulated'' functions. (Recall that $\|\cdot\|$ denotes uniform norm; step functions were defined in Section \ref{frankova_helly_extn}.)
\begin{proposition}\label{1_d_reg_vs_infty_reg}
Let $I=(a,b)$ be an open and bounded interval, and $u:I\to\mathbb{R}$.
Then the following statements are equivalent:
\begin{itemize}
\item[(a)] $u$ is $\infty$-regulated, i.e., $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u<\infty$
for each $\varepsilon>0$.
\item[(b)] $u$ can be realized as an $L^\infty$-limit of step functions on $I$.
\item[(c)] $u$ has a regulated version, i.e., there exists $\bar u\in\mathcal R(I)$
with $\bar u(x)=u(x)$ for a.a.\ $x\in I$.
\end{itemize}
\end{proposition}
\begin{proof}
(a) $\Rightarrow$ (b): Fix $\varepsilon>0$. By Definition \ref{p_reg} there is a
function $v^\varepsilon\inBV(I)$ with $\|u-v^\varepsilon\|_\infty\leq \frac{\varepsilon}{2}$. According to
Theorem 7.2 in \cite{le}, $v^\varepsilon$ has a version $\bar v^\varepsilon$ belonging to
$BPV(I)\subset\mathcal R(I)$.
It follows from property (R2) in Section \ref{frankova_helly_extn} that there is a step
function $w^\varepsilon:I\to\mathbb{R}$ satisfying $\|\bar v^\varepsilon-w^\varepsilon\|\leq\frac{\varepsilon}{2}$.
Therefore,
\begin{align*}
\|u-w^\varepsilon\|_\infty&\leq \|u-v^\varepsilon\|_\infty+\|v^\varepsilon-w^\varepsilon\|_\infty\\
&=\|u-v^\varepsilon\|_\infty+\|\bar v^\varepsilon-w^\varepsilon\|_\infty\\
&\leq\|u-v^\varepsilon\|_\infty+\|\bar v^\varepsilon-w^\varepsilon\|
\leq\textstyle\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.
\end{align*}
As $w^\varepsilon$ is a step function and $\varepsilon$ is arbitrary, statement (b) follows.
(b) $\Rightarrow$ (c): Assume $(u_n)$ is a sequence of step functions on $I$
with $\|u-u_n\|_\infty\to 0$.
For each $n\geq 1$ let $\bar u_n$ denote the right-continuous version of $u_n$, which is obtained
from $u_n$ by changing at most a finite number of its values. It follows that
$\|u-\bar u_n\|_\infty\to 0$. Therefore, there is a measurable set $E\subset I$ with $|E|=|I|$
and such that $\bar u_n\to u$ uniformly on $E$. We claim that the sequence $(\bar u_n)$ is uniformly
Cauchy on {\em all} of $I$, i.e.,
\begin{equation}\label{sub_claim}
\|\bar u_m-\bar u_n\|\to 0 \qquad\text{as $m,n\to\infty$.}
\end{equation}
Assuming the validity of \eq{sub_claim} for now, we have, by completeness of $(\mathcal B(I),\|\cdot\|)$,
that there exists $\bar u\in \mathcal B(I)$ such that $\bar u_n\to \bar u$ uniformly on $I$.
In particular, according to property (R2) in Section \ref{frankova_helly_extn}, $\bar u\in \mathcal R(I)$.
Since $\bar u_n\to u$ uniformly on $E$, it follows that $\bar u$ and $u$ agree on $E$, establishing (c).
It remains to verify \eq{sub_claim}. For this, fix $\varepsilon>0$. Since $\bar u_n\to u$ uniformly on $E$ we have:
there is an $N_\varepsilon\in\mathbb{N}$ such that
\begin{equation}\label{on_E}
\sup_{y\in E} |\bar u_n(y)-\bar u_m(y)|\leq\varepsilon\qquad\text{whenever $m,n\geq N_\varepsilon$.}
\end{equation}
Fix $m,n\geq N_\varepsilon$ and $x\in I$. Since $\bar u_n$ and $\bar u_m$ are right-continuous
step functions, there is a $\delta_x>0$ such that both $\bar u_n$ and $\bar u_m$ are
constant on $J_x:=[x,x+\delta_x)$. Since $E\subset I$ has full measure, $E$ is dense in $I$,
and there is therefore a point $y_x\in E\cap J_x$. As $\bar u_n$ and $\bar u_m$ are
constant on $J_x$ we get from \eq{on_E} that
\[|\bar u_n(x)-\bar u_m(x)|=|\bar u_n(y_x)-\bar u_m(y_x)|\leq \varepsilon.\]
As $x\in I$ is arbitrary it follows that
\[\sup_{x\in I}|\bar u_n(x)-\bar u_m(x)|\leq \varepsilon.\]
This shows that $\|\bar u_n-\bar u_m\|\leq\varepsilon$ whenever $m,n\geq N_\varepsilon$,
establishing \eq{sub_claim}.
(c) $\Rightarrow$ (a): Assume $\bar u\in\mathcal R(I)$ is a version of $u$, and fix $\varepsilon>0$.
According to (R3) in Section \ref{frankova_helly_extn}, we have $\evar \bar u<\infty$, i.e.,
there is a function $v^\varepsilon\inBPV(I)$ with $\|\bar u-v^\varepsilon\|\leq \varepsilon$.
According to Theorem 7.2 in \cite{le}, $v^\varepsilon$ belongs to $BV(I)$ and we have
\[\|u-v^\varepsilon\|_\infty= \|\bar u-v^\varepsilon\|_\infty\leq \|\bar u-v^\varepsilon\|\leq \varepsilon.\]
Thus, $(\varepsilon,\infty)\text{-}\mathrm{Var\,} u<\infty$, and since $\varepsilon>0$ is arbitrary, we have $u\in\mathcal R_\infty(I)$. \end{proof}
\section{Saturation}\label{aa}
The strategy in Fra{\v{n}}kov{\'a}'s extension of Helly's theorem may be abstracted as follows. Assume given a criterion for precompactness of sequences in some function space $\mathcal X$, which is contained in a larger function space $\mathcal Y$. One then considers sequences $(u_n)$ in $\mathcal Y$ with the property that for each $\varepsilon>0$ there is a ``nearby'' sequence $(v_n^\varepsilon)_n\subset \mathcal X$ which satisfies the original criterion. Fra{\v{n}}kov{\'a}'s work \cite{fr} shows that this strategy provides a genuine extension of Helly's theorem in $\mathcal X=BPV(I)$ to a criterion for precompactness in $\mathcal Y=\mathcal R(I)$. Similarly, Theorem \ref{multi_var_frankova_any_p} above gives a proper extension of Theorem \ref{multi_var_compact} from $\mathcal X=BV(\Omega)$ to $\mathcal Y=\mathcal R_p(\Omega)$ ($p\in[1,\infty]$).
It is natural to consider this strategy for other compactness criteria. In this section we consider two concrete cases: Fra{\v{n}}kov{\'a}'s own theorem and the Ascoli-Arzel\`{a} theorem. However, our findings indicate that this approach does not readily generalize to other settings.
\subsection{Saturation of Fra{\v{n}}kov{\'a}'s theorem}\label{reg_sat}
Given Fra{\v{n}}kov{\'a}'s extension of Helly's theorem, it is natural to ask if one can ``take another step'' and obtain a further extension. The idea would be to have $\mathcal R(I)$ and $\evar$ play the roles of $BV(I)$ and $\var$ in the proof of Theorem \ref{fr_thm} above. For $(u_n)\subset\mathcal B(I)$ we would thus make the following assumptions: \begin{itemize}
\item[(i)] $(u_n)$ is bounded in $\mathcal B(I)$, and
\item[(ii)] for each $\delta>0$ there is a sequence $(w_n^\delta)_n\subset\mathcal R(I)$
satisfying $\|u_n-w_n^\delta\|\leq\delta$ for all $n\geq1$, and $(w_n^\delta)_n$ has
uniformly bounded $\varepsilon$-variations. \end{itemize} However, it is not hard to see that (ii) implies that the original sequence $(u_n)$ itself has uniformly bounded $\varepsilon$-variations. Indeed, we may even weaken (ii) by requiring that for each $\delta>0$ there is a single (small) $\varepsilon'>0$ such that the approximating sequence $(w_n^\delta)_n$ has uniformly bounded $\varepsilon'$-variation: \begin{itemize}
\item[(ii)$'$] for each $\delta>0$ there is an $\varepsilon'(\delta)>0$, with $\varepsilon'(\delta)\to0$ as $\delta\to0$,
and a sequence $(w_n^\delta)_n\subset\mathcal R(I)$
satisfying $\|u_n-w_n^\delta\|\leq\delta$ for all $n\geq1$, and
\begin{equation}\label{weakened}
\sup_n\,\,\varepsilon'(\delta)\text{-var}\,w_n^\delta<\infty.
\end{equation} \end{itemize} Assumption (ii)$'$ gives that, for each $\delta>0$, there is a $K_\delta<\infty$ such that $\varepsilon'(\delta)\text{-var}\,w_n^\delta\leq K_\delta$ for all $n\geq1$. In particular, for each $n\geq 1$ there is a function $z_n^\delta\in BPV(I)$ with
$\|w_n^\delta-z_n^\delta\|\leq\varepsilon'(\delta)$ and $\var z_n^\delta\leq K_\delta$.
Given $\varepsilon>0$ we can then choose $\delta(\varepsilon)>0$ so small that $\delta(\varepsilon)+\varepsilon'(\delta(\varepsilon))\leq\varepsilon$. Setting $M_\varepsilon:=K_{\delta(\varepsilon)}$ and $v_n^\varepsilon:=z_n^{\delta(\varepsilon)}$, we get
\[\|u_n-v_n^\varepsilon\|\leq \|u_n-w_n^{\delta(\varepsilon)}\|+\|w_n^{\delta(\varepsilon)}-z_n^{\delta(\varepsilon)}\|\leq \varepsilon \qquad\text{and}\qquad \var v_n^\varepsilon\leq M_\varepsilon,\] for all $n\geq 1$. Thus, $\sup_n\evar u_n\leq M_\varepsilon<\infty$ for each $\varepsilon>0$, i.e., the original sequence $(u_n)$ has uniformly bounded $\varepsilon$-variations.
This shows that the assumptions (i) and (ii)$'$ (a fortiori (i) and (ii)) imply that the given sequence already meets the assumptions of Theorem \ref{fr_thm}. That is, Fra{\v{n}}kov{\'a}'s theorem is saturated in this respect.
\subsection{Saturation of the Ascoli-Arzel\`{a} theorem}\label{aa_sat}
It is natural to consider the same strategy for other standard compactness criteria. In this section we consider the Ascoli-Arzel\`{a} theorem, which can be formulated as follows (see \cite{dib}, pp.\ 203-204).
(For an open set $\Omega\subset\mathbb{R}^N$ we equip $C(\Omega)$ with uniform norm $\|\cdot\|$.)
\begin{theorem}[Ascoli-Arzel\`{a}, version I] \label{aa1}
Assume $(u_n)\subset C(\Omega)$ is a bounded sequence for which there
is a modulus of continuity\footnote{I.e., $\omega:[0,\infty)\to[0,\infty)$ is continuous
and increasing and with $\omega(0)=0$.} $\omega$ such that
\begin{equation}\label{mod_cont}
\sup_n |u_n(x)-u_n(y)|\leq \omega(|x-y|)\qquad\text{for all $x,y\in\Omega$.}
\end{equation}
Then there is a subsequence $(u_{n(k)})\subset(u_n)$ and a function $u\in C(\Omega)$
such that $u(x)=\lim_k u_{n(k)}(x)$ for every $x\in\Omega$. \end{theorem} \noindent (The convergence is uniform on compact subsets of $\Omega$, but this is not important for what follows.) It is convenient to also record an alternative formulation in terms of uniform equicontinuity of $(u_n)$, i.e., the requirement that for any $\delta>0$ there is a $\eta(\delta)>0$ such that
$\sup_n|u_n(x)-u_n(y)|\leq\delta$ whenever $x,y\in\Omega$ satisfy $|x-y|\leq \eta(\delta)$. \begin{theorem}[Ascoli-Arzel\`{a}, version II]\label{aa2}
If $(u_n)\subset C(\Omega)$ is bounded and uniformly equicontinuous,
then the conclusion of Theorem \ref{aa1} holds. \end{theorem}
To implement the extension strategy formulated above, we consider a bounded sequence $(u_n)\subset \mathcal B(\Omega)$ and assume that for each $\varepsilon>0$ there is a $C(\Omega)$-sequence which is uniformly $\varepsilon$-close to $(u_n)$ and which meets the conditions in Theorem \ref{aa1}. That is, for each $\varepsilon>0$ there is a sequence $(v_n^\varepsilon)_n\subset C(\Omega)$ and a modulus of continuity $\omega^\varepsilon$ so that \begin{equation}\label{aa3}
\|u_n-v_n^\varepsilon\|\leq \varepsilon\qquad\text{for all $n$,} \end{equation} and \begin{equation}\label{aa4}
\sup_n |v_n^\varepsilon(x)-v_n^\varepsilon(y)|\leq \omega^\varepsilon(|x-y|)
\qquad\text{for all $x,y\in\Omega$.} \end{equation} However, these conditions imply that the original sequence $(u_n)$ itself meets the conditions in Theorem \ref{aa2}.
Indeed, for any $n$ and for any $\varepsilon>0$, \eq{aa3}-\eq{aa4} yield
\[|u_n(x)-u_n(y)|\leq |u_n(x)-v_n^\varepsilon(x)|+|v_n^\varepsilon(x)-v_n^\varepsilon(y)|
+|v_n^\varepsilon(y)-u_n(y)|\leq 2\varepsilon+\omega^\varepsilon(|x-y|).\] Then, given $\delta>0$, set $\varepsilon:=\frac{\delta}{3}$ and choose $\eta(\delta)$ so small that $\omega^\varepsilon(s)\leq\frac{\delta}{3}$ for $s\in[0,\eta(\delta)]$. This gives
\[\sup_n|u_n(x)-u_n(y)|\leq\delta\qquad\text{whenever $x,y\in\Omega$ satisfy $|x-y|\leq \eta(\delta)$.}\] In particular, $(u_n)$ is a sequence in $C(\Omega)$ to which Theorem \ref{aa2} applies. This shows that also the Ascoli-Arzel\`{a} theorem is saturated with respect to Fra{\v{n}}kov{\'a}'s extension strategy.
\begin{remark}
Similar considerations show that the Kolmogorov-Riesz theorem characterizing
precompactness in $L^p(\mathbb{R}^N)$ is likewise saturated. \end{remark}
\begin{bibdiv} \begin{biblist} \bib{afp}{book}{
author={Ambrosio, Luigi},
author={Fusco, Nicola},
author={Pallara, Diego},
title={Functions of bounded variation and free discontinuity problems},
series={Oxford Mathematical Monographs},
publisher={The Clarendon Press, Oxford University Press, New York},
date={2000},
pages={xviii+434},
isbn={0-19-850245-1},
review={\MR{1857292}}, } \bib{da}{article}{
author={Davison, T. M. K.},
title={A generalization of regulated functions},
journal={Amer. Math. Monthly},
volume={86},
date={1979},
number={3},
pages={202--204},
issn={0002-9890},
review={\MR{522344}},
doi={10.2307/2321523}, } \bib{dib}{book}{
author={DiBenedetto, Emmanuele},
title={Real analysis},
series={Birkh\"{a}user Advanced Texts: Basler Lehrb\"{u}cher. [Birkh\"{a}user
Advanced Texts: Basel Textbooks]},
publisher={Birkh\"{a}user Boston, Inc., Boston, MA},
date={2002},
pages={xxiv+485},
isbn={0-8176-4231-5},
review={\MR{1897317}},
doi={10.1007/978-1-4612-0117-5}, } \bib{di}{book}{
author={Dieudonn\'{e}, J.},
title={Foundations of modern analysis},
series={Pure and Applied Mathematics, Vol. X},
publisher={Academic Press, New York-London},
date={1960},
pages={xiv+361},
review={\MR{0120319}}, } \bib{dn}{book}{
author={Dudley, Richard M.},
author={Norvai\v{s}a, Rimas},
title={Differentiability of six operators on nonsmooth functions and
$p$-variation},
series={Lecture Notes in Mathematics},
volume={1703},
note={With the collaboration of Jinghua Qian},
publisher={Springer-Verlag, Berlin},
date={1999},
pages={viii+277},
isbn={3-540-65975-7},
review={\MR{1705318}},
doi={10.1007/BFb0100744}, } \bib{fr}{article}{
author={Fra\v{n}kov\'{a}, Dana},
title={Regulated functions},
language={English, with Czech summary},
journal={Math. Bohem.},
volume={116},
date={1991},
number={1},
pages={20--59},
issn={0862-7959},
review={\MR{1100424}}, } \bib{le}{book}{
author={Leoni, Giovanni},
title={A first course in Sobolev spaces},
series={Graduate Studies in Mathematics},
volume={105},
publisher={American Mathematical Society, Providence, RI},
date={2009},
pages={xvi+607},
isbn={978-0-8218-4768-8},
review={\MR{2527916}},
doi={10.1090/gsm/105}, } \end{biblist} \end{bibdiv}
\end{document} | arXiv |
The rocky road to organics needs drying
Ingredients for microbial life preserved in 3.5 billion-year-old fluid inclusions
Helge Mißbach, Jan-Peter Duda, … Volker Thiel
Geoelectrochemistry-driven alteration of amino acids to derivative organics in carbonaceous chondrite parent bodies
Yamei Li, Norio Kitadai, … Kristin Johnson-Finn
The Boring Billion, a slingshot for Complex Life on Earth
Indrani Mukherjee, Ross R. Large, … Leonid V. Danyushevsky
Tectonically-driven oxidant production in the hot biosphere
Jordan Stone, John O. Edgar, … Jon Telling
In-situ preservation of nitrogen-bearing organics in Noachian Martian carbonates
Mizuho Koike, Ryoichi Nakada, … Atsuko Kobayashi
Mixing of meteoric and geothermal fluids supports hyperdiverse chemosynthetic hydrothermal communities
Daniel R. Colman, Melody R. Lindsay & Eric S. Boyd
Supply of phosphate to early Earth by photogeochemistry after meteoritic weathering
Dougal J. Ritson, Stephen J. Mojzsis & John. D. Sutherland
Indigenous and exogenous organics and surface–atmosphere cycling inferred from carbon and oxygen isotopes at Gale crater
H. B. Franz, P. R. Mahaffy, … R. E. Summons
Goldilocks at the dawn of complex life: mountains might have damaged Ediacaran–Cambrian ecosystems and prompted an early Cambrian greenhouse world
Fabricio Caxito, Cristiano Lana, … Carlos E. Ganade
Muriel Andreani ORCID: orcid.org/0000-0001-8043-09051,2,
Gilles Montagnac ORCID: orcid.org/0000-0001-9938-02821,
Clémentine Fellah1,
Jihua Hao3,4,5,
Flore Vandier1,
Isabelle Daniel ORCID: orcid.org/0000-0002-1448-79191,
Céline Pisapia ORCID: orcid.org/0000-0002-1432-436X6,
Jules Galipaud7,8,
Marvin D. Lilley9,
Gretchen L. Früh Green10,
Stéphane Borensztajn6 &
Bénédicte Ménez6
Nature Communications volume 14, Article number: 347 (2023) Cite this article
How simple abiotic organic compounds evolve toward more complex molecules of potentially prebiotic importance remains a missing key to establish where life possibly emerged. The limited variety of abiotic organics, their low concentrations and the possible pathways identified so far in hydrothermal fluids have long hampered a unifying theory of a hydrothermal origin for the emergence of life on Earth. Here we present an alternative road to abiotic organic synthesis and diversification in hydrothermal environments, which involves magmatic degassing and water-consuming mineral reactions occurring in mineral microcavities. This combination gathers key gases (N2, H2, CH4, CH3SH) and various polyaromatic materials associated with nanodiamonds and mineral products of olivine hydration (serpentinization). This endogenous assemblage results from re-speciation and drying of cooling C–O–S–H–N fluids entrapped below 600 °C–2 kbars in rocks forming the present-day oceanic lithosphere. Serpentinization dries out the system toward macromolecular carbon condensation, while olivine pods keep ingredients trapped until they are remobilized for further reactions at shallower levels. Results greatly extend our understanding of the forms of abiotic organic carbon available in hydrothermal environments and open new pathways for organic synthesis encompassing the role of minerals and drying. Such processes are expected in other planetary bodies wherever olivine-rich magmatic systems get cooled down and hydrated.
In nature, very few organic compounds are recognized as abiotic, i.e., formed by mechanisms that do not involve life1,2. Abiotic methane (CH4) is the most abundant of those compounds, and can be accompanied by short-chain hydrocarbons (ethane, propane) or organic acids (formate, acetate) in fluids3,4,5,6,7,8 occurring in molecular hydrogen (H2)-enriched hydrothermal systems where olivine-bearing rocks are altered via serpentinization reactions9 in various geological contexts (i.e., subduction zones, ophiolites, mid-ocean ridges—MOR). Because of this limited variety of small abiotic organic molecules and their strong dilution in hydrothermal fluids, prebiotic reactions cannot easily lead to more complex molecules of biological interest; thus, constituting a limiting factor for a unifying hypothesis for a hydrothermal origin of life on Earth. Without evidence for alternative abiotic organic molecules or pathways and based on an abundance of diverse organic molecules in comets and meteorites10,11,12 (e.g., carbonaceous kerogen-like material, amino acids, polycyclic aromatic, or aliphatic hydrocarbons) many have considered that life's ingredients had an extraterrestrial origin.
Recent studies of serpentinized rocks along the Mid-Atlantic Ridge (MAR) have highlighted low temperature (T), abiotic formation of aromatic amino acids via Friedel–Crafts reactions catalyzed by an iron-rich saponite clay13. Such a process requires a nitrogen source for amine formation and a polyaromatic precursor whose origin remains unknown, and suggests the availability of more diverse abiotic organic reactants than previously expected on Earth, notably in the subseafloor. The discovery of low-T formation of abiotic carbonaceous matter in ancient oceanic lithosphere14 also leads to the consideration of new paradigms for organic synthesis pathways within the rocks hosting hydrothermal fluid circulation15. Processes leading to such complex, condensed compounds, during rock alteration are unknown but must differ from the mineral-catalyzed Fischer-Tropsch Type (FTT) process that is the most invoked so far in hydrothermal fluids16,17,18,19 to explain the formation short-chained hydrocarbons. Understanding the variety and formation mechanisms of abiotic organic compounds on Earth, as well as their preservation, has important implications for the global carbon cycle, but also expands the inventory of the forms of carbon available for present-day ecosystems and prebiotic chemistry, and compliments data from extraterrestrial systems20,21.
Here we demonstrate how deep mid-ocean ridge processes can provide an unexpected diversity of abiotic organics as gaseous and condensed phases thanks to water-consuming mineral reactions. Our study focuses on the investigation of olivine mineral microcavities (secondary fluid inclusions (FI)) aligned along ancient fracture planes where circulating fluids were trapped within one of the deepest igneous-rock sections drilled along the MAR, i.e., IODP Hole 1309D, 1400 meter-depth below seafloor – m.b.s.f., at the Atlantis Massif (30°N MAR, IODP Expeditions 304–305, Fig. 1). Five km to the south of Site 1309, Atlantis Massif hosts the Lost City hydrothermal field22 where the discharge of abiotic H2, CH4 and formate have been observed in fluids19,23. Within the shallow rock substrate of Hole 1309D (~170 m.b.s.f), abiotic amino acids were identified13. At deeper levels (1100–1200 m.b.s.f), olivine-rich igneous rocks such as troctolites are particularly fresh and rich in FIs where they form linear trails of various orientations within olivine grains (Fig. 1b–e). Such FIs are inherited from the first stages of rock cracking and healing during cooling of the lithosphere, allowing the trapping of circulating fluids. Crack-healing of olivine is expected between 600 and 800 °C24,25 and at Hole 1309D fluid trapping occurs down to ~700 °C–6000 m.b.s.f. (P~2 kbar)26. During cooling, rocks were progressively exhumed below an extensive fault zone up to their present-day position (P < 0.3 kbar and T~100 °C27).
Fig. 1: Location and characteristics of the magmatic rock samples.
a High resolution bathymetric map of the Atlantis Massif hosting the Lost City hydrothermal field. The massif is mainly composed of mantle and mantle-derived magmatic rocks exhumed along the Mid-Atlantic Ridge (MAR) parallel to the Atlantis transform fault ("m.b.s.l." stands for meters below sea level). The inset shows its location at the MAR scale. b, c Thin section scans in natural and cross-polarized light, respectively, of a characteristic troctolite sample used in this study and recovered at 1100 meters below sea floor by drilling the Atlantis massif during the IODP Expedition 304–305 Leg 1309D (sample 1309D–228R2). d Optical view in transmitted cross-polarized light of olivine (Ol) kernels hosted in the same troctolite. Red arrows show planes of secondary fluid inclusions within olivine crystals. e Optical photomicrographs in transmitted plane-polarized light of olivine-hosted multiphasic fluid inclusions.
Diverse organic compounds and nanodiamonds in microcavities
Punctual Raman analyses (see Methods) were made on 36 closed FIs in olivine grains forming three troctolites cored at IODP Hole 1309D (intervals 228R2, 235R1, and 247R3). The samples were very fresh and only displayed a localized alteration along thin serpentinized veinlets, underlined by magnetite grains (Fig. 1b, c). All FIs contained H2(g) and/or CH4(g) as well as secondary minerals serpentine (lizardite ± antigorite), brucite, magnetite, or carbonates (calcite or magnesite), as previously observed in similar or ancient rocks18,28,29,30,31,32 (Fig. 2). In addition, we documented for the first time in present-day oceanic lithosphere the presence of N2(g), methanethiol (CH3SH(g)), and variably disordered polyaromatic carbonaceous materials (PACMs) closely associated with secondary minerals in FIs (Fig. 2a, c).
Fig. 2: Representative punctual Raman analyses of individual fluid inclusions.
They show a large diversity of gaseous109,110,111 (a, b) and secondary mineral phases112,113 and of polyaromatic carbonaceous material (PACM) (b, c)33,34,35,36. CH4(g) is well identified by its triplets at 2917, 3020, 3070 cm−1 whereas H2(g) raman shifts are found between 4152–4142 and 4155–4160 cm−1. N2(g) displays a thin band at 2330 cm−1 while the thiol group (-SH) of methanethiol CH3SH(g) is observed at 2581 cm−1. The PACMs are characterized in their first order region by two broad bands assigned to the disorder (D) band and the graphite (G) band. Few tens of nm-sized nano-diamonds (nD) are identified by the characteristic downward shift of the D band at ~1325 cm−1 (ranging between 1313-1332 cm−1), its broadening (FWHM-D of 54–70 cm−1) and an associated G band near 1550 cm−141,42,43,44. Interpretation of parameter variability in nD is complex and beyond the scope of the present contribution. Srp serpentine, Cal calcite, Mag magnetite.
To further investigate the nature of the PACMs, high-resolution 3D Raman mapping was carried out on two FIs from one olivine grain of sample 1309D-228R2 (Fig. 1d, e) and were named FI3 and FI5 (Fig. 3a and Supplementary Movie 1). The FI that was richest in PACM (FI5) was then milled and imaged using focused ion beam (FIB)-scanning electron microscopy (SEM) associated with electron dispersive X-ray spectrometry (EDS) (Fig. 3c–f) before being extracted as an ultrathin section (Supplementary Fig. 1) for high resolution transmission electron microscopy (HR-TEM) and X-ray photoelectron spectroscopy (XPS). See Methods for details.
Fig. 3: Diversity of gaseous and condensed abiotic organic compounds associated with secondary minerals in single fluid inclusions trapped in olivine minerals of the deep oceanic lithosphere.
a Three dimensional Raman imaging of fluid inclusion FI3 showing polyaromatic carbonaceous materials (PACMs)33,34,35,36 coexisting with reduced gaseous species identified as H2, N2, CH4, and CH3SH and micrometric serpentinization-derived mineral phases109,111 (i.e., serpentine, brucite, magnetite, and carbonate). See also Supplementary Movie 1. b Raman spectra highlighting the 3 end-member types of PACMs in the two individual fluid inclusions (FI3 and FI5), all characterized by two broad bands assigned to the disorder (D) band and the graphite (G) band but showing variable position, intensity and width. For each end-member, a mean Raman spectrum is presented (bold line) with the standard deviation (colored shadows). c False color scanning electron microscopy (SEM) image of FI5 fluid inclusion freshly opened by focused ion beam milling showing distinct types of PACMs which contrast by their apparent textures: gel-like or mesoporous with nanofilaments are characteristic of PACM1 and PACM2, respectively. d Associated elemental mapping using energy dispersive X-ray spectrometry of the olivine (Ol) hosted fluid inclusion allows the identification of PACMs, fibrous polygonal serpentine (F. Srp), lamellar serpentine (lizardite, Lz), polyhedral serpentine (P. Srp), calcite (Cal), and brucite (Brc), as also supported by Raman microspectroscopy (Fig. 2). e, f Magnified SEM views of c, highlighting PACM1 and PACM2, respectively.
Quantitative parameters were extracted from 3D hyperspectral Raman data collected on FI3 and FI5 (see Methods) and compared to those of graphite, terrestrial biologically-derived kerogens, as well as carbonaceous matter in meteorites (Fig. 4a)33,34,35,36. Previous investigations established that the trend followed by terrestrial kerogens and meteoritic carbonaceous matter in such diagrams reflects an increase in thermal maturation during prograde metamorphism. Thermal maturation globally involves organic matter carbonization characterized by a decrease of the full-width at half maximum of the disorder (D) band (FWHM (D))33,36. It is materialized by a decrease in heteroatom-bearing groups (e.g., O, N, or S) and aliphatic units, and an increase in the degree of aromaticity. It can be followed by graphitization during which the structural order of the graphitic material increases. This corresponds to a decrease of defects in aromatic planes, characterized by the decrease of the relative intensities R1 of the D and graphite (G) bands (R1 = ID/IG). While such a metamorphic history does not apply in the present context of cooling and exhumation of deep-seated rocks at the Atlantis Massif, this trend is used here to chemically and structurally describe the observed material based on its comparable spectroscopic characteristics. The PACMs contained in our two 3D-imaged FIs cover an unusually large range depicted in Fig. 4a, attesting to an unexpected diversity in chemistry, aromatization degree and structural order at the micrometric scale. PACMs in FI5 alone displays a trend similar to those described in all meteorites, i.e., forming a continuum between 2 end-members, referred to here as PACM1 and PACM2 (see also Fig. 3), reflecting various degrees of aromatization. PACMs of FI3 overlap the FI5 trend but show a complementary trend toward a more structured state defined as PACM3 (see also Fig. 3) with increased crystallinity.
Fig. 4: Diversity of the polyaromatic carbonaceous material that displays strong structural and chemical heterogeneities while coexisting at micrometric scale in the two individual fluid inclusions (FI3 and FI5).
a PACMs heterogeneity as shown by fitting parameters derived from 3D hyperspectral Raman mapping of FI3 and FI5 (e.g. Fig. 3a), namely full width at half maximum (FWHM) of the D (i.e., disorder) band and the relative intensities R1 of the D and G (i.e., graphite) bands (=ID/IG). The colored data correspond to the data points used to calculate mean Raman signals shown in Fig. 3b. Also reported are the values obtained for kerogens, carbonaceous material in meteorites, and graphite compiled from the literature33,34,35. b, c High resolution TEM imaging of the PACMs with the qualitative chemical composition of PACM1 and PACM2 measured with energy dispersive X-ray spectrometry. The amorphous, most disordered material (PACM1) plots at the top of the data points in a and contains the highest amount of heteroatoms, notably O. The most aromatic material (PACM2) plots at the lower-right end of the graph and is richer in C, tending toward amorphous carbon. The nano-crystalline phase (~5 nm-sized) embedded in PACM1 b, and possibly in PACM2 (dotted texture in c), has been identified as nano-diamond (nD) both by high-resolution TEM (Fast Fourier Transform of the TEM image in insert) and with Raman (PACM3; Figs. 2a and 3b). PACM3 plots toward the lower-left end of the diagram in graph a, where well organized aromatic C skeleton is expected, but graphite is metastably replaced by nD here. PACM polyaromatic carbonaceous material, Ol olivine, d interfoliar distance of the (111) planes in cubic nD.
PACM1 displays the most complex structure. In addition to the disorder (D) and graphite (G) bands, two additional contributions are detectable (Fig. 3b). The band at ~1735 cm−1 is characteristic of the stretching mode of the carbonyl functional group (C = O), and the shoulder around 1100 cm−1 fits well with stretching vibrations of C–O/C–O–C in ether or carboxylic ester functional groups37. In PACM1, a shoulder is also visible near 1200 cm−1. A similar component has been described in natural and synthetic functionalized carbon systems, while lacking in more carbonized or graphitic materials38. Its origin is not well understood, but it was previously attributed to vibrations of C–H/C–Calkyl in aromatic rings39. 3D data indicate that PACM1 is spatially well distributed in the FI and is primarily associated with phyllosilicates, corresponding to the gel-like phase wetting serpentine and brucite fibers in FI5 (Fig. 3c–e). HR-TEM examination attests to its amorphous structure and enrichment in O (C/O~1) and in other heteroatoms including metals and S, as shown by associated EDS analysis (Fig. 4b). This agrees with the high level of structural disorder and functionalization deduced from Raman spectra (Figs. 3b and 4a).
Raman and SEM imaging shows that PACM2, observed in both FIs, is localized on olivine walls where it forms a mesoporous texture made of nanofilaments (Fig. 3c, d and f) of ~20 nm in diameter and up to hundreds of nm long. This spongy texture was more difficult to mill under FIB resulting in thicker foils which limited the study of its structure using HR-TEM (Fig. 4c). Associated qualitative EDS analysis shows that PACM2 is made of more than 80% carbon (C/O~9) with traces of the same other elements as PACM1, and confirms that PACM2 is more aromatized than PACM1.
Well-structured nanometric phases, ~5 nm in diameter, are locally observed within amorphous PACM1 (Fig. 4b). These nanoparticles display a lattice parameter d~0.20 nm that corresponds to the d111 of cubic nano-diamond (nD). Raman signals of nD strongly depends on their structure, purity, crystal size and surface chemistry40,41, but the smallest ones (<few tens of nm) commonly display a downshift and broadening of the D band due to phonon confinement effects42,43,44 and an additional G band attesting to residual defects and graphitic domains within a surrounding carbon shell41,43. PACM3, which plots in the lower-left end of Fig. 4a where most crystalline materials are expected, displays a Raman pattern (Figs. 2c and 3b) similar to nD with a characteristic D band shifted at ~1325 cm−1, a FWHM-D of 54–70 cm−1 and a G band near 1550 cm−1. 3D Raman data of nD (PACM3) are also co-localized with PACM2 that shows a dense and spotted texture made of particles 5–50 nm in diameter, hence attributed to nD (Fig. 4c).
XPS C 1s core level spectra was acquired on the whole FIB section of FI5 (Supplementary Figs. 1, 2, and Methods) that contains both PACM1 and PACM2 (Figs. 3c and 4a), spatially unresolved with this method. XPS data reveal a dominant contribution to the PACMs' structure of C–C/C = C and C–H bonds (~80%), in addition to C–O/C–O–C (~12%) and C = O/O–C = O (~5%) bonds (Supplementary Table 1, ref. 45). This confirms previous observations of the dominance of a macromolecular structure with H- and O-bearing functional groups. The remaining contributions correspond to carbon in the form of carbonate (CaCO3 here, Fig. 3) and carbide (Supplementary Table 1). The survey spectrum shows the presence of silicon and titanium in small quantities that could form such a carbide (refs. 46,47). The latter was not clearly located but it most probably contributes to the nano-particles observed in the C-rich PACM2 (Fig. 4c), together with nD.
Carbon and hydrogen isotopic composition of the CH4 contained in fluid inclusions of sample 1309D–228R2 was determined by crushing experiments (see Methods). A minimum concentration of 143 µmol of CH4 per kg rock was measured on this sample. \({{{\mathrm{\delta}}}}^{13}{{{\mathrm{C}}}}_{{{{{\mathrm{CH}}}}_{4}}}\) values of −8.9 ± 0.1‰ and \({{{\mathrm{\delta}}}}{{{\mathrm{D}}}}_{{{{{\mathrm{CH}}}}_{4}}}\) of −161.4 ± 1‰ were obtained. They fall within the abiotic range of natural CH43,31 and are close to the compositions of CH4 venting in Lost City hydrothermal chimneys nearby on the same massif19.
The ideal combination for abiotic synthesis of diverse organics
The nature of the original fluid can be inferred from the current phases found in the FI that attest to in situ reactions with olivine walls. The occurrence of hydrated secondary minerals (serpentine, brucite; Figs. 2 and 3) and of C-, N- and S-bearing phases (N2, CH4, CH3SH, PACMs, carbonates, carbide, and nD) requires an aqueous fluid enriched in C, N and S, to be trapped as olivine-hosted inclusions. At MOR, such a fluid can be magmatic or seawater-derived, or a mix of both. The fresh character and the Sr isotopic compositions of deep magmatic rocks from the same hole attest of their very limited interaction with seawater48 that resulted in late serpentine veinlets, formed after the FIs (Fig. 1b–d). If any, seawater would not be a significant source of carbon to such deep fluid inclusions since dissolved inorganic carbon (DIC) is efficiently removed from seawater at shallower levels by carbonate precipitation, and dissolved organic carbon (DOC) should be rapidly captured in shallow rocks or decomposed in high temperature fluids (T > 200 °C)49,50. If few DIC may persist and contribute to the carbon in the fluid inclusion, it is unlikely that any relict DOC, notably the macromolecular component, would remain at the T (600–800 °C) and depth conditions of fluid trapping24,25,26 since seawater-derived fluids would have also undergone boiling and phase separation51. Hence, a dominant magmatic origin, resulting from magma degassing, is favored for the fluid trapped in our inclusions as previously proposed for olivine under similar deep crustal conditions52,53. Such fluids, exsolved from melts, are dominated by CO2-rich vapors that can evolve to more H2O-enriched compositions with progressive fractionation52. Indeed, at MOR, the source of magma is located in the shallow upper mantle where equilibrium thermodynamic speciation for fluids in the C–O–H–N system strongly favors N2 and CO2 relative to NH3 and to other carbon species considered (CO and CH4), respectively54. This supports the abiotic, dominantly mantle-derived, origin of the N2 and the carbon involved in the various organic compounds observed in the FIs. The absence of CO2 and the occurrence of H2 and CH4 suggests a complete reduction of initial CO2 to CH4 during fluid-olivine reactions inside the inclusions, at a temperature corresponding to H2 production by serpentinization (<350–400 °C), rather than a CO2–CH4 equilibration at higher temperature. This is consistent with the clumped isotopologue data on CH4 from seafloor hydrothermal sites, including Lost City, which imply a formation of CH4 at ~250–350 °C55. The complete reduction of CO2 to CH4 also fits the \({{{\mathrm{\delta}}}}^{13}{{{\mathrm{C}}}}_{{{{{\mathrm{CH}}}}_{4}}}\) value thus inherited from the original δ13C of magmatic CO2.
The main S-bearing species should be SO2 with some H2S depending on the degassing temperature and H2 content of the fluid56,57 according to the following equilibrium:
$${{{{{{\rm{SO}}}}}}}_{2}+{{{{{{\rm{3H}}}}}}}_{2}\leftrightarrows {{{{{{\rm{2H}}}}}}}_{2}{{{{{\rm{O}}}}}}+{{{{{{\rm{H}}}}}}}_{2}{{{{{\rm{S}}}}}}$$
Accordingly, we argue that the aqueous fluid initially trapped in the FIs was dominantly composed of N2, CO2, and SO2, with minor amounts of H2, CH4, and H2S. Proportions of those species cannot be quantified but a compilation of volcanic gas analyses indicates that the redox state of similar fluids, as defined by O2 fugacity (fO2,g), is usually between the log fO2,g set by the fayalite-magnetite-quartz (FMQ) mineral buffer FMQ-1 (1 log unit below FMQ) and the nickel-nickel oxide (NiNiO) mineral buffer NiNiO+2 (2 log unit higher than NiNiO), and their pH is acidic with trace amount of HCl (see Methods).
A two-step abiotic process of fluid cooling and subsequent fluid-mineral reactions (serpentinization) is proposed to account for our observations in FI as described below and in Fig. 5.
Fig. 5: Proposed scenario to account for the chemical and structural diversity of the different types of carbonaceous material in microreactor-like fluid inclusions hosted in serpentinizing olivine from the deep oceanic lithosphere.
a During Stage 1, the trapped fluid cools down to 400 °C at 2 kbar and its speciation evolves as depicted by the gray area in diagram b, for a plausible range of initial redox conditions (Supplementary Fig. 3). The path followed by the most reduced fluids (Log fH2 ≥ FMQ) cross-cut the pyrene-CO2 curve between 400 °C and 450 °C, allowing the early formation of pyrene, analogous to the most aromatic PACM observed on olivine walls (PACM2). In these fluids, CO2 can also partially convert to CH3SH and CH4, and N2 to NH3, before reaching 400 °C; i.e., before serpentinization initiates. The vertical orange area depicts the main serpentinization field (stage 2). c During stage 2, for T < 400 °C (2 kbar), water becomes liquid and olivine highly reactive with an expected major stage of serpentinization at T between 300 and 400 °C that produces serpentine, brucite, magnetite, and H2. Serpentinization advancement rapidly shifts fH2 and pH of the solution toward the field of organic acids as schematically drawn by the orange arrow in the speciation diagram, d, (350 °C-2kbar, methane and methanol suppressed, See Methods and Supplementary Fig. 4), with concomitant carbonate precipitation (calcite or magnesite). This hydration reaction dries out the system leading to condensation of the fluid and formation of PACM1 that wets product minerals and displays varied functional groups bearing O, H, ± S heteroatoms, in agreement with Raman, TEM, and XPS data. nD (PACM3) can metastably form from the amorphous PACM1 and PACM2 during this serpentinization stage114. Then, the chemical and structural characteristics of PACMs are expected to evolve with time and during cooling, and contribute to the formation of CH4 that was kinetically limited so far. PACM polyaromatic carbonaceous material, aH2O water activity, set to 1 or 0.1.
Stage 1–Trapping and cooling of a magmatic-dominated fluid, T ~ 400–600 °C.
Modeling the evolution of such a fluid during cooling from 600 °C to 400 °C at 2kbar (Fig. 5b, Supplementary Fig. 3a and Methods) shows that the most reduced fluids (Log fH2 ≥ FMQ) favor CH4, CH3SH, and graphite below ~550 °C, and NH3 below ~450 °C (Fig. 5, stage 1) if kinetics are favorable. The same fluids also first crosscut the pyrene-CO2 equilibrium near 450 °C, showing the possibility to form early aromatic materials such as pyrene, used here as a simple analog for PACMs (Fig. 5, stage 1). Deposition of carbonaceous films on freshly-cracked olivine surfaces by condensation of C–O–H fluids during abrupt cooling to 400–800 °C has been described experimentally58, inspired by observations of olivine surfaces in basalts and xenoliths59,60. In these experiments, the carbonaceous films consisted of various proportions of C–C, C–H, C–O bounds and carbide depending on the redox conditions and final temperature. In our FIs, deposition on olivine walls of the most aromatic material (PACM2), possibly associated with carbides, can be initiated by a similar surface-controlled process61 (Fig. 5, stage 1). The initial chemical and structural characteristics of this material are unknown since they probably changed in the FI during the subsequent evolution of physico-chemical conditions (stage 2).
Stage 2 – Serpentinization and formation of various metastable organic compounds. Once T falls below 400 °C, fluid water becomes both liquid and gaseous and olivine is prone to serpentinization therefore leading to the formation of serpentine, brucite, magnetite and H2 (Fig. 5, stage 2)9. Serpentinization also increases the pH of the fluid that first equilibrates with CO2, allowing carbonate precipitation. Previous modeling of these reactions in similar FIs have considered a seawater-derived aqueous fluid variably enriched in CO2(aq)18. Since PACM, CH3SH or N2 were not reported, these species were not included in the previous models, but increasing levels of H2 were predicted between 400° and 300 °C, shifting the system by more than 2 logfH2,g units to highly reducing conditions, allowing CH4 formation via reaction (2). Reaction (2) is thermodynamically favored with decreasing T and water activity (aH2O) but its slow kinetic at T < 400 °C16 needs to be overcome by long residence times (thousand years) of the fluids in FIs53.
$${{{{{{\rm{CO}}}}}}}_{2}+{{{{{{\rm{4H}}}}}}}_{2}\leftrightarrows {{{{{{\rm{CH}}}}}}}_{4}+{{{{{{\rm{2H}}}}}}}_{2}{{{{{\rm{O}}}}}}$$
However, olivine serpentinization is very fast at optimum conditions near 300 °C and can be completed in few weeks to months62. At this short time scale, the kinetic inhibition of methane formation prevails and metastable organic compounds are predicted to form, including aliphatic and polyaromatic hydrocarbons (PAHs), organic and amino acids or condensed carbon61,63,64. Suppressing CH4, we have modeled the redox evolution of the fluids in our FIs during serpentinization (Methods and Supplementary Fig. 3). fH2 also increases of ~2 log units between 400 °C and 300 °C, buffered here by the precipitation of PACM (analog to pyrene). The increase of fH2,g and pH due to serpentinization can progressively shift the carbon speciation in solution toward the fields of organic acids (e.g., formic acid, reaction (3) and orange arrow on Fig. 5d, or acetic acid, Supplementary Fig. 4) that are common species in serpentinizing systems16,23. These fields widen with decreasing water activity (aH2O) and T (Fig. 5b, Supplementary Fig. 4).
$${{{{{{\rm{CO}}}}}}}_{2}+{{{{{{\rm{H}}}}}}}_{2}\leftrightarrows {{{{{{\rm{HCOO}}}}}}}^{-}+{{{{{{\rm{H}}}}}}}^{+}$$
Carbon speciation of the fluid is probably even more complex, notably with contributions of others O-bearing reduced species (e.g., CO, aldehydes or alcohols)1,65. Reduction of N2 to ammonia is also favored with increasing fH2,g (e.g. Fig. 5, stage 1), making possible the formation of CN-containing organic species. The abiotic formation of CH3SH may be initiated earlier from the fluid initially trapped (Fig. 5, stage 1) but can continue during serpentinization via reaction (4)66; organic acids being potential intermediate products1.
$${{{{{{\rm{CO}}}}}}}_{2}+{{{{{{\rm{H}}}}}}}_{2}{{{{{\rm{S}}}}}}+{{{{{{\rm{3H}}}}}}}_{2}\leftrightarrows {{{{{{\rm{CH}}}}}}}_{3}{{{{{\rm{SH}}}}}}+{{{{{{\rm{2H}}}}}}}_{2}{{{{{\rm{O}}}}}}$$
Occurrence of thioester functions is also possible through condensation of available thiols and carboxylic acids according to reaction (5):
$${{{{{\rm{RSH}}}}}}+{{{{{\rm{R}}}}}}{\prime} {{{{{{\rm{CO}}}}}}}_{2}{{{{{\rm{H}}}}}}\to {{{{{\rm{RSC}}}}}}({{{{{\rm{O}}}}}}){{{{{\rm{R}}}}}}{\prime}+{{{{{{\rm{H}}}}}}}_{2}{{{{{\rm{O}}}}}}$$
More generally, hydrothermal conditions favor dehydration reactions of organic compounds such as amide or ester formation from carboxylic acids67, in addition to organic functional group transformation reactions68, which both considerably enlarge the range of organic compounds that can be formed. The absence of liquid water in the FI today attests to the full consumption of water during serpentinization of the olivine walls that should have progressively enhanced reactions (1) to (4) and condensation reactions (e.g., reaction (5)). Based on the structural and chemical characteristics of PACM1 (Figs. 3 and 4), and its "wetting" texture on hydrous minerals (Fig. 3c, e), we propose that this complex gel-like material was formed by condensation of the fluid enriched in organics during this serpentinization-driven drying stage.
Metastable phases such as PACM1 and PAMC2 are prone to evolve after their formation. Here, they seem to serve as organic precursors for nD nucleation under the low P–T conditions of modern oceanic setting, similarly to higher P–T processes in subduction zones (>3 GPa)69. Occurrence of nDs within the stability field of graphite have been previously described in ophiolites under similar conditions70 and at higher T (~500–600 °C)71, as well as experimentally72. It has also been predicted by thermodynamic models73. Our results first suggest that nDs formation in such low P–T environments (≤2 kbars–400 °C) possibly occurs via an intermediate, amorphous, organic material. CH4 and possibly other hydrocarbons28 can also form later in FIs from reaction (2) or from further dehydration74 and hydrogenation75 reactions of PACMs simplified by pyrene (C16H10) in the following reaction.
$${{{{{{\rm{C}}}}}}}_{16}{{{{{{\rm{H}}}}}}}_{10}+27{{{{{{\rm{H}}}}}}}_{2}\leftrightarrows 16{{{{{{\rm{CH}}}}}}}_{4}$$
New routes for abiotic organic synthesis in the Earth primitive crust and beyond
Considering the geological context of such systems, our observations indicate that the timely interplay between magmatic degassing and progressive serpentinization is an ideal combination for the abiotic synthesis of varied gaseous and condensed organic compounds. Fluid inclusions have long been recognized as a major source of H2 and CH4 in fossil and active oceanic lithosphere18,30,31,53, but the discovery of the new compounds has further implications. The likelihood that fluid inclusions can be opened during lower-T alteration processes at shallower levels in the oceanic crust, render the components trapped in the inclusions available for further diversification and complexification that can benefit prebiotic reactions. Ingredients, gathered and preserved in olivine pods, can suddenly be released in an environment that is far from the original equilibrium conditions. Such a high degree of disequilibria favors the production of many additional organic compounds and promotes the development of chemotrophic microbial metabolisms76,77. As an example, similar FIs could have provided nitrogen and aromatic precursors required for the local synthesis of abiotic amino acids that are described in the shallower part of the same drill hole13. PACMs may also be the locus of further precipitation of carbonaceous material assisted by mineral reactions at low-T14. The studied FIs also provide the first evidence for an abiotic source of CH3SH in present-day oceanic rocks where a thermogenic origin was favored up to now66. Availability of CH3SH and organosulfur compounds such as thioesters may be crucial to initiate proto-metabolisms in primitive hydrothermal systems66. In modern systems, such FIs may also provide nutrients for hydrocarbon degrading micro-organisms that have been revealed by genomic studies in magmatic rocks at various depths in IODP Hole 1309D78.
H2 and CH4 enriched alkaline environments created by low T serpentinization have been recognized as providing some of the most propitious conditions for the emergence of life79,80,81. Our results strengthen this hypothesis by highlighting new reaction routes that encompass the progressive time-line of geologic events in such rock systems. Unexplored prebiotic reaction pathways based on similar processes may have occurred in the primitive Earth and on Mars where hydrothermal environments rooted on olivine-rich magmatic rocks (e.g., komatiites on Earth) are thought to be widespread82,83,84,85. The more reduced state of the mantle on early planets should have favored reduced species12,86,87 in the percolating magmatic fluids. Some studies of Martian meteorites have already suggested synthesis of PACM on Mars in relation to combined magmatic and hydrothermal processes12. This may be extrapolated to other planetary bodies such as icy moons where serpentinization has become a focus of attention88,89.
Transmitted light microscopy
Optical imaging of rock thin sections (30 µm thick) have been done under plane-polarized and cross-polarized light using a Leica transmitted light microscosope.
Micro-Raman spectroscopy
We acquired all the individual Raman spectra and 3D hyper-spectral Raman images (3D HSR images) with a LabRam HR evolution from Horiba™ manufacturer and a 532 nm DPSS laser. The laser beam was focused onto the sample with an Olympus ×100 objective. The probe spot has a diameter of around 0.9 μm. We used 600 grooves/mm grating to collect Raman spectra in two wavelength ranges, from 120 to 1800 cm−1 and from 2500 to 3800 cm−1. The first one is associated to the Raman fingerprint of minerals and covers the first order region of PACM with D and G bands. The second one presents hydroxyl' stretching bands of phyllosilicates, hydrated oxides, CH4 stretching modes and the second order of PACM.
With four acquisitions of 500 ms per spectrum, the recording of a 3D HSR image is up to 39 h per spectral range. That means, twice for a full 3D acquisition of IF3 and almost 32 h for IF5. We minimized this time by scanning laser beam instead of shifting the position of the sample with the holding stage. We retain the true confocal performance of the microscope by using the DuoScan® hardware module in stepping mode. The laser was stepped across the sample in X and Y direction by two piezoelectric mirrors. The surface map has a small and high accuracy step, down to 250 nm in our case. Then stacking 2D HSR images from the surface and down in the hosting olivine with Z steps of 250 nm, we composed a 3D image of the fluid inclusion. After data preprocessing of the Raman spectra (see below explanations), we powered these images and 3D animations with the 3D Surface and Volume Rendering (3D SVR) application for LabSpec6®. Assuming minerals and gases are here transparent and our microscope is confocal, 3D shapes can be rendered by association of a color channel to a Raman signature. We used filters to remove voxels which have low color intensity and thresholds to control the transparency.
The preprocessing data treatment differs between the 3D HSR images and the Raman signature of the PACM. In regards to the images, we operated the following sequence of preprocessing: (1) extraction of the relevant wavenumber range, (2) removal of extremely low and high signal corresponding either to low Raman diffusion or to high luminescence, (3) correction of the background with a polynomial base line.
The first order Raman signature of the PACM was extracted from the whole spectra (more than 50,000 per image) acquired during 3D HSR images recording. We used a homemade algorithm with the Matlab® software to extract this Raman signal from each spectra and perform an iterative data fitting with the PeakFit Matlab® application tool peakfit.m90. We applied the same procedure as in Quirico et al. (2014)91. The two Raman bands D and G were fitted with the so-called Lorentzian-Breit-Wigner-Fano (LBWF) spectral model36. Raman spectral parameters characterizing the PACM were extracted: width at half maximum (FWHM-G, FWHM-D), peak position (wG, wD) and ratio of peak intensity R1 (ID/IG) with a GOF (Goodness of fit). This parameter was used to remove fits with low RMS fitting error and R-squared. Eventually, we kept on working with typical batches of 600 up to 10,000 spectra with spectral parameters associated. The following table provides characteristic parameters of the averaged PACM end-members Table 1.
Table 1 Raman characteristic parameters of the averaged PACM end-members
Data mining and visualization of spectral parameters in a workflow were powered with the software Orange92. We concatenated the Raman spectral parameters of the two IF to plot the diagram representing FWHM-D as a function of R1. We selected 3 groups of data as endmembers in this diagram to discuss spectral properties of each one and localization between each other in the inclusion.
Focused Ion Beam milling (FIB) and Scanning Electron Microscopy (SEM)
After being located by transmitted light microscopy and analyzed by Raman spectroscopy, fluid inclusions within the thin section sample were opened by using a FIB - SEM workstation (NVision 40; Carl Zeiss Microscopy) coupling a SIINT zeta ionic column (Seiko Instruments Inc. NanoTechnology, Japan) with a Zeiss Gemini I electronic column. For FIB operation, the thin section was coated with a carbon layer of about 20 nm by a carbon coater (Leica EM ACE600) to prevent electrostatic charging.
First, a platinium coating was deposited with the in-situ gas injection system to define the interest region and to protect the surface from ion beam damage. Prior to milling and imaging, a coarse trench was milled around the region of interest to a depth of 30 µm. The inclusions were closed and not visible on the surface of the sample. The abrasion was therefore done progressively with FIB parameters adjusted to 30 kV and 10 nA until breaking through and obtaining a cross-section of the inclusion.
Subsequently, the observations were performed using backscattered electrons with the so-called Energy and angle selective BSE detector (EsB) and secondary electrons with the Secondary Electrons Secondary Ions detector (SESI). These experiments were operated at 15 kV and in high vacuum. Chemical composition of solids in fluid inclusions was obtained simultaneously by EDX analyses using an Aztec Oxford system (EDS Oxford Instruments Aztec-DDI detector X MAXN 50).
The studied cross sections were then extracted and thinned to a thickness of 100 nm by the ion beam following the lift-out method.
Transmission electron microscopy (TEM)
The structural organization of these thin foils was investigated by TEM. A 2100 JEOL operating at 200 kV was used to study precisely the carbon-rich regions within the fluid inclusions. A STEM (scanning transmission electron microscopy) module coupled with a EDX XMAX 80 mm2 system (Oxford Instruments) allowed the acquisition of images in annular bright field and a precise chemical analysis of the solid and condensed organic phases in the fluid inclusions.
Fast Fourier Transform analysis of high-resolution images of nano-diamond-rich areas were used to determine cell parameter of the ~5 nm-size particles using Digital Micrograph software©.
XPS analyses were carried out ate the Ecole Centrale de Lyon (France) on a PHI 5000 Versaprobe II apparatus from ULVAC-PHI Inc. A monochromatized AlKα source (1486.6 eV) was used with a spot size of 10 µm. A charge neutralization system was used to limit charge effect. The remaining charge effect was corrected fixing the C–C bond contribution of C1s peak at 284.8 eV. Before acquisition of the spectra, a short Ar ion etching was performed (250 V, 1 min) to limit the presence of adventitious carbon on the surface. C1s spectra were obtained using a pass energy of 23.5 eV. All the peaks were fitted with Multipak software using a Shirley background. Quantification was carried out using the transmission function of the apparatus and angular distribution correction for a 45° angle. Sensitivity factors were extracted from Wagner et al., (1981)93 in which they integrate cross section and escape depth correction.
Extraction and isotopic analyses of CH4(g)
A portion of the studied rock sample was initially crushed with a stainless steel mortar and pestle and sieved to collect 1–2 mm chips. These chips were then heated at 60 °C under vacuum to remove surficial water. Approximately 0.23 g of these chips were placed into a hydraulic rock crusher with a continuous He stream similar to that of Potter and Longstaffe (2007)94 and the crusher activated several times until the CH4 signal approached that of the blank. The gas released by crushing was focused on a Porapak Q filled quartz capillary trap held at liquid nitrogen temperature. Gases were released from the trap by moving it out of the liquid nitrogen and into a 150 °C heating block.
The released gases were separated on a HP 6890 gas chromatograph fitted with an Agilent Poraplot Q column (50 m, 0.32 mm wide bore, 10 μm film) temperature programmed from −30 to 80 °C. The column effluent was fed into an oxidation oven containing NiO, CuO and Pt catalysts where the reduced gases were converted to CO2. Following the oxidation oven, the gases entered a Thermo Fischer Delta V isotope ratio mass spectrometer (IRMS). Data reduction was performed by comparing an in house CH4 isotope standard to Indiana University Biogeochemical Laboratory CH4 standards #1, #2, #5, and #7.
Thermodynamic modeling
Equilibrium reaction constants at elevated temperatures and pressures are used to construct the equilibrium speciation diagram (Fig. 5 and Supplementary Fig. 3 and 4). For the aqueous species, we used the Helgeson–Kirkham–Flowers equations and predictive correlations to calculate the Gibbs free energies of formation at high temperatures and pressures95,96,97. The calculations were conducted with the Deep Earth Water (DEW) Model98. The Gibbs free energies of formation of minerals and solid condensed carbons at high temperatures and pressures were calculated using the SUPCRT92b code, an adaption of SUPCRT9299.
Thermodynamic data files used in the calculations were built using data for aqueous species from Shock et al. (1997)96, and minerals from Berman (1988)100, Berman and Aranovich (1996)101, and Sverjensky et al. (1991)102. We adopted the thermodynamic properties of CH3SH,aq from Schutle and Rogers (2004)103, which are consistent with Shock et al. (1997)97. We also included the thermodynamic data of condensed aromatic organic carbons of Richard and Helgeson (1998)104, which are consistent with Berman (1988)100.
To simulate fluid-rock reactions, we applied purely chemical irreversible mass transfer models105 to simulate reactions between a cooling magmatic-dominated fluid and olivine. We consider the system as progressive alteration of olivine in a closed system in which there was always the reaction affinity for the alteration of olivine by water. We set 30 moles of olivine (Fa15Fo85) reacting with 1 kg water, so the approximate water:rock = 1:4.5. It represents a low W/R ratio relevant with the geological settings observed here (very limited fluid captured as olivine inclusions). The dissolved salts of Ca, Mg, Fe, C, Si, N, S were considered in the calculations as well as all available minerals. All the calculations were carried out with the aqueous speciation, solubility, and chemical mass transfer codes EQ3 & EQ6 which have been recompiled from a traditional version106 for the purpose of simulating temperature and pressures higher than water saturation conditions, using thermodynamic data files prepared as described above. The codes are accessible freely to the public through the Deep Earth Water community (http://www.dewcommunity.org/). We first simulated the volcanic gas and starting fluid using EQ3 code. We then let the gas cool down to 400 °C before reacting with olivine in a continuous cooling (<400 °C) and enclosed system (2000 bars), mimicking the high-temperature and low-pressure environment where the fluid inclusions formed. It is within the T range of fluids when they are trapped in the inclusions.
The cooling rate is set by the following equation in the model input:
$${{{{{\rm{temp}}}}}}\,{{{{{\rm{C}}}}}}={{{{{\rm{temp}}}}}}\,{{{{{{\rm{C}}}}}}}_{0}+{{{{{\rm{tk}}}}}}1\ast \xi+{{{{{\rm{tk}}}}}}2\ast {\xi }^{2}+{{{{{\rm{tk}}}}}}3\ast {\xi }^{3}(0\le \xi \le 1)$$
where temp C0 represents the initial temperature in °C; ξ represents the reaction extent; tk1, tk2, and tk3 are three parameters. Here, we set tk1 = −200 for the two cooling calculations: 600–400 °C (without olivine) and <400 °C (with olivine); we used the first cooled fluid (at 400 °C) as the starting fluid to react with olivine for the second stage cooling calculation.
Volcanic gas is mainly composed of steam H2O, CO2, and H2, with other trace gases107,108. The composition of the volcanic gas varies depending on several geological factors, including the extent of degassing of the magma, redox state, and temperature and cooling history108. Under the circumstances of this study, the simulation used volcanic CO2,g as the only carbon source. Provided the reported CO2/H2O ratio in volcanic gases, we set the starting CO2/H2O ratio as 0.3 in our starting fluid.
Compilation of the volcanic gas indicated that the redox state of volcanic gas is between the log fO2,g values set by fayalite-magnetite-quartz (FMQ) mineral buffer minus one log unit (FMQ-1) and nickel-nickel oxide (Ni/NiO) mineral buffer plus two log unit (Ni/NiO+2) (Symonds et al., 1994). In our simulation, we set log fO2,g of the starting fluid equal to these two values, representing two boundary cases (Supplementary Fig. 3). As the starting fluid would dissolve high pressures of CO2,g and trace amounts of HCl and S gases107,108, the starting pH would be acidic. The neutral pH at 600 °C and 2 kbars is 5.3. Therefore, in our simulation, we set the initial pH as 4 to represent an acidic condition.
The data supporting the findings of this study are available within the paper and its Supplementary Information. Any additional information is available from the corresponding author upon request.
Reeves, E. P. & Fiebig, J. Abiotic synthesis of methane and organic compounds in Earth's lithosphere. Elements 16, 25–31 (2020).
Sephton, M. A. & Hazen, R. M. On the origins of deep hydrocarbons. Rev. Mineral. Geochem. 75, 449–465 (2013).
Etiope, G. Abiotic methane on Earth. Rev. Geophys. 51, (2013).
Konn, C., Charlou, J. L., Holm, N. G. & Mousis, O. The production of methane, hydrogen, and organic compounds in ultramafic-hosted hydrothermal vents of the mid-atlantic ridge. Astrobiology 15, 381–399 (2015).
Lang, S. Q., Butterfield, D. A., Lilley, M. D., Paul Johnson, H. & Hedges, J. I. Dissolved organic carbon in ridge-axis and ridge-flank hydrothermal systems. Geochim. Cosmochim. Acta 70, 3830–3842 (2006).
Sherwood Lollar, B. et al. A window into the abiotic carbon cycle – Acetate and formate in fracture waters in 2.7 billion year-old host rocks of the Canadian Shield. Geochim. Cosmochim. Acta 294, 295–314 (2021).
Vitale Brovarone, A. et al. Massive production of abiotic methane during subduction evidenced in metamorphosed ophicarbonates from the Italian Alps. Nat. Commun. 8, 1–13 (2017).
Eickenbusch, P. et al. Origin of short-chain organic acids in serpentinite mud volcanoes of the Mariana convergent margin. Front. Microbiol 10, 1–21 (2019).
McCollom, T. M. & Bach, W. Thermodynamic constraints on hydrogen generation during serpentinization of ultramafic rocks. Geochim. Cosmochim. Acta 73, 856–875 (2009).
Anders, E. Pre-biotic organic matter from comets and asteroids. Nature 342, 255–257 (1989).
Bonal, L., Bourot-Denise, M., Quirico, E., Montagnac, G. & Lewin, E. Organic matter and metamorphic history of CO chondrites. Geochim. Cosmochim. Acta 71, 1605–1623 (2007).
Steele, A., McCubbin, F. M. & Fries, M. D. The provenance, formation, and implications of reduced carbon phases in Martian meteorites. Meteorit. Planet. Sci. 51, 2203–2225 (2016).
Ménez, B. et al. Abiotic synthesis of amino acids in the recesses of the oceanic lithosphere. Nature 564, 59–63 (2018).
Sforna, M. C. et al. Abiotic formation of condensed carbonaceous matter in the hydrating oceanic crust. Nat. Commun. 9, (2018).
Andreani, M. & Ménez, B. New Perspectives on Abiotic Organic Synthesis and Processing during Hydrothermal Alteration of the Oceanic Lithosphere. Deep Carbon: Past to Present (2019). https://doi.org/10.1017/9781108677950.015.
McCollom, T. M. Laboratory simulations of abiotic hydrocarbon formation in Earth's deep subsurface. Rev. Mineral. Geochem. 75, 467–494 (2013).
Horita, J. & Berndt, M. E. Abiogenic Methane formation and isotopic fractionation under hydrothermal conditions. Sci. (80-.) 285, 2–5 (1999).
Klein, F., Grozeva, N. G. & Seewald, J. S. Abiotic methane synthesis and serpentinization in olivine-hosted fluid inclusions. Proc. Natl Acad. Sci. USA 116, 17666–17672 (2019).
Proskurowski, G. et al. Abiogenic hydrocarbon production at lost city hydrothermal field. Science 319, 604–607 (2008).
ten Kate, I. L. Organic molecules on Mars. Sci. (80-.) 360, 1068–1069 (2018).
Glein, C. R., Baross, J. A. & Waite, J. H. The pH of Enceladus' ocean. Geochim. Cosmochim. Acta 162, 202–219 (2015).
Kelley, D. S. et al. An off-axis hydrothermal vent field near the Mid-Atlantic Ridge at 30°N. Nature 412, 8–12 (2001).
Lang, S. Q., Butterfield, D. A., Schulte, M., Kelley, D. S. & Lilley, M. D. Elevated concentrations of formate, acetate and dissolved organic carbon found at the Lost City hydrothermal field. Geochim. Cosmochim. Acta 74, 941–952 (2010).
Demartin, B., Hirth, G. & Evans, B. Experimental Constraints on Thermal Cracking of Peridotite at Oceanic Spreading Centers. in Mid‐Ocean Ridges: Hydrothermal Interactions Between the Lithosphere and Oceans (eds. German, C. R., Lin, J. & Parson, L. M.) (AGU, 2004). https://doi.org/10.1029/148GM07.
Harper, G. D. Tectonics of slow spreading mid‐ocean ridges and consequences of a variable depth to the brittle/ductile transition. Tectonics 4, 395–409 (1985).
Castelain, T., McCaig, A. M. & Cliff, R. A. Fluid evolution in an Oceanic Core Complex: A fluid inclusion study from IODP hole U1309 D—Atlantis Massif, 30?N, Mid-Atlantic Ridge. Geochemistry, Geophys. Geosystems 1193–1214 https://doi.org/10.1002/2013GC004975.Received. (2014)
Blackman, D.K., et al., and the Expedition 304/305 Scientists. Site U1309. in Proceedings of the IODP, 304/305: College Station TX (Integrated Ocean Drilling Program Management International, Inc.), https://doi.org/10.2204/iodp.proc.304305.103.2006. (2006)
Grozeva, N. G., Klein, F., Seewald, J. S. & Sylva, S. P. Chemical and isotopic analyses of hydrocarbon-bearing fluid inclusions in olivine-rich rocks. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 378, (2020).
Miura, M., Arai, S. & Mizukami, T. Raman spectroscopy of hydrous inclusions in olivine and orthopyroxene in ophiolitic harzburgite: Implications for elementary processes in serpentinization. J. Mineral. Petrol. Sci. 106, 91–96 (2011).
Sachan, H. K., Mukherjee, B. K. & Bodnar, R. J. Preservation of methane generated during serpentinization of upper mantle rocks: evidence from fluid inclusions in the Nidar ophiolite, Indus Suture Zone, Ladakh (India). Earth Planet. Sci. Lett. 257, 47–59 (2007).
Kelley, D. S. & Früh-Green, G. L. Abiogenic methane in deep-seated mid-ocean ridge environments: Insights from stable isotope analyses. J. Geophys. Res. Solid Earth 104, 10439–10460 (1999).
Zhang, L., Wang, Q., Ding, X. & Li, W. C. Diverse serpentinization and associated abiotic methanogenesis within multiple types of olivine-hosted fluid inclusions in orogenic peridotite from northern Tibet. Geochim. Cosmochim. Acta 296, 1–17 (2021).
Delarue, F. et al. The Raman-derived carbonization continuum: a tool to select the best preserved molecular structures in Archean Kerogens. Astrobiology 16, 407–417 (2016).
Bonal, L., Quirico, E., Flandinet, L. & Montagnac, G. Thermal history of type 3 chondrites from the Antarctic meteorite collection determined by Raman spectroscopy of their polyaromatic carbonaceous matter. Geochim. Cosmochim. Acta 189, 312–337 (2016).
Quirico, E. et al. Prevalence and nature of heating processes in CM and C2-ungrouped chondrites as revealed by insoluble organic matter. Geochim. Cosmochim. Acta 241, 17–37 (2018).
Ferrari, A. C. & Robertson, J. Interpretation of Raman spectra of disordered and amorphous carbon. Phys. Rev. B 61, 14095–14107 (2000).
Socrates, G. Infrared and Raman characteristic group frequencies. Tables and charts. (JOHN WILEY & SONS, LTD, 2001).
Ferralis, N., Matys, E. D., Knoll, A. H., Hallmann, C. & Summons, R. E. Rapid, direct and non-destructive assessment of fossil organic matter via microRaman spectroscopy. Carbon N. Y 108, 440–449 (2016).
Li, X., Hayashi, J. & Li, C. Z. FT-Raman spectroscopic study of the evolution of char structure during the pyrolysis of a Victorian brown coal. Fuel 85, 1700–1707 (2006).
Korepanov, V. I. et al. Carbon structure in nanodiamonds elucidated from Raman spectroscopy. Carbon N. Y. 121, 322–329 (2017).
Mochalin, V. N., Shenderova, O., Ho, D. & Gogotsi, Y. The properties and applications of nanodiamonds. Nat. Nanotechnol. 7, 11–23 (2012).
Osswald, S., Mochalin, V. N., Havel, M., Yushin, G. & Gogotsi, Y. Phonon confinement effects in the Raman spectrum of nanodiamond. Phys. Rev. B - Condens. Matter Mater. Phys. 80, (2009).
Mermoux, M., Chang, S., Girard, H. A. & Arnault, J. C. Raman spectroscopy study of detonation nanodiamond. Diam. Relat. Mater. 87, 248–260 (2018).
Yoshikawa, M., Katagiri, G., Ishida, H., Ishitani, A. & Akamatsu, T. Raman spectra of diamondlike amorphous carbon films. Solid State Commun. 66, 1177–1180 (1988).
Moulder, J. F. & Chastain, J. Handbook of X-ray photoelectron spectroscopy: a reference book of standard spectra for identification and interpretation of XPS data. Phys. Electron. Div. Perkin-Elmer Corp. 221–256 (1992).
Wang, Y.-Y., Kusumoto, K. & Li, C.-J. XPS analysis of SiC films prepared by radio frequency plasma sputtering. Phys. Procedia 32, 95–102 (2012).
Krzanowski, J. E. & Leuchtner, R. E. Chemical, mechanical, and tribological properties of pulsed‐laser‐deposited titanium carbide and vanadium carbide. J. Am. Ceram. Soc. 80, 1277–1280 (1997).
Delacour, A., Früh-green, G. L., Frank, M., Gutjahr, M. & Kelley, D. S. Sr- and Nd-isotope geochemistry of the Atlantis Massif (30 °N, MAR): Implications for fluid fluxes and lithospheric heterogeneity. Chem. Geol. 254, 19–35 (2008).
Tertieten, L., Fruh-Green, G. L. & Bernasconi, S. M. Distribution and Sources of Carbon in Serpentinized Mantle Peridotites at the Atlantis Massif (IODP Journal of Geophysical Research: Solid Earth. J. Geophys. Res. Solid Earth 126, (2021).
Hawkes, J. A. et al. Efficient removal of recalcitrant deep-ocean dissolved organic matter during hydrothermal circulation. 8, (2015).
Früh-green, G. L. et al. Diversity of magmatism, hydrothermal processes and microbial interactions at mid-ocean ridges. Nat. Rev. Earth Environ. https://doi.org/10.1038/s43017-022-00364-y. (2022)
Kelley, D. S. Methane-rich fluids in the oceanic crust. J. Geophys. Res. Solid Earth 101, 2943–2962 (1996).
McDermott, J. M., Seewald, J. S., German, C. R. & Sylva, S. P. Pathways for abiotic organic synthesis at submarine hydrothermal fields. Proc. Natl Acad. Sci. USA 112, 7668–7672 (2015).
Frost, D. J. & McCammon, C. A. The Redox State of Earth's Mantle. Annu. Rev. Earth Planet. Sci. 36, 389–420 (2008).
Wang, D. T., Reeves, E. P., Mcdermott, J. M., Seewald, J. S. & Ono, S. Clumped isotopologue constraints on the origin of methane at seafloor hot springs. Geochim. Cosmochim. Acta 223, 141–158 (2018).
Gaillard, F., Scaillet, B., Pichavant, M. & Iacono-Marziano, G. The redox geodynamics linking basalts and their mantle sources through space and time. Chem. Geol. 418, 217–233 (2015).
Hoshyaripour, G., Hort, M. & Langmann, B. How does the hot core of a volcanic plume control the sulfur speciation in volcanic emission? Geochemistry, Geophys. Geosystems 13, (2012).
Tingle, T. N. & Hochella, M. F. Formation of reduced carbonaceous matter in basalts and xenoliths: Reaction of C-O-H gases on olivine crack surfaces. Geochim. Cosmochim. Acta 57, 3245–3249 (1993).
Tingle, T. N., Hochella, M. F., Becker, C. H. & Malhotra, R. Organic compounds on crack surfaces in olivine from San Carlos, Arizona and Hualalai Volcano, Hawaii. Geochim. Cosmochim. Acta 54, 477–485 (1990).
Mathez, E. A. & Delaney, J. R. The nature and distribution of carbon in submarine basalts and peridotite nodules. Earth Planet. Sci. Lett. 56, 217–232 (1981).
Zolotov, M. Y. & Shock, E. L. A thermodynamic assessment of the potential synthesis of condensed hydrocarbons during cooling and dilution of volcanic gases. J. Geophys. Res. Solid Earth 105, 539–559 (2000).
McCollom, T. M. et al. Temperature trends for reaction rates, hydrogen generation, and partitioning of iron during experimental serpentinization of olivine. Geochim. Cosmochim. Acta 181, 175–200 (2016).
Shock, E. L. Geochemical constraints on the origin of organic compounds in hydrothermal systems. Orig. Life Evol. Biosph. 20, 331–367 (1990).
Milesi, V., McCollom, T. M. & Guyot, F. Thermodynamic constraints on the formation of condensed carbon from serpentinization fluids. Geochim. Cosmochim. Acta 189, 391–403 (2016).
Seewald, J. S., Zolotov, M. Y. & McCollom, T. Experimental investigation of single carbon compounds under hydrothermal conditions. Geochim. Cosmochim. Acta 70, 446–460 (2006).
Reeves, E. P., McDermott, J. M. & Seewald, J. S. The origin of methanethiol in midocean ridge hydrothermal fluids. Proc. Natl Acad. Sci. USA 111, 5474–5479 (2014).
Shock, E. L. Hydrothermal dehydration of aqueous organic compounds. Geochim. Cosmochim. Acta 57, 3341–3349 (1993).
Shipp, J. et al. Organic functional group transformations in water at elevated temperature and pressure: Reversibility, reactivity, and mechanisms. Geochim. Cosmochim. Acta 104, 194–209 (2013).
Frezzotti, M. L. Diamond growth from organic compounds in hydrous fluids deep within the Earth. Nat. Commun. 10, (2019).
Pujol-Solà, N. et al. Diamond forms during low pressure serpentinisation of oceanic lithosphere. Geochem. Perspect. Lett. 15, 19–24 (2020).
Farré-de-pablo, J. et al. A shallow origin for diamonds in ophiolitic chromitites. 47, 75–78 (Geology 2018).
Simakov, S. K., Dubinchuk, V. T., Novikov, M. P. & Melnik, N. N. Metastable nanosized diamond formation from fluid phase. SRX Geosci. 2010, 1–5 (2010).
Manuella, F. C. Can nanodiamonds grow in serpentinite-hosted hydrothermal systems? A theoretical modelling study. Mineral. Mag. 77, 3163–3174 (2013).
Seewald, J. S. Aqueous geochemistry of low molecular weight hydrocarbons at elevated temperatures and pressures: Constraints from mineral buffered laboratory experiments. Geochim. Cosmochim. Acta 65, 1641–1664 (2001).
Milesi, V. et al. Formation of CO2, H2 and condensed carbon from siderite dissolution in the 200–300 °C range and at 50 MPa. Geochim. Cosmochim. Acta 154, 201–211 (2015).
Canovas, P. A., Hoehler, T. & Shock, E. L. Geochemical bioenergetics during low-temperature serpentinization: an example from the Samail ophiolite, Sultanate of Oman. J. Geophys. Res. Biogeosci. 122, 1821–1847 (2017).
Shock, E. & Canovas, P. The potential for abiotic organic synthesis and biosynthesis at seafloor hydrothermal systems. Geofluids 10, 161–192 (2010).
Mason, O. U. et al. First investigation of the microbiology of the deepest layer of ocean crust. PLoS ONE 5, (2010).
Martin, W. & Russell, M. J. On the origin of biochemistry at an alkaline hydrothermal vent. Philos. Trans. R. Soc. B Biol. Sci. 362, 1887–1925 (2007).
Sleep, N. H., Bird, D. K. & Pope, E. C. Serpentinite and the dawn of life. Philos. Trans. R. Soc. B 366, 2857–2869 (2011).
Preiner, M. et al. Serpentinization: Connecting geochemistry, ancient metabolism and industrial hydrogenation. Life 8, (2018).
Quesnel, Y. et al. Serpentinization of the martian crust during Noachian. Earth Planet. Sci. Lett. 277, 184–193 (2009).
Bultel, B., Quantin-Nataf, C., Andréani, M., Clénet, H. & Lozac'h, L. Deep alteration between Hellas and Isidis Basins. Icarus 260, 141–160 (2015).
Arndt, N. T. & Nisbet, E. G. Processes on the Young Earth and the Habitats of Early Life. Annu. Rev. Earth Planet. Sci. 40, 521–549 (2012).
Sossi, P. A. et al. Petrogenesis and geochemistry of Archean Komatiites. J. Petrol. 57, 147–184 (2016).
Li, Y. & Keppler, H. Nitrogen speciation in mantle and crustal fluids. Geochim. Cosmochim. Acta 129, 13–32 (2014).
Zolotov, M. & Shock, E. Abiotic synthesis of polycyclic aromatic hydrocarbons on Mars. J. Geophys. Res. 104, (1999).
Russell, M. J. et al. The drive to life on wet and Icy Worlds. Astrobiology 14, 308–343 (2014).
Vance, S. D. & Daswani, M. M. Serpentinite and the search for life beyond Earth. Philos. Trans. R. Soc. A 378, (2020).
O'Haver, T. iPeak (https://www.mathworks.com/matlabcentral/fileexchange/23850-ipeak). MATLAB Cent. File Exch. (2021).
Quirico, E. et al. Origin of insoluble organic matter in type 1 and 2 chondrites: New clues, new questions. Geochim. Cosmochim. Acta 136, 80–99 (2014).
Demsar, J. et al. Orange: data mining toolbox in Python. J. Mach. Learn. Res. 14, 2349–2353 (2013).
MATH Google Scholar
Wagner, C. D., Raymond, R. H. & Gale, L. H. Empirical atomic sensitivity factors for quantitative analysis by electron spectroscopy for chemical analysis. Surf. interface Anal. 3, 211–225 (1981).
Potter, J. & Longstaffe, F. J. A gas-chromatograph, continuous flow-isotope mass-spectrometry method for δ13C and δD measurement of complex fluid inclusion volatiles: examples from the Khibina alkaline igneous complex, northwest Russia and the south Wales coalfields. Chem. Geol. 244, 186–201 (2007).
Helgeson, H. C., Kirkham, D. H. & Flowers, G. C. Theoretical prediction of the thermodynamic behavior of aqueous electrolytes by high pressures and temperatures; IV, calculation of activity coefficients, osmotic coefficients, and apparent molal and standard and relative partial molal properties to 600 d. Am. J. Sci. 281, 1249–1516 (1981).
Shock, E. L., Sassani, D. C., Willis, M. & Sverjensky, D. A. Inorganic species in geologic fluids: correlations among standard molal thermodynamic properties of aqueous ions and hydroxide complexes. Geochim. Cosmochim. Acta 61, 907–950 (1997).
Sverjensky D. A., Shock, E. L., & Helgeson, H. C. Prediction of the thermodynamic properties of aqueous metal complexes to 1000 °C and 5 kb. Geochim. Cosmochim. Acta 1359–1412 (1997).
Sverjensky, D. A., Harrison, B. & Azzolini, D. Water in the deep Earth: the dielectric constant and the solubilities of quartz and corundum to 60 kb and 1200 °C. Geochim. Cosmochim. Acta 129, 125–145 (2014).
Johnson, J. W., Oelkers, E. H. & Helgeson, H. C. SUPCRT92: a software package for calculating the standard molal thermodynamic properties of minerals, gases, aqueous species, and reactions from 1 to 5000 bars and 0 to 1000 °C. vol. 18 (1992).
Berman, R. G. Internally-consistent thermodynamic data for minerals in the system Na2O-K2O-CaO-MgO-FeO-Fe2O3-Al2O3-SiO2-TiO2-H2O-CO2. J. Petrol. 29, 445–522 (1988).
Berman, R. & Aranovich, L. Optimized standard state and solution properties of minerals. Contrib. Mineral. Petrol. 126, 1–24 (1996).
Sverjensky, D. A., Hemley, J. J. & D'Angelo, W. M. Thermodynamic assessment of hydrothermal alkali feldspar-mica- aluminosilicate equilibria. Geochim. Cosmochim. Acta 55, 989–1004 (1991).
Schulte, D. & Rogers, L. Thiols in hydrothermal solution: standard partial molal properties and their role in the organic geochemistry of hydrothermal environments. Geochim. Cosmochim. Acta 68, (2004).
Richard, L. & Helgeson, H. C. Calculation of the thermodynamic properties at elevated temperatures and pressures of saturated and aromatic high molecular weight solid and liquid hydrocarbons in kerogen, bitumen, petroleum, and other organic matter of biogeochemical interest. Geochim. Cosmochim. Acta 62, 3591–3636 (1998).
Helgeson, H. C. Mass transfer among minerals and hydrothermal solutions. in Geochemistry of hydrothermal ore deposits (ed. Barnes, H. L.) 568–606 (John Wiley & Sons, New York, 1979).
Wolery, T. EQ3/6: A software package for geochemical modeling of aqueous systems: package overview and installation guide (version 7.0). (1992).
Giggenbach, W. F. Chemical Composition of Volcanic Gases. in Monitoring and Mitigation of Volcano Hazards (eds. Scarpa, R. & Tilling, R. I.) 221–256 (Springer, 1996).
Symonds, R. B., Rose, W. I., Bluth, G. J. & Gerlach, T. M. Volcanic-gas studies: methods, results, and applications. in Volatiles in magmas (ed. Carroll, M.R., and Holloway, J. R.) 517 (Mineralogical Society of America, 1994).
May, W. & Pace, E. L. The vibrational spectra of methanethiol. 481, (1987).
Burke, E. A. J. Raman microspectrometry of fluid inclusions. Lithos 55, 139–158 (2001).
Frezzotti, M. L., Tecce, F. & Casagli, A. Raman spectroscopy for fluid inclusion analysis. J. Geochem. Explor. 112, 1–20 (2012).
Petriglieri, J. R. et al. Micro-Raman mapping of the polymorphs of serpentine. J. Raman Spectrosc. 953–958 (2015) https://doi.org/10.1002/jrs.4695.
de Faria, D. L. A., Venaü ncio Silva, S. & de Oliveira, M. T. J. Raman Spectrosc. 28, 873–878 (1997).
We acknowledge the IODP program (https://www.iodp.org/) and the IODP 304–305 party. This research was supported by the Deep Carbon Observatory awarded by Alfred P. Sloan Foundation, the French CNRS (Mission pour l'Interdisciplinarité, Défi Origines 2018) and the Institut Universitaire de France (MA). We are also grateful to the LABEX Lyon Institute of Origins (ANR-10-LABX-0066) of the Université de Lyon for its financial support within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) of the French government operated by the National Research Agency (ANR). J. H. wants to acknowledge the financial support from Chinese Academy of Sciences Pioneer Hundred Talents Program and CIFAR Azrieli Global Scholarship. The authors are gratefull to Lisa Mayhew, Manuel Reinhardt and anomymous reviewers for their constructive comments that considerably improved our manuscript.
Université de Lyon, Univ Lyon 1, CNRS UMR5276, ENS de Lyon, LGL-TPE, Villeurbanne Cedex, France
Muriel Andreani, Gilles Montagnac, Clémentine Fellah, Flore Vandier & Isabelle Daniel
Institut Universitaire de France, Paris, France
Muriel Andreani
Deep Space Exploration Laboratory/CAS Key Laboratory of Crust-Mantle Materials and Environments, University of Science and Technology of China, Hefei, China
Jihua Hao
CAS Center for Excellence in Comparative Planetology, University of Science and Technology of China, Hefei, Anhui, China
Blue Marble Space Institute of Science, Seattle, WA, USA
Université Paris Cité, Institut de physique du globe de Paris, CNRS UMR 7154, Paris, France
Céline Pisapia, Stéphane Borensztajn & Bénédicte Ménez
Université de Lyon, Ecole Centrale de Lyon, LTDS, CNRS UMR 5513, 36, Ecully, France
Jules Galipaud
Université de Lyon INSA-Lyon, MATEIS, CNRS UMR 5510, Villeurbanne, France
School of Oceanography, University of Washington, Seattle, WA, USA
Marvin D. Lilley
Department of Earth Sciences, ETH Zurich, Zurich, Switzerland
Gretchen L. Früh Green
Gilles Montagnac
Clémentine Fellah
Flore Vandier
Isabelle Daniel
Céline Pisapia
Stéphane Borensztajn
Bénédicte Ménez
M.A., G.M., C.F., F.V., C.P., J.G., and S.B., acquired and processed the data. M.A. wrote the paper with contributions from G.M., C.F., J.H., M.D.L., G.L.F.G., I.D., and B.M.
Correspondence to Muriel Andreani.
Nature Communications thanks Lisa Mayhew, Manuel Reinhardt and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Description of Additional Supplementary Files
Supplementary Movie 1
Andreani, M., Montagnac, G., Fellah, C. et al. The rocky road to organics needs drying. Nat Commun 14, 347 (2023). https://doi.org/10.1038/s41467-023-36038-6 | CommonCrawl |
Ellen Gethner
Ellen Gethner is a US mathematician and computer scientist specializing in graph theory who won the Mathematical Association of America's Chauvenet Prize[1] in 2002 with co-authors Stan Wagon and Brian Wick for their paper A stroll through the Gaussian Primes.[2]
Ellen Gethner
Born1960 (age 62–63)
United States
Occupation(s)Mathematician and Computer Scientist
Known forResearch in graph theory, winning the Mathematical Association of America's Chauvenet Prize in 2002
Career
Gethner has two doctorates. She completed her first, a PhD in mathematics from Ohio State University, in 1992; her dissertation, Rational Period Functions For The Modular Group And Related Discrete Groups, was supervised by L. Alayne Parson. She completed a second PhD in computer science from the University of British Columbia in 2002, with a dissertation Computational Aspects of Escher Tilings supervised by Nick Pippenger and David G. Kirkpatrick.[3] Gethner is an associate professor in the Department of Computer Science and Engineering at University of Colorado Denver.[4]
Research
Gethner became interested in connections between geometry and art after a high school lesson using a kaleidoscope to turn a drawing into an Escher-like tessellation of the plane. This later inspired some of her research on wallpaper patterns and on converting music into visual patterns.[5]
References
1. "Chauvenet Prizes | Mathematical Association of America". Mathematical Association of America. Retrieved 2019-04-07.
2. Gethner, Ellen; Wagon, Stan; Wick, Brian (1998). "A Stroll Through the Gaussian Primes". American Mathematical Monthly. 105 (4): 327–337. doi:10.2307/2589708. ISSN 0002-9890. JSTOR 2589708.
3. Ellen Gethner at the Mathematics Genealogy Project
4. "UC Denver faculty and staff directory".
5. "Making art from math". Impact. Vol. 3, no. 1. University of Colorado Denver College of Engineering and Applied Science. 2014. pp. 6–8.
External links
• Ellen Gethner publications indexed by Google Scholar
Authority control: Academics
• DBLP
• Google Scholar
• Mathematics Genealogy Project
| Wikipedia |
\begin{document}
\title{Mat\'{e}rn Class Tensor-Valued Random Fields and Beyond} \author{Nikolai Leonenko\thanks{ Cardiff University, United Kingdom} \and Anatoliy Malyarenko\thanks{ Mälardalen University, Sweden}} \date{\today } \maketitle
\begin{abstract} We construct classes of homogeneous random fields on a three-dimensional Euclidean space that take values in linear spaces of tensors of a fixed rank and are isotropic with respect to a fixed orthogonal representation of the group of $3\times 3$ orthogonal matrices. The constructed classes depend on finitely many isotropic spectral densities. We say that such a field belong to either the Mat\'{e}rn or the dual Mat\'{e}rn class if all of the above densities are Mat\'{e}rn or dual Mat\'{e}rn. Several examples are considered. \end{abstract}
\section{Introduction}
Random functions of more than one variable, or \emph{random fields}, were introduced in the 20th years of the past century as mathematical models of physical phenomena like turbulence, see, e.g., \citet{Friedmann1924}, \citet{Karman1938}, \citet{MR0001702}. To explain how random fields appear in continuum physics, consider the following example.
\begin{example} Let $E=E^3$ be a three-dimensional Euclidean point space, and let $V$ be the translation space of $E$ with an inner product $(\boldsymbol{\cdot}, \boldsymbol{\cdot})$. Following \cite{MR1162744}, the elements $A$ of $E$ are called the \emph{places} in $E$. The symbol $B-A$ is the vector in $V$ that translates $A$ into $B$.
Let $\mathcal{B}\subset E$ be a subset of $E$ occupied by a material, e.g., a turbulent fluid or a deformable body. The temperature is a rank~$0$ tensor-valued function $T\colon\mathcal{B}\to\mathbb{R}^1$. The velocity of a fluid is a rank~$1$ tensor-valued function $\mathbf{v}\colon\mathcal{B}\to V$. The strain tensor is a rank~$2$ tensor-valued function $\varepsilon\colon \mathcal{B}\to\mathsf{S}^2(V)$, where $\mathsf{S}^2(V)$ is the linear space of symmetric rank~$2$ tensors over $V$. The piezoelectricity tensor is a rank~$3$ tensor-valued function $\mathsf{D}\colon\mathcal{B}\to\mathsf{S} ^2(V)\otimes V$. The elastic modulus is a rank~$4$ tensor-valued function $ \mathsf{C}\colon\mathcal{B}\to\mathsf{S}^2(\mathsf{S}^2(V))$. Denote the range of any of the above functions by $\mathsf{V}$. Physicists call $ \mathsf{V}$ the \emph{constitutive tensor space}. It is a subspace of the tensor power $V^{\otimes r}$, where $r$ is a nonnegative integer. The form \begin{equation*} (\mathbf{x}_1\otimes\cdots\otimes\mathbf{x}_r,\mathbf{y}_1\otimes\cdots \otimes\mathbf{y}_r) =(\mathbf{x}_1,\mathbf{y}_1)\cdots(\mathbf{x}_r,\mathbf{ y}_r) \end{equation*} can be extended by linearity to the inner product on $V^{\otimes r}$ and then restricted to $\mathsf{V}$.
At microscopic length scales, \emph{spatial randomness} of the material needs to be taken into account. Mathematically, there is a probability space $(\Omega,\mathfrak{F},\mathsf{P})$ and a function $\mathsf{T}(A,\omega)\colon \mathcal{B}\times\Omega\to\mathsf{V}$ such that for any fixed $A_0\in\mathsf{ V}$ and for any Borel set $B\subseteq\mathsf{V}$ the inverse image $\mathsf{T }^{-1}(A_0,B)$ is an event. The map $\mathsf{T}(\mathbf{x},\omega)$ is a \emph{random field}. \end{example}
Translate the whole body $\mathcal{B}$ by a vector $\mathbf{x}\in V$. The random fields $\mathsf{T}(A+\mathbf{x})$ and $\mathsf{T}(A)$ have the same finite-dimensional distributions. It is therefore convenient to assume that there is a random field defined \emph{on all of} $E$ such that its restriction to $\mathcal{B}$ is equal to $\mathsf{T}(A)$. For brevity, denote the new field by the same symbol $\mathsf{T}(A)$ (but this time $A\in E$). The random field $\mathsf{T}(A)$ is \emph{strictly homogeneous}, that is, the random fields $\mathsf{T}(A+\mathbf{x})$ and $\mathsf{T}(A)$ have the same finite-dimensional distributions. In other words, for each positive integer $n$, for each $\mathbf{x}\in V$, and for all distinct places $A_1$, \dots, $A_n\in E$ the random elements $\mathsf{T}(A_1)\oplus\cdots\oplus \mathsf{T}(A_n)$ and $\mathsf{T}(A_1+\mathbf{x})\oplus\cdots\oplus\mathsf{T} (A_n+\mathbf{x})$ of the direct sum on $n$ copies of the space $\mathsf{V}$ have the same probability distribution.
Let $K$ be the material symmetry group of the material body $\mathcal{B}$ acting in $V$. The group $K$ is a subgroup of the orthogonal group $\mathrm{O }(V)$. For simplicity, we assume that the material is fully symmetric, that is, $K=\mathrm{O}(V)$. Fix a place $O\in\mathcal{B}$ and identify $E$ with $ V $ by the map $f$ that maps $A\in E$ to $A-O\in V$. Then $K$ acts in $E$ and rotates the body $\mathcal{B}$ by \begin{equation*} g\cdot A=f^{-1}gfA,\qquad g\in\mathrm{O}(V),\quad A\in\mathcal{B}. \end{equation*} Let $A_0\in\mathcal{B}$. Under the above action of $K$ the point $A_0$ becomes $g\cdot A_0$. The random tensor $\mathsf{T}(A_0)$ becomes $U(g)) \mathsf{T}(A_0)$, where $U$ is the restriction of the orthogonal representation $g\mapsto g^{\otimes r}$ of the group $\mathrm{O}(V)$ to the subspace $\mathsf{V}$ of the space $V^{\otimes r}$. The random fields $ \mathsf{T}(g\cdot A)$ and $U(g))\mathsf{T}(A)$ must have the same finite-dimensional distributions, because $g\cdot A_0$ is the same material point in a different place. Note that this property does not depend on a particular choice of the place $O$, because the field is strictly homogeneous. We call such a field \emph{strictly isotropic}.
Assume that the random field $\mathsf{T}(A)$ is \emph{second-order}, that is \begin{equation*}
\mathsf{E}[\|\mathsf{T}(A)\|^2]<\infty,\qquad A\in E. \end{equation*} Define the \emph{one-point correlation tensor} of the field $\mathsf{T}(A)$ by \begin{equation*} \langle\mathsf{T}(A)\rangle=\mathsf{E}[\mathsf{T}(A)] \end{equation*} and its \emph{two-point correlation tensor} by \begin{equation*} \langle\mathsf{T}(A),\mathsf{T}(B)\rangle=\mathsf{E}[(\mathsf{T}(A) -\langle \mathsf{T}(A)\rangle)\otimes(\mathsf{T}(B) -\langle\mathsf{T}(B)\rangle)]. \end{equation*} Assume that the field $\mathsf{T}(A)$ is \emph{mean-square continuous}, that is, its two-point correlation tensor $\langle\mathsf{T}(A),\mathsf{T} (B)\rangle\colon E\times E\to\mathsf{V}\otimes\mathsf{V}$ is a continuous function.
Note that \citet{MR3064996} had shown that any finite-variance isotropic random field on a compact group is necessarily mean-square continuous under standard measurability assumptions, and hence its covariance function is continuous. In the related settings, the characterisation of covariance function for a real homogeneous isotropic random field in $d$-dimensional Euclidean space was given in the classical paper by \citet{MR1503439}, where it was conjectured that the only form of discontinuity which could be allowed for such a function would occur at the origin. This conjecture was proved by \citet{MR0083534} for $d\geq 2$. This result was widely used in Geostatistics (see, i.e., \citet{MR1671159}, among the others), who argued that the homogenous and isotropic random field could be expressed as a mean-square continuous component and what they called ``nugget effect'', e.g. a purely discontinuous component. In fact this latter component should be necessarily non-measurable (see, i.e., \citet[Example~1.2.5]{MR583435}. The relation between measurability and mean-square continuity in non-compact situation is still unclear even for scalar random fields. That is why we assume in this paper that our random fields are mean-square continuous, and hence their covariance functions are continuous.
If the field $\mathsf{T}(A)$ is strictly homogeneous, then its one-point correlation tensor is a constant tensor in $\mathsf{V}$, while its two-point correlation tensor is a function of the vector $B-A$, i.e., a function on $V$ . Call such a field \emph{wide-sense homogeneous}.
Similarly, if the field $\mathsf{T}(A)$ is strictly isotropic, then we have \begin{equation} \label{eq:3} \begin{aligned} \langle\mathsf{T}(g\cdot A)\rangle&=U(g)\langle\mathsf{T}(A)\rangle,\\ \langle\mathsf{T}(g\cdot A),\mathsf{T}(g\cdot B)\rangle &=(U\otimes U)(g)\langle\mathsf{T}(A),\mathsf{T}(B)\rangle. \end{aligned} \end{equation}
\begin{definition} \label{def:1} A random field $\mathsf{T}(A)$ is called \emph{wide-sense isotropic} if its one-point and two-point correlation tensors satisfy \eqref{eq:3}. \end{definition}
For simplicity, identify the field $\{\,\mathsf{T}(A)\colon A\in E\,\}$ defined on $E$ with the field $\{\,\mathsf{T}^{\prime}(\mathbf{x})\colon \mathbf{x}\in V\,\}$ defined by $\mathsf{T}^{\prime}(\mathbf{x})=\mathsf{T} (O+\mathbf{x})$. Introduce the Cartesian coordinate system $(x,y,z)$ in $V$. Use the introduced system to identify $V$ with the coordinate space $\mathbb{ R}^3$ and $\mathrm{O}(V)$ with $\mathrm{O}(3)$. Call $\mathbb{R}^3$ the \emph{space domain}. The action of $\mathrm{O}(3)$ on $\mathbb{R}^3$ is the matrix-vector multiplication.
Definition~\ref{def:1} was used by many authors including \citet{MR0094844}, \citet{MR2406668}, \citet{sobczyk2012stochastic}.
There is another definition of isotropy.
\begin{definition}[\citep{MR0094844}] \label{def:2} A random field $\mathsf{T}(A)$ is called a \emph{ multidimensional scalar wide-sense isotropic} if its one-point correlation tensor is a constant, while the two-point correlation tensor $\langle\mathsf{
T}(\mathbf{x},\mathsf{T}(\mathbf{y})\rangle$ depends only on $\|\mathbf{y}-
\mathbf{x}\|$. \end{definition}
It is easy to see that Definition~\ref{def:2} is a particular case of Definition~\ref{def:1} when the representation $U$ is trivial, that is, maps all elements $g\in K$ to the identity operator.
In the case of $r=0$, the complete description of the two-point correlation functions of scalar homogeneous and isotropic random fields is as follows. Recall that a measure $\mu$ defined on the Borel $\sigma$-field of a Hausdorff topological space $X$ is called \emph{Borel measure}.
\begin{theorem} \label{th:1} Formula \begin{equation} \label{eq:1} \langle T(\mathbf{x}),T(\mathbf{y})\rangle=\int^{\infty}_0\frac{
\sin(\lambda\|\mathbf{y}-\mathbf{x}\|)} {\lambda\|\mathbf{y}-\mathbf{x}\|}\, \mathrm{d}\mu(\lambda) \end{equation} establishes a one-to-one correspondence between the set of two-point correlation functions of homogeneous and isotropic random fields $T(\mathbf{x })$ on the space domain $\mathbb{R}^3$ and the set of all finite Borel measures $\mu$ on the interval $[0,\infty)$. \end{theorem}
Theorem~\ref{th:1} is a translation of the result proved by \citet{MR1503439} to the language of random fields. This translation is performed as follows. Assume that $B(\mathbf{x})$ is a two-point correlation function of a homogeneous and isotropic random field $T(\mathbf{x})$. Let $n$ be a positive integer, let $\mathbf{x}_1$, \dots, $\mathbf{x}_n$ be $n$ distinct points in $\mathbb{R}^3$, and let $c_1$, \dots, $c_n$ be $n$ complex numbers. Consider the random variable $X=\sum^n_{j=1}c_j[T(\mathbf{x} _j)-\langle T(\mathbf{x}_j)\rangle]$. Its variance is non-negative: \begin{equation*} \mathsf{E}[X^2]=\sum_{j,k=1}^{n}c_j\overline{c_k}\langle T(\mathbf{x}_j),T( \mathbf{x}_k)\rangle\geq 0. \end{equation*} In other words, the two-point correlation function $\langle T(\mathbf{x}),T(
\mathbf{y})\rangle$ is a non\-ne\-ga\-tive-de\-fi\-nite function. Moreover, it is continuous, because the random field $T(\mathbf{x})$ is mean-square continuous, and depends only on the distance $\|\mathbf{y}-\mathbf{x}\|$ between the points $\mathbf{x}$ and $\mathbf{y}$, because the field is homogeneous and isotropic. \citet{MR1503439} proved that Equation~ \eqref{eq:1} describes all of such functions.
Conversely, assume that the function $\langle T(\mathbf{x}),T(\mathbf{y} )\rangle$ is described by Equation~\eqref{eq:1}. The centred Gaussian random field with the two-point correlation function \eqref{eq:1} is homogeneous and isotropic. In other words, there is a link between the theory of random fields and the theory of positive-definite functions.
In what follows, we consider the fields with absolutely continuous spectrum.
\begin{definition}[\citet{MR1009786}] A homogeneous and isotropic random field $T(\mathbf{x})$ has an \emph{ absolutely continuous spectrum} if the measure $\mu$ is absolutely continuous with respect to the measure $4\pi\lambda^2\,\mathrm{d}\lambda$, i.e., there exist a nonnegative measurable function $f(\lambda)$ such that \begin{equation*} \int^{\infty}_0\lambda^2f(\lambda)\,\mathrm{d}\lambda<\infty \end{equation*} and $d\mu(\lambda)=4\pi\lambda^2f(\lambda)\,\mathrm{d}\lambda$. The function $f(\lambda)$ is called the \emph{isotropic spectral density} of the random field $T(\mathbf{x})$. \end{definition}
\begin{example}[The Mat\'{e}rn two-point correlation function] \label{ex:2} Consider a two-point correlation function of a scalar random field $T(\mathbf{x})$ of the form \begin{equation} \label{eq:2} \left\langle T(\mathbf{x}),T(\mathbf{y})\right\rangle =M_{\nu ,a}\left( \mathbf{x},\mathbf{y}\right) =\frac{2^{1-\nu }\sigma ^{2}}{\Gamma \left( \nu \right) }\left( a\left\Vert \mathbf{x}-\mathbf{y}\right\Vert \right) ^{{}\nu }K_{{}\nu }\left( a\left\Vert \mathbf{x}-\mathbf{y}\right\Vert \right) ,\quad \end{equation} where $\sigma ^{2}>0,a>0,\nu >0$ and $K_{{}\nu }\left( z\right) $ is the Bessel function of the third kind of order $\nu$. Here, the parameter $\nu$ measures the differentiability of the random field; the parameter $\sigma $ is its variance and the parameter $a$ measures how quickly the correlation function of the random field decays with distance. The corresponding isotropic spectral density is \begin{equation*} f\left(\lambda\right) =f_{\nu ,a,\sigma ^{2}}\left(\lambda\right) =\frac{ \sigma ^{2}\Gamma \left( \nu +\frac{3}{2}\right) a^{2\nu }}{2\pi ^{3/2}\left( a^{2}+\lambda^{2}\right) ^{\nu +\frac{3}{2}}},\quad \lambda\geq 0. \end{equation*} \end{example}
Note that Example~\ref{ex:2} demonstrates another link, this time between the theory of random fields and the theory of special functions.
In this paper, we consider the following problem. How to define the Mat\'{e} rn two-point correlation tensor for the case of $r>0$? A particular answer to this question can be formulated as follows.
\begin{example}[Parsimonious Mat\'{e}rn model, \citet{MR2752612}] \label{ex:3} We assume that the vector random field \begin{equation*} T\left( \mathbf{x}\right) =\left( T_{1}\left( \mathbf{x}\right) ,\dots,T_{m}\left(\mathbf{x}\right) \right)^{\top} ,\qquad\mathbf{x}\in \mathbb{R}^{3} \end{equation*} has the two-point correlation tensor $B\left(\mathbf{x},\mathbf{y}\right) =\left( B_{ij}\left( \mathbf{x},\mathbf{y}\right) \right) _{1\leq i,j\leq m}. $ It is not straightforward to specify the cross-covariance functions $ B_{ij}\left( \mathbf{x}\right) ,1\leq i,j\leq m,i\neq j$ as non-trivial, valid parametric models\ because of the requirement of their non-negative definiteness. In the multivariate Mat\'{e}rn model, each marginal covariance function \begin{equation*} B_{ii}\left( \mathbf{x},\mathbf{y}\right) =\sigma _{i}^{2}M_{\nu _{i},a_{i}}\left(\mathbf{x},\mathbf{y}\right) ,i=1,...,m, \end{equation*} is of the type \eqref{eq:2} with the isotropic spectral density $ f_{ii}(\lambda)=f_{\nu _{i},a_{i},\sigma _{i}^{2}}\left( \lambda\right) .$
Each cross-covariance function \begin{equation*} B_{ij}\left(\mathbf{x},\mathbf{y}\right) =B_{ji}\left(\mathbf{x},\mathbf{y} \right) =b_{ij}\sigma _{i}\sigma _{j}M_{\nu _{ij},a_{ij}}\left( \mathbf{x}, \mathbf{y}\right) ,1\leq i,j\leq m,i\neq j \end{equation*} is also a Mat\'{e}rn function with co-location correlation coefficient $ b_{ij},$ smoothness parameter $\nu _{ij}$ and scale parameter $a_{ij}.$The spectral densities are \begin{equation*} f_{ij}\left( \mathbf{x}\right) =f_{\nu _{ij},a_{ij},,b_{ij}\sigma _{i}\sigma _{j}}\left( \mathbf{x}\right) ,1\leq i,j\leq m,i\neq j. \end{equation*}
The question then is to determine the values of $\nu _{ij},a_{ij}$ and $ b_{ij}$ so that the non-negative definiteness condition is satisfied. Let $ m\geq 2$. Suppose that \begin{equation*} \nu _{ij}=\frac{1}{2}\left( \nu _{i}+\nu _{j}\right) ,1\leq i,j\leq m,i\neq j, \end{equation*} and that there is a common scale parameter in the sense that there exists an $a>0$ such that \begin{equation*} a_{i}=...=a_{m}=a,\text{ and }a_{ij}=a\text{ for }1\leq i,j\leq m,i\neq j. \end{equation*} Then the multivariate Mat\'{e}rn model provides a valid second-order structure in $\mathbb{R}^{3}$ if \begin{equation*} b_{ij}=\beta _{ij}\left[ \frac{\Gamma \left( \nu _{i}+\frac{3}{2}\right) }{ \Gamma \left( \nu _{i}\right) }\frac{\Gamma \left( \nu _{j}+\frac{3}{2} \right) }{\Gamma \left( \nu _{j}\right) }\right] ^{1/2}\frac{\Gamma \left( \frac{1}{2}\left( \nu _{i}+\nu _{j}\right) \right) }{\Gamma \left( \frac{1}{2 }\left( \nu _{i}+\nu _{j}\right) +\frac{3}{2}\right) } \end{equation*} for $1\leq i,j\leq m,i\neq j,$ where the matrix $\left( \beta _{ij}\right) _{i,j=1,...,m}$ has diagonal elements $\beta _{ii}=1$ for $i=1,...,m,$ and off-diagonal elements $\beta _{ij},1\leq i,j\leq m,i\neq j$ so that it is symmetric and non-negative definite. \newline \end{example}
\begin{example}[Flexible Mat\'{e}rn model] Consider the vector random field $\mathsf{T}(\mathbf{x})\in \mathbb{R}^{m}, \mathbf{x}\in \mathbb{R}^{3}$ with the two-point covariance tensor \begin{equation*} \left\langle T_{i}(x),T_{j}(\mathbf{y})\right\rangle =B_{ij}(\mathbf{x}, \mathbf{y})=\bar{B}_{ij}(\mathbf{y}-\mathbf{x})=\sigma _{ij}M_{\nu _{ij},a_{ij}}\left( \mathbf{x},\mathbf{y}\right) ,1\leq i,j\leq m, \end{equation*} where again \begin{equation*} M_{\nu ,a}\left( \mathbf{x},\mathbf{y}\right) =\frac{2^{1-\nu }\sigma ^{2}}{ \Gamma \left( \nu \right) }\left( a\left\Vert \mathbf{y}-\mathbf{x} \right\Vert \right) ^{{}\nu }K_{{}\nu }\left( a\left\Vert \mathbf{y}-\mathbf{ x}\right\Vert \right) .\quad \end{equation*} We assume that the matrix $\Sigma =(\sigma _{ij})_{1\leq i,j\leq m}=(\sigma _{ij})>0$ (nonnegative definite), and we denote $\sigma _{i}^{2}=\sigma _{ii} $, $i=1$, \dots , $m$.
Then the spectral density $F=(f_{ij})_{1\leq i,j\leq m}$ has the entries \begin{equation*} \begin{aligned} f_{ij}(\bm{\lambda} )&=\frac{1}{(2\pi )^{3}}\int_{\mathbb{R}^{3}}e^{-\mathrm{i}(\bm{\lambda} ,\mathbf{h})}\bar{B}_{ij}(\mathbf{h})\,\mathrm{d}\mathbf{h}\\ &=\sigma _{ij}a_{ij}^{2\nu _{ij}}\frac{1}{(a_{ij}+\left\Vert \bm{\lambda} \right\Vert ^{2})^{\nu _{ij}+\frac{3}{2}}}\frac{\Gamma (\nu _{ij}+\frac{3}{2})}{\Gamma (\nu _{ij})},1\leq i,j\leq m,\lambda \in \mathbb{R}^{3}. \end{aligned} \end{equation*}
We need to find some conditions on parameters $a_{ij}>0,\nu _{ij}>0,$ under which the matrix $F>0$ (nonnegative definite). The general conditions can be found in \citet{MR2946043} and \citet{MR2949350}.
Recall that a symmetric, real $m\times \ m$ matrix $\Theta =(\theta _{ij})_{1\leq i,j\leq m},$ is said to be conditionally negative definite \citep{MR1449393}, if the inequality \begin{equation*} \sum_{i=1}^{m}\sum_{j=1}^{m}c_{i}c_{j}\theta _{ij}\leq 0 \end{equation*} holds for any real numbers $c_{1},...,$ $c_{m,}$ subject to \begin{equation*} \sum_{i=1}^{m}c_{i}=0. \end{equation*}
In general, a necessary condition for the above inequality is \begin{equation*} \theta_{ii}+\theta_{jj}\leq 2\theta_{ij},\qquad i,j=1,...,m, \end{equation*} which implies that all entries of a conditionally negative definite matrix are nonnegative whenever its diagonal entries are non-negative. If all its diagonal entries vanish, a conditionally negative definite matrix is also named a Euclidean distance matrix. It is known that $\Theta =(\theta _{ij})_{1\leq i,j\leq m}$ is conditionally negative definite if and only if an $m\times \ m$ matrix $S$ with entries $\exp \{-\theta_{ij}u\}$ is positive definite, for every fixed $u\geq 0$ (cf. \citet[Theorem 4.1.3]{MR1449393}), or $S=e^{-u\Theta },$ where $e^{\Lambda }$ is an Hadamar exponential of a matrix $\Lambda .$
Some simple examples of conditionally negative definite matrices are
(i) $\theta_{ij}=\theta_{i}+\theta_{j};$
(ii) $\theta_{ij}=\mathrm{const};$
(iii) $\theta_{ij}=\left\vert\theta_{i}-\theta_{j}\right\vert ;$
(iv) $\theta_{ij}=\left\vert\theta_{i}-\theta_{j}\right\vert ^{2}$
(v) $\theta_{ij}=\max \{\theta_{i},\theta_{j}\};$
(vi) $\theta_{ij}=-\theta_{i}\theta_{j}.$
Recall that the Hadamard product of two matrices $A$ and $B$ is the matrix $ A\circ B=(A_{ij}\cdot B_{ij})_{1\leq i,j\leq m}.$ By Schur theorem if $ A>0,B>0,$ then so is $A\circ B.$
Then \begin{equation*} F=\Sigma \circ A\circ B\circ C, \end{equation*} where one need to find conditions under which \begin{equation*} \begin{aligned} A&=\left( \frac{1}{(1+\left\Vert \bm{\lambda} \right\Vert ^{2}/a_{ij}^{2})^{\nu _{ij}+\frac{3}{2}}}\right) _{1\leq i,j\leq m}\geq 0,\qquad B=\left( \frac{1}{a_{ij}^{3}}\right) _{1\leq i,j\leq m}\geq 0,\\ C&=\left( \frac{\Gamma (\nu _{ij}+\frac{3}{2})}{\Gamma (\nu _{ij})}\right) _{1\leq i,j\leq m}\geq 0. \end{aligned} \end{equation*}
We consider first the case 1, in which we assume that \begin{equation*} a_{i}=...=a_{m}=a,\text{ }1\leq i,j\leq m. \end{equation*} Then \begin{equation*} A=e^{-\frac{3}{2}}\left( \exp \{-\nu _{ij}\log (1+\frac{\left\Vert \bm{\lambda}\right\Vert ^{2}}{a^{2}})\}\right) _{1\leq i,j\leq m}\geq 0, \end{equation*} if and only if \ the matrix \begin{equation*} Y=\left( -\nu _{ij}\right) _{1\leq i,j\leq m} \end{equation*} is conditionally negative definite (see above examples (i)-(vi)), then for such $\left( -\nu _{ij}\right) _{1\leq i,j\leq m},$ we have to check that the matrix $C=(\Gamma (\nu _{ij}+\frac{3}{2})/\Gamma (\nu _{ij})_{1\leq i,j\leq m}\geq 0.$ This class is not empty, since it is included the case of the so-called parsimonious model: $\nu _{ij}=\frac{\nu _{i}+\nu _{j}}{2}$ (see Example~\ref{ex:3}). Thus, for the case 1, the following multivariate Mat\'{e}rn models are valid under the following conditions (see, \citet{MR2946043,MR2949350}):
A1) Assume that
i) $a_{i}=...=a_{m}=a,$ $1\leq i,j\leq m;$
ii) $-\nu _{ij}$ ,$1\leq i,j\leq m;$ form a conditionally non-negative matrices;
iii) $\sigma _{ij}\frac{\Gamma (\nu _{ij}+\frac{3}{2})}{\Gamma (\nu _{ij})} ,1\leq i,j\leq m;$ form a non-negative definite matrices.
Consider the case 2: \begin{equation*} \nu _{ij}=\nu >0,\text{ }1\leq i,j\leq m. \end{equation*} Then the following multivariate Mat\'{e}rn models are valid under the following conditions \citep{MR2949350}:
A2) either
a) $-a_{ij}^{2}$ ,$1\leq i,j\leq m,$ form a conditionally non-negative matrix and $\sigma _{ij}a_{ij}^{2\nu },1\leq i,j\leq m,$ form non-negative definite matrices;
or
b) $-a_{ij}^{-2}$ ,$1\leq i,j\leq m,$ form a conditionally non-negative matrix and $\sigma _{ij}/a_{ij}^{3},1\leq i,j\leq m,$ form non-negative definite matrices.
These classes of Mat\'{e}rn models are not empty since in the case of parsimonious model they are consistent with \citet[Theorem~1]{MR2752612}. For the parsimonious model form this paper $($ $\nu _{ij}=\frac{\nu _{ii}+\nu _{jj}}{2},1\leq i,j\leq m),$ the following multivariate Mat\'{e}rn models are valid under conditions
A3) either
a) $\ \nu _{ij}=\frac{\nu _{ii}+\nu _{jj}}{2},a_{ij}^{2}=\frac{ a_{ii}^{2}+a_{jj}^{2}}{2},1\leq i,j\leq m,$ and $\sigma _{ij}a_{ij}^{2\nu _{ij}}/\Gamma (\nu _{ij}),1\leq i,j\leq m,$form non-negative definite matrices;
or
b) $\nu _{ij}=\frac{\nu _{ii}+\nu _{jj}}{2},a_{ij}^{-2}=\frac{ a_{ii}^{-2}+a_{jj}^{-2}}{2},1\leq i,j\leq m,$and $\sigma _{ij}/a_{ij}^{3}/\Gamma (\nu _{ij}),1\leq i,j\leq m,$ form non-negative definite matrices;
The most general conditions and new examples can be found in \citet{MR2946043} and \citet{MR2949350}. The paper by \citet{MR3353096} reviews the main approaches to building multivariate correlation and covariance structures, including the multivariate Mat\'{e}rn models. \end{example}
\begin{example}[Dual Mat\'{e}rn models] \label{ex:dual} Adapting the so-called duality theorem (see, i.e., \citet{MR2557625}), one can show that under the conditions A1, A2 or A3 \begin{equation*} \frac{1}{(1+\left\Vert \mathbf{h}\right\Vert ^{2})^{\nu _{ij}+\frac{3}{2}}} =\int_{\mathbb{R}^{3}}e^{\mathrm{i}(\bm{\lambda },\mathbf{h})}s_{ij}( \bm{\lambda })d\bm{\lambda },\qquad 1\leq i,j\leq m, \end{equation*} where \begin{equation*} s_{ij}(\bm{\lambda )=}\frac{1}{(2\pi )^{3}2^{\nu _{ij}-1}\Gamma (\nu _{ij}+ \frac{3}{2})}(\left\Vert \bm{\lambda }\right\Vert )^{\nu _{ij}}K_{\nu _{ij}}(\left\Vert \bm{\lambda }\right\Vert ),\qquad\bm{\lambda \in }\mathbb{R }^{3},1\leq i,j\leq m, \end{equation*} is the valid spectral density of the vector random field with correlation structure $((1+\left\Vert \mathbf{h}\right\Vert ^{2})^{-(\nu _{ij}+\frac{3}{2 })})_{1\leq i,j\leq m}=(D_{ij}(\mathbf{h}))_{1\leq i,j\leq m}$. We will call it the \emph{dual Mat\'{e}rn model}.
Note that for the Mat\'{e}rn models \begin{equation*} \int_{\mathbb{R}^{3}}\bar{B}_{ij}(\mathbf{x})d\mathbf{x}<\infty . \end{equation*}
This condition is known as short range dependence, while for the dual Mat\'{e}rn model, the long range dependence is possible:
\begin{equation*} \int_{\mathbb{R}^{3}}D_{ij}(\mathbf{h})d\mathbf{h}=\infty ,\text{ if }0<\nu _{ij}<\frac{3}{2}. \end{equation*} \end{example}
When $m=3$, the random field of Example~\ref{ex:3} is scalar isotropic but not isotropic. How to construct examples of homogeneous and \emph{isotropic} vector and tensor random fields with Mat\'{e}rn two-point correlation tensors?
To solve this problem, we develop a sketch of a general theory of homogeneous and isotropic tensor-valued random fields in Section~\ref {sec:general}. This theory was developed by \citet{MR3336288,MR3493458}. In particular, we explain another two links: one leads from the theory of random fields to classical invariant theory, another one was established recently and leads from the theory of random fields to the theory of convex compacta.
In Section~\ref{sec:examples}, we give examples of Mat\'{e}rn homogeneous and isotropic tensor-valued random fields. Finally, in Appendices we shortly describe the mathematical terminology which is not always familiar to specialists in probability: tensors, group representations, and classical invariant theory. For different aspects of theory of random fields see also \citet{MR1687092} and \citet{MR2870527}.
\section{A sketch of a general theory}
\label{sec:general}
Let $r$ be a nonnegative integer, let $\mathsf{V}$ be an invariant subspace of the representation $g\mapsto g^{\otimes r}$ of the group $\mathrm{O}(3)$, and let $U$ be the restriction of the above representation to $\mathsf{V}$. Consider a homogeneous $\mathsf{V}$-valued random field $\mathsf{T}(\mathbf{x })$, $\mathbf{x}\in\mathbb{R}^3$. Assume it is isotropic, that is, satisfies \eqref{eq:3}. It is very easy to see that its one-point correlation tensor $ \langle\mathsf{T}(\mathbf{x})\rangle$ is an arbitrary element of the isotypic subspace of the space $\mathsf{V}$ that corresponds to the trivial representation. In particular, in the case of $r=0$ the representation $U$ is trivial, and $\langle\mathsf{T}(\mathbf{x})\rangle$ is an arbitrary real number. In the case of $r=1$ we have $U(g)=g$. This representation does not contain a trivial component, therefore $\langle\mathsf{T}(\mathbf{x})\rangle= \mathbf{0}$. In the case of $r=2$ and $U(g)=\mathsf{S}^2(g)$ the isotypic subspace that corresponds to the trivial representation is described in Example~\ref{ex:8}, we have $\langle\mathsf{T}(\mathbf{x})\rangle=CI$, where $C$ is an arbitrary real number, and $I$ is the identity operator in $ \mathbb{R}^3$, and so on.
Can we quickly describe the two-point correlation tensor in the same way? The answer is positive. Indeed, the second equation in \eqref{eq:3} means that $\langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle$ is a measurable covariant of the pair $(g,U)$. The integrity basis for polynomial invariants of the defining representation contains one element $I_1=\|
\mathbf{x}\|^2$. By the Wineman--Pipkin theorem (Appendix~\ref{ap:tensors}, Theorem~\ref{th:Wineman-Pipkin}), we obtain \begin{equation*} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle=\sum_{l=1}^{L}
\varphi_l(\|\mathbf{y}-\mathbf{x}\|^2)\mathsf{T}_l(\mathbf{y}-\mathbf{x}), \end{equation*} where $\mathsf{T}_l(\mathbf{y}-\mathbf{x})$ are the basic covariant tensors of the representation $U$.
For example, when $r=1$, the basis covariant tensors of the defining representations are $\delta_{ij}$ and $x_ix_j$ by the result of \citet{MR1488158} mentioned in Appendix~\ref{ap:invariant}. We obtain the result by \citet{MR0001702}: \begin{equation*}
\langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle= \varphi_1(\|
\mathbf{y}-\mathbf{x}\|^2)\delta_{ij} +\varphi_2(\|\mathbf{y}-\mathbf{x}\|^2)
\frac{(y_i-x_i)(y_j-x_j)}{\|\mathbf{y}-\mathbf{x}\|^2}. \end{equation*}
When $r=2$ and $U(g)=\mathsf{S}^2(g)$, the three rank~$4$ isotropic tensors are $\delta_{ij}\delta_{kl}$, $\delta_{ik}\delta_{jl}$, and $ \delta_{il}\delta_{jk}$. Consider the group $\Sigma$ of order~$8$ of the permutations of symbols $i$, $j$, $k$, and $l$, generated by the transpositions $(ij)$, $(kl)$, and the product $(ik)(jl)$. The group $\Sigma$ acts on the set of rank~$4$ isotropic tensors and has two orbits. The sums of elements on each orbit are basis isotropic tensors: \begin{equation*} L^1_{ijkl}=\delta_{ij}\delta_{kl},\qquad L^2_{ijkl}=\delta_{ik}\delta_{jl} +\delta_{il}\delta_{jk}. \end{equation*} Consider the case of degree~$2$ and of order~$4$. For the pair of representations $ (g^{\otimes 4},(\mathbb{R}^3)^{\otimes 4})$ and $(g,\mathbb{R}^3)$ we have $ 6 $~covariant tensors: \begin{equation*} \delta_{il}x_jx_k,\delta_{jk}x_ix_{l},\delta_{jl}x_ix_k, \delta_{ik}x_jx_{l},\delta_{kl}x_ix_j,\delta_{ij}x_kx_{l}. \end{equation*} The action of the group $\Sigma$ has $2$~orbits, and the symmetric covariant tensors are \begin{equation*}
\begin{aligned} \|\mathbf{x}\|^2L^3_{ijkl}(\mathbf{x})&=\delta_{il}x_jx_k +\delta_{jk}x_ix_{l}+\delta_{jl}x_ix_k+\delta_{ik}x_jx_{l},\\
\|\mathbf{x}\|^2L^4_{ijkl}(\mathbf{x})&=\delta_{kl}x_ix_j +\delta_{ij}x_kx_{l}. \end{aligned} \end{equation*} In the case of degree~$4$ and of order~$4$ we have only one covariant: \begin{equation*}
\|\mathbf{x}\|^4L^5_{ijkl}(\mathbf{x})=x_ix_jx_kx_{l}. \end{equation*} The result by \citet{Lomakin1964} \begin{equation*} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle
=\sum_{m=1}^{5}\varphi_m(\|\mathbf{y}-\mathbf{x}\|^2) L^m_{ijkl}(\mathbf{y}- \mathbf{x}) \end{equation*} easily follows.
The case of $r=3$ will be considered in details elsewhere.
When $r=4$ and $U(g)=\mathsf{S}^2(S^2(g))$, the situation is more delicate. There are $8$ symmetric isotropic tensors connected by $1$ syzygy, $13$ basic covariant tensors of degree~$2$ and of order~$8$ connected by $3$ ~syzygies, $10$ basic covariant tensors of degree~$4$ and of order~$8$ connected by $2$~syzygies, $3$ basic covariant tensors of degree~$6$ and of order~$8$, and $1$ basic covariant tensor of degree~$8$ and of order~$8$, see \citet{Malyarenko2016a} and \citet{Malyarenko2016} for details. It follows that there are $29$ independent basic covariant tensors. The result by \citet{Lomakin1965} includes only $15$ of them and is therefore incomplete.
How to find the functions $\varphi_m$? In the case of $r=0$, the answer is given by Theorem~\ref{th:1}: \begin{equation*}
\varphi_1(\|\mathbf{y}-\mathbf{x}\|^2)=\int^{\infty}_0 \frac{\sin(\lambda\|
\mathbf{y}-\mathbf{x}\|)} {\lambda\|\mathbf{y}-\mathbf{x}\|}\,\mathrm{d} \mu(\lambda). \end{equation*} In the case of $r=1$, the answer has been found by \citet{MR0094844}: \begin{equation} \label{eq:Yaglom} \begin{aligned}
\varphi_1(\|\mathbf{y}-\mathbf{x}\|^2)&=\frac{1}{\rho^2}\left(\int^{ \infty}_0j_2(\lambda\rho) \,\mathrm{d}\Phi_2(\lambda)-\int^{\infty}_0j_1(\lambda\rho) \,\mathrm{d}\Phi_1(\lambda)\right),\\
\varphi_2(\|\mathbf{y}-\mathbf{x}\|^2)&=\int^{\infty}_0\frac{j_1(\lambda \rho)}{\lambda\rho} \,\mathrm{d}\Phi_1(\lambda) +\int^{\infty}_0\left(j_0(\lambda\rho)-\frac{j_1(\lambda\rho)}{\lambda\rho} \right)\,\mathrm{d}\Phi_2(\lambda), \end{aligned} \end{equation}
where $\rho=\|\mathbf{y}-\mathbf{x}\|$, $j_n$ are the spherical Bessel functions, and $\Phi_1$ and $\Phi_2$ are two finite measures on $[0,\infty)$ with $\Phi_1(\{0\})=\Phi_2(\{0\})$.
In the general case, we proceed in steps. The main idea is simple. We describe all homogeneous random fields and throw away those that are not isotropic. The homogeneous random fields are described by the following result.
\begin{theorem} \label{th:Kolmogorov} Formula \begin{equation} \label{eq:9} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle =\int_{\hat{ \mathbb{R}}^3}e^{\mathrm{i}(\mathbf{p},\mathbf{y}-\mathbf{x})} \,\mathrm{d} \mu(\mathbf{p}) \end{equation} establishes a one-to-one correspondence between the set of the two-point correlation tensors of homogeneous random fields $\mathsf{T}(\mathbf{x})$ on the \emph{space domain} $\mathbb{R}^3$ with values in a \emph{complex} finite-dimensional space $\mathsf{V}_{\mathbb{C}}$ and the set of all measures $\mu$ on the Borel $\sigma$-field $\mathfrak{B}(\hat{\mathbb{R}}^3)$ of the \emph{wavenumber domain} $\hat{\mathbb{R}}^3$ with values in the cone of nonnegative-definite Hermitian operators in $\mathsf{V}_{\mathbb{C}}$. \end{theorem}
This theorem was proved by \citep{MR0003440,MR0003441} for one-dimensional stochastic processes. Kolmogorov's results have been further developed by \citet{MR0006609}, \citet{MR0013259}, \citep{MR0015712,MR0015713} among others.
We would like to write as many formulae as possible in a coordinate-free form, like \eqref{eq:9}. To do that, let $J$ be a \emph{real structure} in the space $\mathsf{V}_{\mathbb{C}}$, that is, a map $j\colon\mathsf{V}_{ \mathbb{C}}\to\mathsf{V}_{\mathbb{C}}$ with
\begin{itemize} \item $J(\mathsf{x}+\mathsf{y})=J(\mathsf{x})+J(\mathsf{y})$, $\mathsf{x}$, $ \mathsf{y}\in\mathsf{V}_{\mathbb{C}}$.
\item $J(\alpha\mathsf{x})=\overline{\alpha}J(\mathsf{x})$, $\mathsf{x}\in \mathsf{V}_{\mathbb{C}}$, $\alpha\in\mathbb{C}$.
\item $J(J(\mathsf{x}))=\mathsf{x}$, $\mathsf{x}\in\mathsf{V}_{\mathbb{C}}$. \end{itemize}
Any tensor $\mathsf{x}\in\mathsf{V}_{\mathbb{C}}$ can be written as $\mathsf{ x}=\mathsf{x}^++\mathsf{x}^-$, where \begin{equation*} \mathsf{x}^+=\frac{1}{2}(\mathsf{x}+J\mathsf{x}),\qquad \mathsf{x}^-=\frac{1 }{2}(\mathsf{x}-J\mathsf{x}). \end{equation*} Denote \begin{equation*} \mathsf{V}^+=\{\,\mathsf{x}\in\mathsf{V}_{\mathbb{C}}\colon J\mathsf{x}= \mathsf{x}\,\},\qquad\mathsf{V}^-=\{\,\mathsf{x}\in\mathsf{V}_{\mathbb{C} }\colon J\mathsf{x}=-\mathsf{x}\,\}. \end{equation*} Both sets $\mathsf{V}^+$ and $\mathsf{V}^-$ are real vector spaces. If the values of the random field $\mathsf{T}(\mathbf{x})$ lie in $\mathsf{V}^+$, then the measure $\mu$ satisfies the condition \begin{equation} \label{eq:10} \mu(-A)=\mu^{\top}(A) \end{equation} for all Borel subsets $A\subseteq\hat{\mathbb{R}}^3$, where $-A=\{\,-\mathbf{ p}\colon\mathbf{p}\in A\,\}$.
Next, the following Lemma can be proved. Let $\mathbf{p}=(\lambda,\varphi_{ \mathbf{p}},\theta_{\mathbf{p}})$ be the spherical coordinates in the wavenumber domain.
\begin{lemma} A homogeneous random field described by \emph{\eqref{eq:9}} and \emph{ \eqref{eq:10}} is isotropic if and only if its two-point correlation tensor has the form \begin{equation} \label{eq:12} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle=\frac{1}{4\pi} \int_{0}^{\infty}\int_{S^2}e^{\mathrm{i}(\mathbf{p},\mathbf{y}-\mathbf{x})} f(\lambda,\varphi_{\mathbf{p}},\theta_{\mathbf{p}})\sin\theta_{\mathbf{p}} \, \mathrm{d}\varphi_{\mathbf{p}}\,\mathrm{d}\theta_{\mathbf{p}}\,\mathrm{d} \nu(\lambda), \end{equation} where $\nu$ is a finite measure on the interval $[0,\infty)$, and where $f$ is a measurable function taking values in the set of all symmetric nonnegative-definite operators on $\mathsf{V}^+$ with unit trace and satisfying the condition \begin{equation} \label{eq:11} f(g\mathbf{p})=\mathsf{S}^2(U)(g)f(\mathbf{p}),\qquad\mathbf{p}\in\hat{ \mathbb{R}}^3, \quad g\in\mathrm{O}(3). \end{equation} \end{lemma}
When $\lambda=0$, condition~\eqref{eq:11} gives $f(\mathbf{0})=\mathsf{S} ^2(U)(g)f(\mathbf{0})$ for all $g\in\mathrm{O}(3)$. In other words, the tensor $f(\mathbf{0})$ lies in the isotypic subspace of the space $\mathsf{S} ^2(\mathsf{V^+})$ that corresponds to the trivial representation of the group $\mathrm{O}(3)$, call it $\mathsf{H}_1$. The intersection of $\mathsf{H }_1$ with the set of all symmetric nonnegative-definite operators on $ \mathsf{V}^+$ with unit trace is a convex compact set, call it $\mathcal{C} _1 $.
When $\lambda>0$, condition~\eqref{eq:11} gives $f(\lambda,0,0)=\mathsf{S} ^2(U)(g)f(\lambda,0,0)$ for all $g\in\mathrm{O}(2)$, because $\mathrm{O}(2)$ is the subgroup of $\mathrm{O}(3)$ that fixes the point $(\lambda,0,0)$. In other words, consider the restriction of the representation $\mathsf{S}^2(U)$ to the subgroup $\mathrm{O}(2)$. The tensor $f(\lambda,0,0)$ lies in the isotypic subspace of the space $\mathsf{S}^2(\mathsf{V^+})$ that corresponds to the trivial representation of the group $\mathrm{O}(2)$, call it $\mathsf{ H}_0$. We have $\mathsf{H}_1\subset\mathsf{H}_0$, because $\mathrm{O}(2)$ is a subgroup of $\mathrm{O}(3)$. The intersection of $\mathsf{H}_0$ with the set of all symmetric nonnegative-definite operators on $\mathsf{V}^+$ with unit trace is a convex compact set, call it $\mathcal{C}_0$.
Fix an orthonormal basis $\mathsf{T}^{0,1,0}$, \dots, $\mathsf{T}^{0,n_0,0}$ of the space $\mathsf{H}_1$. Assume that the space $\mathsf{H}_0\ominus \mathsf{H}_1$ has the non-zero intersection with the spaces of $n_1$ copies of the irreducible representation $U^{2g}$, $n_2$ copies of the irreducible representation $U^{4g}$, \dots, $n_r$ copies of the irreducible representation $U^{2rg}$ of the group $\mathrm{O}(3)$, and let $\mathsf{T} ^{2\ell,n,m}$, $-2\ell\leq m\leq 2\ell$, be the tensors of the Gordienko basis of the $n$th copy of the representation $U^{2\ell g}$. We have \begin{equation} \label{eq:15} f(\lambda,0,0)=\sum_{\ell=0}^{r}\sum_{n=1}^{n_{\ell}}f_{\ell n}(\lambda) \mathsf{T}^{2\ell,n,0} \end{equation} with $f_{\ell n}(0)=0$ for $\ell>0$ and $1\leq n\leq n_{\ell}$. By \eqref{eq:11} we obtain \begin{equation*} f(\lambda,\varphi_{\mathbf{p}},\theta_{\mathbf{p}})=\sum_{\ell=0}^{r} \sum_{n=1}^{n_{\ell}}f_{\ell n}(\lambda)\sum_{m=-2\ell}^{2\ell} U^{2\ell g}_{m0}(\varphi_{\mathbf{p}},\theta_{\mathbf{p}})\mathsf{T}^{2\ell,n,m}. \end{equation*}
Equation~\eqref{eq:12} takes the form \begin{equation} \label{eq:13} \begin{aligned} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle&=\frac{1}{2 \sqrt{\pi}} \sum_{\ell=0}^{r}\sum_{n=1}^{n_{\ell}}\sum_{m=-2\ell}^{2\ell}\int_{0}^{ \infty}\int_{S^2} e^{\mathrm{i}(\mathbf{p},\mathbf{y}-\mathbf{x})}f_{\ell n}(\lambda) \frac{1}{\sqrt{4\ell+1}}\\ &\quad\times S^m_{2\ell}(\varphi_{\mathbf{p}},\theta_{\mathbf{p}}) \mathsf{T}^{2\ell,n,m}\sin\theta_{\mathbf{p}}\,\mathrm{d}\varphi_{ \mathbf{p}}\, \mathrm{d}\theta_{\mathbf{p}}\,\mathrm{d}\nu(\lambda), \end{aligned} \end{equation} where we used the relation \begin{equation*} U^{2\ell g}_{m0}(\varphi_{\mathbf{p}},\theta_{\mathbf{p}})=\sqrt{\frac{4\pi}{ 4\ell+1}} S^m_{2\ell}(\varphi_{\mathbf{p}},\theta_{\mathbf{p}}). \end{equation*}
Substitute the \emph{Rayleigh expansion} \begin{equation*} \mathrm{e}^{\mathrm{i}(\mathbf{p},\mathbf{r})}=4\pi\sum^{\infty}_{\ell=0}
\sum^{\ell}_{m=-\ell}\mathrm{i}^{\ell}j_{\ell}(\|\mathbf{p}\|\cdot\|\mathbf{r
}\|) S^m_{\ell}(\theta_{\mathbf{p}},\varphi_{\mathbf{p}}) S^m_{\ell}(\theta_{ \mathbf{r}},\varphi_{\mathbf{r}}) \end{equation*} into \eqref{eq:13}. We obtain \begin{equation*} \begin{aligned} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle&=2\sqrt{\pi} \sum_{\ell=0}^{r}\sum_{n=1}^{n_{\ell}}\sum_{m=-2\ell}^{2\ell}\int_{0}^{
\infty} (-1)^{\ell}j_{2\ell}(\lambda\|\mathbf{r}\|)f_{\ell n}(\lambda)\frac{1}{\sqrt{4\ell+1}}\\ &\quad\times S^m_{\ell}(\varphi_{\mathbf{r}},\theta_{\mathbf{r}}) \mathsf{T}^{2\ell,n,m}\,\mathrm{d}\nu(\lambda), \end{aligned} \end{equation*} where $\mathbf{r}=\mathbf{y}-\mathbf{x}$. Returning back to the matrix entries $U^{2\ell g}_{m0}(\varphi_{\mathbf{r}},\theta_{\mathbf{r}})$, we have \begin{equation} \label{eq:14} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle=
\int_{0}^{\infty}\sum_{\ell=0}^{r}(-1)^{\ell}j_{2\ell}(\lambda\|\mathbf{r}
\|) \sum_{n=1}^{n_{\ell}}f_{\ell n}(\lambda) M^{2\ell,n}(\varphi_{\mathbf{r} },\theta_{\mathbf{r}})\,\mathrm{d}\nu(\lambda), \end{equation} where \begin{equation*} M^{2\ell,n}(\varphi_{\mathbf{r}},\theta_{\mathbf{r}})=\sum_{m=-2\ell}^{2 \ell} U^{2\ell g}_{m0}(\varphi_{\mathbf{r}},\theta_{\mathbf{r}})\mathsf{T} ^{2\ell,n,m}. \end{equation*}
It is easy to check that the function $M^{2\ell,n}(\varphi_{\mathbf{r} },\theta_{\mathbf{r}})$ is a covariant of degree~$2\ell$ and of order~$2r$. Therefore, the \emph{$M$-function} is a linear combination of basic symmetric covariant tensors, or \emph{$L$-functions}: \begin{equation*} M^{2\ell,n}(\varphi_{\mathbf{r}},\theta_{\mathbf{r}})=\sum_{k=0}^{\ell}
\sum_{q=1}^{q_{kr}}c_{nkq}\frac{L^{2k,q}(\mathbf{y}-\mathbf{x})} {\|\mathbf{y
}-\mathbf{x}\|^{2k}}, \end{equation*} where $q_{kr}$ is the number of linearly independent symmetric covariant tensors of degree~$2k$ and of order~$2r$. The right hand side is indeed a polynomial in sines and cosines of the angles $\varphi_{\mathbf{r}}$ and $ \theta_{\mathbf{r}}$. Equation~\eqref{eq:14} takes the form \begin{equation*} \begin{aligned} \langle\mathsf{T}(\mathbf{x}),\mathsf{T}(\mathbf{y})\rangle&=
\int_{0}^{\infty}\sum_{\ell=0}^{r}(-1)^{\ell}j_{2\ell}(\lambda\|\mathbf{r}
\|) \sum_{n=1}^{n_{\ell}}f_{\ell n}(\lambda)\\ &\quad\times \sum_{k=0}^{\ell}
\sum_{q=1}^{q_{kr}}c_{nkq}\frac{L^{2k,q}(\mathbf{y}-\mathbf{x})} {\|\mathbf{y
}-\mathbf{x}\|^{2k}}\,\mathrm{d}\nu(\lambda). \end{aligned} \end{equation*}
Recall that $f_{\ell n}(\lambda)$ are measurable functions such that the tensor \eqref{eq:15} lies in $\mathcal{C}_1$ for $\lambda=0$ and in $ \mathcal{C}_0$ for $\lambda>0$. The final form of the two-point correlation tensor of the random field $\mathsf{T}(\mathbf{x})$ is determined by geometry of convex compacta $\mathcal{C}_0$ and $\mathcal{C}_1$. For example, in the case of $r=1$ the set $\mathcal{C}_0$ is an interval (see \citet{MR3493458}), while $\mathcal{C}_1$ is a one-point set inside this interval. The set $\mathcal{C}_0$ has two extreme points, and the corresponding random field is a sum of two uncorrelated components given by Equation~\eqref{eq:b1b2} below. The one-point set $\mathcal{C}_1$ lies in the middle of the interval, the condition $\Phi_1(\{0\})=\Phi_2(\{0\})$ follows. In the case of $r=2$, the set of extreme points of the set $ \mathcal{C}_0$ has three connected components: two one-point sets and an ellipse, see \citet{MR3493458}, and the corresponding random field is a sum of three uncorrelated components.
In general, the two-point correlation tensor of the field has the simplest form when the set $\mathcal{C}_0$ is a simplex. We use this idea in Examples~ \ref{ex:2components} and \ref{ex:5component} below.
\section{Examples of Mat\'{e}rn homogeneous and isotropic random fields}
\label{sec:examples}
\begin{example} \label{ex:2components} Consider a centred homogeneous scalar isotropic random field $T(\mathbf{x})$ on the space $\mathbb{R}^3$ with values in the two-dimensional space $\mathbb{R}^2$. It is easy to see that both $\mathcal{C }_0$ and $\mathcal{C}_1$ are equal to the set of all symmetric nonnegative-definite $2\times 2$ matrices with unit trace. Every such matrix has the form \begin{equation*} \begin{pmatrix} x & y \\ y & 1-x \end{pmatrix} \end{equation*} with $x\in[0,1]$ and $y^2\leq x(1-x)$. Geometrically, $\mathcal{C}_0$ and $ \mathcal{C}_1$ are the balls \begin{equation*} \left(x-\frac{1}{2}\right)^2+y^2=\frac{1}{4}. \end{equation*} Inscribe an equilateral triangle with vertices \begin{equation*} C^1= \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} ,\qquad C^{2,3}=\frac{1}{4} \begin{pmatrix} 1 & \pm\sqrt{3} \\ \pm\sqrt{3} & 3 \end{pmatrix} \end{equation*} into the above ball. The function $f(\mathbf{p})$ takes the form \begin{equation*}
f(\mathbf{p})=\sum_{m=1}^{3}a_m(\|\mathbf{p}\|)C^m, \end{equation*}
where $a_m(\|\mathbf{p}\|)$ are the barycentric coordinates of the point $f( \mathbf{p})$ inside the triangle. The two-point correlation tensor of the field takes the form \begin{equation*} \langle T(\mathbf{x}),T(\mathbf{y})\rangle=\sum_{m=1}^{3}\int_{0}^{\infty}
\frac{\sin(\lambda\|\mathbf{y}-\mathbf{x}\|)}{\lambda\|\mathbf{y}-\mathbf{x}
\|} C^m\,\mathrm{d}\Phi_m(\lambda), \end{equation*} where $\mathrm{d}\Phi_m(\lambda)=a_m(\lambda)\mathrm{d}\nu(\lambda)$ are three finite measures on $[0,\infty)$, and $\nu$ is the measure of Equation~ \eqref{eq:12}. Define $\mathrm{d}\Phi_m(\lambda)$ as Mat\'{e}rn spectral densities of Example~\ref{ex:2} (resp. dual Mat\'{e}rn spectral densities of Example~\ref{ex:dual}). We obtain a scalar homogeneous and isotropic Mat\'{e} rn (resp. dual Mat\'{e}rn) random field. \end{example}
\begin{example} \label{ex:2component} Using \eqref{eq:Yaglom} and the well-known formulae \begin{equation*} j_0(t)=\frac{\sin t}{t},\qquad j_1(t)=\frac{\sin t}{t^2}-\frac{\cos t}{t}, \qquad j_2(t)=\left(\frac{3}{t^2}-1\right)\frac{\sin t}{t}-\frac{3\cos t}{t^2 }, \end{equation*} we write the two-point correlation tensor of rank~$1$ homogeneous and isotropic random field in the form \begin{equation*} \langle\mathbf{v}(\mathbf{x}),\mathbf{v}(\mathbf{y})\rangle =B^{(1)}_{ij}( \mathbf{r})+B^{(2)}_{ij}(\mathbf{r}), \end{equation*} where $\mathbf{r}=\mathbf{y}-\mathbf{x}$, and \begin{equation} \label{eq:b1b2} \begin{aligned} B^{(1)}_{ij}(\mathbf{x},\mathbf{y})&=\int_{0}^{\infty}\left[\left(
-\frac{3\sin(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^3}
+\frac{\sin(\lambda\|\mathbf{r}\|)}{\lambda\|\mathbf{r}\|}
+\frac{3\cos(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^2}
\right)\frac{r_ir_j}{\|\mathbf{r}\|^2}\right.\\
&\quad+\left.\left(\frac{\sin(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}
\|)^3}
-\frac{\cos(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^2}\right) \delta_{ij} \right]\,\mathrm{d}\Phi_1(\lambda),\\ B^{(2)}_{ij}(\mathbf{x},\mathbf{y})&=\int_{0}^{\infty}\left[\left(
\frac{3\sin(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^3}
-\frac{\sin(\lambda\|\mathbf{r}\|)}{\lambda\|\mathbf{r}\|}
-\frac{3\cos(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^2}
\right)\frac{r_ir_j}{\|\mathbf{r}\|^2}\right.\\
&\quad+\left.\left(\frac{\sin(\lambda\|\mathbf{r}\|)}{\lambda\|\mathbf{r}\|}
-\frac{\sin(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^3}
+\frac{\cos(\lambda\|\mathbf{r}\|)}{(\lambda\|\mathbf{r}\|)^2}\right) \delta_{ij} \right]\,\mathrm{d}\Phi_2(\lambda). \end{aligned} \end{equation}
Now assume that the measures $\Phi_1$ and $\Phi_2$ are described by Mat\'{e} rn densities: \begin{equation*} \mathrm{d}\Phi_i(\lambda)=2\pi\lambda^2\frac{\sigma_i^{2}\Gamma \left( \nu_i +\frac{3}{2}\right) a_i^{2\nu_i}}{2\pi ^{3/2}\left( a_i^{2}+\lambda^{2}\right) ^{\nu_i +\frac{3}{2}}},\qquad i=1,2. \end{equation*} It is possible to substitute these densities to \eqref{eq:b1b2} and calculate the integrals using \citet[Equation~2.5.9.1]{MR874986}. We obtain rather long expressions that include the generalised hypergeometric function ${}_1F_2$.
The situation is different for the dual model: \begin{equation*} \mathrm{d}\Phi_i(\lambda)=\frac{1}{(2\pi)^22^{\nu_i-1}\Gamma(\nu_i+3/2)} \lambda^{\nu+2}K_{\nu}(\lambda). \end{equation*} Using \citet[Equations 2.16.14.3, 2.16.14.4]{MR950173}, we obtain \begin{equation*} \begin{aligned}
B^{(1)}_{ij}(\mathbf{x},\mathbf{y})&=C_1\left(-\frac{3\pi\Gamma(2\nu_1)}{4\|
\mathbf{r}\|^3 (1+\|\mathbf{r}\|^2)^{\nu_1/2}} \left[P^{-\nu_1}_{\nu_1-1}\left(
\frac{\|\mathbf{r}\|}{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right.\right.\\
&\quad\left.\left.-P^{-\nu_1}_{\nu_1-1}\left(-\frac{\|\mathbf{r}\|}
{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right]+\frac{2^{\nu_1}\sqrt{\pi}\Gamma(
\nu_1+3/2)} {(1+\|\mathbf{r}\|^2)^{\nu_1+3/2}}\right.\\ &\quad+\left.\frac{3\cdot 2^{\nu_1-1}\sqrt{\pi}\Gamma(\nu_1+1/2)}
{(1+\|\mathbf{r}\|^2)^{\nu_1+1/2}}\right)\frac{r_ir_j}{\|\mathbf{r}\|^2}\\
&\quad+C_1\left(\frac{\pi\Gamma(2\nu_1)}{4\|\mathbf{r}\|^3
(1+\|\mathbf{r}\|^2)^{\nu_1/2}}\left[P^{-\nu_1}_{\nu_1-1}\left(
\frac{\|\mathbf{r}\|}{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right.\right.\\
&\quad\left.\left.-P^{-\nu_1}_{\nu_1-1}\left(-\frac{\|\mathbf{r}\|}
{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right]-\frac{2^{\nu_1}\sqrt{\pi}\Gamma(
\nu_1+1/2)} {(1+\|\mathbf{r}\|^2)^{\nu_1+3/2}}\right)\delta_{ij},\\ \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned}
B^{(2)}_{ij}(\mathbf{x},\mathbf{y})&=C_2\left(\frac{3\pi\Gamma(2\nu_2)}{4\|
\mathbf{r}\|^3 (1+\|\mathbf{r}\|^2)^{\nu_2/2}} \left[P^{-\nu_2}_{\nu_2-1}\left(
\frac{\|\mathbf{r}\|}{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right.\right.\\
&\quad\left.\left.-P^{-\nu_2}_{\nu_2-1}\left(-\frac{\|\mathbf{r}\|}
{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right]-\frac{2^{\nu_2}\sqrt{\pi}\Gamma(
\nu_2+3/2)} {(1+\|\mathbf{r}\|^2)^{\nu_2+3/2}}\right.\\ &\quad-\left.\frac{3\cdot 2^{\nu_2-1}\sqrt{\pi}\Gamma(\nu_2+1/2)}
{(1+\|\mathbf{r}\|^2)^{\nu_2+1/2}}\right)\frac{r_ir_j}{\|\mathbf{r}\|^2}\\
&\quad+C_2\left(\frac{\sqrt{\pi}\Gamma(\nu_2+3/2)2^{\nu_2}}{\|\mathbf{r}\|
(1+\|\mathbf{r}\|^2)^{\nu_2+3/2}}-\frac{\pi\Gamma(2\nu_2)}{4\|\mathbf{r}\|^3
(1+\|\mathbf{r}\|^2)^{\nu_2/2}}\right.\\ &\quad\times\left[P^{-\nu_2}_{\nu_2-1}\left(
\frac{\|\mathbf{r}\|}{\sqrt{1+\|\mathbf{r}\|^2}}\right)-P^{-\nu_2}_{\nu_2-1}
\left(-\frac{\|\mathbf{r}\|}{\sqrt{1+\|\mathbf{r}\|^2}}\right)\right]\\ &\quad-\left.\frac{2^{\nu_2-1}\sqrt{\pi}\Gamma(\nu_2+1/2)}
{\|\mathbf{r}\|^2(1+\|\mathbf{r}\|^2)^{\nu_2+1/2}}\right)\delta_{ij},\\ \end{aligned} \end{equation*} where \begin{equation*} C_i=\frac{1}{(2\pi)^22^{\nu_i-1}\Gamma(\nu_i+3/2)}. \end{equation*} \end{example}
\begin{example} \label{ex:5component} Consider the case when $r=2$ and $U(g)=\mathsf{S}^2(g)$ . In order to write down symmetric rank~$4$ tensors in a compressed matrix form, consider an orthogonal operator $\tau$ acting from $\mathsf{S}^2( \mathsf{S}^2(\mathbb{R}^3))$ to $\mathsf{S}^2(\mathbb{R}^6)$ as follows: \begin{equation*} \tau f_{ijkl}=\left( \begin{smallmatrix} f_{-1-1-1-1} & f_{-1-100} & f_{-1-111} & \sqrt{2}f_{-1-1-10} & \sqrt{2} f_{-1-101} & \sqrt{2}f_{-1-11-1} \\ f_{00-1-1} & f_{0000} & f_{0011} & \sqrt{2}f_{00-10} & \sqrt{2}f_{0001} & \sqrt{2}f_{001-1} \\ f_{11-1-1} & f_{1100} & f_{1111} & \sqrt{2}f_{11-10} & \sqrt{2}f_{1101} & \sqrt{2}f_{111-1} \\ \sqrt{2}f_{-10-1-1} & \sqrt{2}f_{-1000} & \sqrt{2}f_{-1011} & 2f_{-10-10} & 2f_{-1001} & 2f_{-101-1} \\ \sqrt{2}f_{01-1-1} & \sqrt{2}f_{0100} & \sqrt{2}f_{0111} & 2f_{01-10} & 2f_{0101} & 2f_{011-1} \\ \sqrt{2}f_{1-1-1-1} & \sqrt{2}f_{1-100} & \sqrt{2}f_{1-111} & 2f_{1-1-10} & 2f_{1-101} & 2f_{1-11-1} \end{smallmatrix} \right), \end{equation*} see \cite[Equation~(44)]{MR1816224}. It is possible to prove the following. The matrix $\tau f_{ijkl}(\mathbf{0})$ lies in the interval $\mathcal{C}_1$ with extreme points $C^1$ and $C^2$, where the nonzero elements of the symmetric matrix $C^1$ lying on and over the main diagonal are as follows: \begin{equation*} C^1_{11}=C^1_{12}=C^1_{13}=C^1_{22}=C^1_{23}=C^1_{33}=\frac{1}{3}, \end{equation*} while those of the matrix $C^2$ are \begin{equation*} \begin{aligned} C^2_{11}&=C^2_{22}=C^2_{33}=\frac{2}{15},\qquad C^2_{44}=C^2_{55}=C^2_{66}= \frac{1}{5},\\ C^2_{12}&=C^2_{13}=C^2_{23}=-\frac{1}{15}. \end{aligned} \end{equation*} The matrix $\tau f_{ijkl}(\lambda,0,0)$ with $\lambda>0$ lies in the convex compact set $\mathcal{C}_0$. The set of extreme points of $\mathcal{C}_0$ contains three connected components. The first component is the one-point set $\{D^1\}$ with \begin{equation*} D^1_{44}=D^1_{66}=\frac{1}{2}. \end{equation*} The second component is the one-point set $\{D^2\}$ with \begin{equation*} D^2_{11}=D^2_{33}=\frac{1}{4},\qquad D^2_{55}=\frac{1}{2},\qquad D^2_{13}=- \frac{1}{4}. \end{equation*} The third component is the ellipse $\{\,D^{\theta}\colon 0\leq\theta<2\pi\,\} $ with \begin{equation*} \begin{aligned} D^{\theta}_{11}&=D^{\theta}_{33}=D^{\theta}_{13}=\frac{1}{2} \sin^2(\theta/2),\qquad D^{\theta}_{22}=\cos^2(\theta/2),\\ D^{\theta}_{12}&=D^{\theta}_{23}=\frac{1}{2\sqrt{2}}\sin(\theta). \end{aligned} \end{equation*}
Choose three points $D^3$, $D^4$, $D^5$ lying on the above ellipse. If we allow the matrix $\tau f_{ijkl}(\lambda,0,0)$ with $\lambda>0$ to take values in the simplex with vertices $D^i$, $1\leq i\leq 5$, then the two-point correlation tensor of the random field $\varepsilon(\mathbf{x})$ is the sum of five integrals. The more the four-dimensional Lebesgue measure of the simplex in comparison with that of $\mathcal{C}_0$, the wider class of random fields is described.
Note that the simplex should contain the set $\mathcal{C}_1$. The matrix $ C^1 $ lies on the ellipse and corresponds to the value of $\theta=2\arcsin( \sqrt{2/3})$. It follows that one of the above points, say $D^3$, must be equal to $C^1$. If we choose $D^4$ to correspond to the value of $ \theta=2(\pi-\arcsin(\sqrt{2/3}))$, that is, \begin{equation*} D^4_{11}=D^4_{33}=D^4_{13}=\frac{1}{6},\qquad D^4_{22}=\frac{2}{3},\qquad D^4_{12}=D^4_{23}=-\frac{1}{3}, \end{equation*} then \begin{equation*} C^2=\frac{2}{5}(D^1+D^2)+\frac{1}{5}D^4, \end{equation*} and $C^2$ lies in the simplex. Finally, choose $D_5$ to correspond to the value of $\theta=\pi$, that is \begin{equation*} D^5_{11}=D^5_{33}=D^5_{13}=\frac{1}{2}. \end{equation*} The constructed simplex is not the one with maximal possible Lebesgue measure, but the coefficients in formulas are simple.
\begin{theorem} Let $\varepsilon(\mathbf{x})$ be a random field that describes the stress tensor of a deformable body. The following conditions are equivalent.
\begin{enumerate} \item The matrix $\tau f_{ijkl}(\lambda,0,0)$ with $\lambda>0$ takes values in the simplex described above.
\item The correlation tensor of the field has the spectral expansion \begin{equation*} \langle\varepsilon(\mathbf{x}),\varepsilon(\mathbf{y})\rangle=
\sum^5_{n=1}\int^{\infty}_0\sum^5_{q=1} \tilde{N}_{nq}(\lambda,\|\mathbf{r}
\|)L^q_{ijkl}(\mathbf{r})\,\mathrm{d}\Phi_n(\lambda), \end{equation*} where the non-zero functions $\tilde{N}_{nq}(\lambda,r)$ are given in Table~ \emph{\ref{tab:3}}, and where $\Phi_n(\lambda)$ are five finite measures on $ [0,\infty)$ with \begin{equation*} \Phi_1(\{0\})=\Phi_2(\{0\})=2\Phi_4(\{0\}),\qquad \Phi_5(\{0\})=0. \end{equation*} \end{enumerate} \end{theorem}
Assume that all measures $\Phi_n$ are absolutely continuous and their densities are either the Mat\'{e}rn or the dual Mat\'{e}rn densities. The two-point correlation tensors of the corresponding random fields can be calculated in exactly the same way as in Example~\ref{ex:2component}.
\begin{table}[tbp] \caption{The functions $\tilde{N}_{nq}(\protect\lambda,r)$} \label{tab:3}
\begin{tabular}{|l|l|l|} \hline $n$ & $q$ & $N_{nq}(\lambda,r)$ \\ \hline 1 & 1 & $-\frac{1}{15}j_0(\lambda r)-\frac{2}{21}j_2(\lambda r)-\frac{1}{35} j_4(\lambda r)$ \\ 1 & 2 & $\frac{1}{10}j_0(\lambda r)+\frac{1}{14}j_2(\lambda r)-\frac{1}{35} j_4(\lambda r)$ \\ 1 & 3 & $-\frac{3}{28}j_2(\lambda r)+\frac{1}{7}j_4(\lambda r)$ \\ 1 & 4 & $\frac{1}{7}j_2(\lambda r)+\frac{1}{7}j_4(\lambda r)$ \\ 1 & 5 & $-j_4(\lambda r)$ \\ 2 & 1 & $-\frac{1}{15}j_0(\lambda r)+\frac{4}{21}j_2(\lambda r)+\frac{1}{140} j_4(\lambda r)$ \\ 2 & 2 & $\frac{1}{10}j_0(\lambda r)-\frac{1}{7}j_2(\lambda r)+\frac{1}{140} j_4(\lambda r)$ \\ 2 & 3 & $\frac{3}{14}j_2(\lambda r)-\frac{1}{28}j_4(\lambda r)$ \\ 2 & 4 & $-\frac{2}{7}j_2(\lambda r)-\frac{1}{28}j_4(\lambda r)$ \\ 2 & 5 & $\frac{1}{4}j_4(\lambda r)$ \\ 3 & 1 & $\frac{1}{3}j_0(\lambda r)$ \\ 4 & 1 & $-\frac{1}{135}j_0(\lambda r)-\frac{4}{21}j_2(\lambda r)+\frac{3}{70} j_4(\lambda r)$ \\ 4 & 2 & $\frac{1}{90}j_0(\lambda r)+\frac{1}{7}j_2(\lambda r)+\frac{3}{70} j_4(\lambda r)$ \\ 4 & 3 & $-\frac{3}{14}j_2(\lambda r)-\frac{3}{14}j_4(\lambda r)$ \\ 4 & 4 & $\frac{2}{7}j_2(\lambda r)-\frac{3}{14}j_4(\lambda r))$ \\ 4 & 5 & $\frac{3}{2}j_4(\lambda r)$ \\ 5 & 1 & $\frac{1}{5}j_0(\lambda r)-\frac{2}{7}j_2(\lambda r)+\frac{1}{70} j_4(\lambda r)$ \\ 5 & 2 & $\frac{1}{30}j_0(\lambda r)+\frac{2}{21}j_2(\lambda r)+\frac{1}{70} j_4(\lambda r)$ \\ 5 & 3 & $\frac{1}{14}j_2(\lambda r)-\frac{1}{14}j_4(\lambda r)$ \\ 5 & 4 & $\frac{5}{21}j_2(\lambda r)-\frac{1}{14}j_4(\lambda r)$ \\ 5 & 5 & $\frac{1}{2}j_4(\lambda r)$ \\ \hline \end{tabular} \end{table}
Introduce the following notation: \begin{equation*} \begin{aligned} \mathsf{T}^{0,1}_{ijkl}&=\frac{1}{3}\delta_{ij}\delta_{kl},\\ \mathsf{T}^{0,2}_{ijkl}&=\frac{1}{\sqrt{5}} \sum_{n=-2}^{2}g^{n[i,j]}_{2[1,1]} g^{n[k,l]}_{2[1,1]},\\ \mathsf{T}^{2,1,m}_{ijkl}&=\frac{1}{\sqrt{6}}(\delta_{ij}g^{m[k,l]}_{2[1,1]} +\delta_{kl}g^{m[i,j]}_{2[1,1]}),\qquad -2\leq m\leq 2,\\ \mathsf{T}^{2,2,m}_{ijkl}&= \sum_{n,q=-2}^{2}g^{m[n,q]}_{2[2,2]}g^{n[i,j]}_{2[1,1]} g^{q[k,l]}_{2[1,1]},\qquad -2\leq m\leq 2,\\ \mathsf{T}^{4,1,m}_{ijkl}&= \sum_{n,q=-4}^{4}g^{m[n,q]}_{4[2,2]}g^{n[i,j]}_{2[1,1]} g^{q[k,l]}_{2[1,1]},\qquad -4\leq m\leq 4. \end{aligned} \end{equation*} Consider the five nonnegative-definite matrices $A^n$, $1\leq n\leq 5$, with the following matrix entries: \begin{equation*} \begin{aligned} a^{\ell''m''kl,1}_{\ell'm'ij}&=\sqrt{(2\ell'+1)(2\ell''+1)}\left(\frac{1}{ \sqrt{5}} \mathsf{T}^{0,2}_{ijkl}g^{0[m',m'']}_{0[\ell',\ell'']}g^{0[0,0]}_{0[\ell', \ell'']}\right.\\ &\quad-\left.\frac{1}{5\sqrt{14}}\sum_{m=-2}^{2}\mathsf{T}^{2,2,m}_{ijkl} g^{m[m',m'']}_{2[\ell',\ell'']}g^{0[0,0]}_{2[\ell',\ell'']} -\frac{2\sqrt{2}}{9\sqrt{35}}\sum_{m=-4}^{4}\mathsf{T}^{4,1,m}_{ijkl} g^{m[m',m'']}_{4[\ell',\ell'']}g^{0[0,0]}_{4[\ell',\ell'']}\right),\\ a^{\ell''m''kl,2}_{\ell'm'ij}&=\sqrt{(2\ell'+1)(2\ell''+1)}\left(\frac{1}{ \sqrt{5}} \mathsf{T}^{0,2}_{ijkl}g^{0[m',m'']}_{0[\ell',\ell'']}g^{0[0,0]}_{0[\ell', \ell'']}\right.\\ &\quad+\left.\frac{\sqrt{2}}{5\sqrt{7}}\sum_{m=-2}^{2} \mathsf{T}^{2,2,m}_{ijkl} g^{m[m',m'']}_{2[\ell',\ell'']}g^{0[0,0]}_{2[\ell',\ell'']} +\frac{1}{9\sqrt{70}}\sum_{m=-4}^{4}\mathsf{T}^{4,1,m}_{ijkl} g^{m[m',m'']}_{4[\ell',\ell'']}g^{0[0,0]}_{4[\ell',\ell'']}\right),\\ a^{\ell''m''kl,3}_{\ell'm'ij}&=\sqrt{(2\ell'+1)(2\ell''+1)} \mathsf{T}^{0,1}_{ijkl}g^{0[m',m'']}_{0[\ell',\ell'']}g^{0[0,0]}_{0[\ell', \ell'']},\\ a^{\ell''m''kl,4}_{\ell'm'ij}&=\sqrt{(2\ell'+1)(2\ell''+1)}\left(\frac{1}{9 \sqrt{5}} \mathsf{T}^{0,2}_{ijkl}g^{0[m',m'']}_{0[\ell',\ell'']}g^{0[0,0]}_{0[\ell', \ell'']}\right.\\ &\quad-\left.\frac{\sqrt{2}}{5\sqrt{7}}\sum_{m=-2}^{2} \mathsf{T}^{2,2,m}_{ijkl} g^{m[m',m'']}_{2[\ell',\ell'']}g^{0[0,0]}_{2[\ell',\ell'']} +\frac{\sqrt{2}}{3\sqrt{35}}\sum_{m=-4}^{4}\mathsf{T}^{4,1,m}_{ijkl} g^{m[m',m'']}_{4[\ell',\ell'']}g^{0[0,0]}_{4[\ell',\ell'']}\right),\\ a^{\ell''m''kl,5}_{\ell'm'ij}&=\sqrt{(2\ell'+1)(2\ell''+1)}\left(\left( \frac{2}{3} \mathsf{T}^{0,1}_{ijkl}+\frac{1}{3\sqrt{5}} \mathsf{T}^{0,2}_{ijkl}\right)g^{0[m',m'']}_{0[\ell',\ell'']}g^{0[0,0]}_{0[ \ell',\ell'']}\right.\\ &\quad+\left(\frac{2}{9}\sum_{m=-2}^{2}\mathsf{T}^{2,1,m}_{ijkl} -\frac{\sqrt{2}}{9\sqrt{7}}\sum_{m=-2}^{2}\mathsf{T}^{2,2,m}_{ijkl} \right)g^{m[m',m'']}_{2[\ell',\ell'']}g^{0[0,0]}_{2[\ell',\ell'']}\\ &\quad+\left.\frac{\sqrt{2}}{9\sqrt{35}}\sum_{m=-4}^{4} \mathsf{T}^{4,1,m}_{ijkl} g^{m[m',m'']}_{4[\ell',\ell'']}g^{0[0,0]}_{4[\ell',\ell'']}\right), \end{aligned} \end{equation*} and let $L^n$ be infinite lower triangular matrices from Cholesky factorisation of the matrices $A^n$.
\begin{theorem} The following conditions are equivalent.
\begin{enumerate} \item The matrix $\tau f_{ij\ell m}(\lambda,0,0)$ with $\lambda>0$ takes values in the simplex described above.
\item The field $\varepsilon(\mathbf{x})$ has the form \begin{equation*} \varepsilon_{ij}(\rho,\theta,\varphi)=C\delta_{ij}+2\sqrt{\pi} \sum_{n=1}^{5}\sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell}\int_{0}^{\infty}j_{\ell}(\lambda\rho)\,\mathrm{d} Z^{n^{\prime}}_{\ell mij}(\lambda)S^m_{\ell}(\theta,\varphi), \end{equation*} where \begin{equation*} Z^{n^{\prime}}_{\ell mij}(A)=\sum_{(\ell^{\prime},m^{\prime},k,l)\leq(\ell,m,i,j)}Z^n_{\ell^{ \prime}m^{\prime}kl}(A), \end{equation*} and where $Z^n_{\ell^{\prime}m^{\prime}kl}$ is the sequence of uncorrelated scattered random measures on $[0,\infty)$ with control measures $\Phi_n$. \end{enumerate} \end{theorem}
The idea of proof is as follows. Write down the Rayleigh expansion for $ \mathrm{e}^{\mathrm{i}(\mathbf{p},\mathbf{x})}$ and for $\mathrm{e}^{- \mathrm{i}(\mathbf{p},\mathbf{y})}$ separately,substitute both expansions into \eqref{eq:13} and use the following result, known as the \emph{Gaunt integral}: \begin{equation*} \begin{aligned} \int_{S^2}S^{m_1}_{\ell_1}(\theta,\varphi)S^{m_2}_{\ell_2}(\theta,\varphi) S^{m_3}_{\ell_3}(\theta,\varphi)\sin\theta\,\mathrm{d}\varphi\, \mathrm{d} \theta&=\sqrt{\frac{(2\ell_1+1)(2\ell_2+1)}{4\pi(2\ell_3+1)}}\\ &\quad\times g^{m_3[m_1,m_2]}_{\ell_3[\ell_1,\ell_2]}g^{0[0,0]}_{\ell_3[\ell_1,\ell_2]}. \end{aligned} \end{equation*} This theorem can be proved exactly in the same way, as its complex counterpart, see, for example, \citet{MR2840154}. Then apply Karhunen's theorem, see \citet{MR0023013}. \end{example}
\textbf{Acknowledgements}. Nikolai N. Leonenko was supported in part by projects MTM2012-32674 (co-funded by European Regional Development Funds), and MTM2015--71839--P, MINECO, Spain. This research was also supported under Australian Research Council's Discovery Projects funding scheme (project number DP160101366), and under Cardiff Incoming Visiting Fellowship Scheme and International Collaboration Seedcorn Fund.
Anatoliy Malyarenko is grateful to Professor Martin Ostoja-Starzewski for useful him to probabilistic models of continuum physics and fruitful discussions.
\appendix
\section{Tensors}
\label{ap:tensors}
There are several equivalent definitions of tensors. Surprisingly, the most abstract of them is useful in the theory of random fields.
Let $r$ be a nonnegative integer, and let $V_1$, \dots, $V_r$ be linear spaces over the same field $\mathbb{K}$. When $r=0$, define the tensor product of the empty family of spaces as $\mathbb{K}^1$, the one-dimensional linear space over $\mathbb{K}$.
\begin{theorem}[The universal mapping property] There exist a unique linear space $V_1\otimes\cdots\otimes V_r$ and a unique linear operator $\tau\colon V_1\times V_2\times\cdots\times V_r\to V_1\otimes\cdots\otimes V_r$ that satisfy the \emph{universal mapping property}: for any linear space $W$ and for any multilinear map $\beta\colon V_1\times V_2\times\cdots\times V_r\to W$, there exists a unique \emph{linear } operator $B\colon V_1\otimes\cdots\otimes V_r\to X$ such that $ \beta=B\circ\tau$: \begin{equation*} \xymatrix{V_1\times V_2\times\cdots\times V_r\ar[dr]_{\beta}\ar[r]^{\tau} & V_1\otimes\cdots\otimes V_r\ar[d]^B\\ & W} \end{equation*} \end{theorem}
In other words: \emph{the construction of the tensor product of linear spaces reduces the study of multilinear mappings to the study of linear ones} .
The tensor product $\mathbf{v}_1\otimes\cdots\otimes\mathbf{v}_r$ of the vectors $\mathbf{v}_i\in V_i$, $1\leq i\leq r$, is defined by \begin{equation*} \mathbf{v}_1\otimes\cdots\otimes\mathbf{v}_r=\tau(\mathbf{v}_1,\dots,\mathbf{ v}_r). \end{equation*}
Let $V_1$, \dots, $V_r$, $W_1$, \dots, $W_r$ be finite-dimensional linear spaces, and let $A_i\in L(V_i,W_i)$ for $1\leq i\leq r$. The tensor product of linear operators, $A_1\otimes\cdots\otimes A_r$, is a unique element of the space $L(V_1\otimes\cdots\otimes V_r,W_1\otimes\cdots\otimes W_r)$ such that \begin{equation*} (A_1\otimes\cdots\otimes A_r)(\mathbf{v}_1\otimes\cdots\otimes\mathbf{v} _r):=A_1(\mathbf{v}_1)\otimes \cdots\otimes A_r(\mathbf{v}_r),\qquad\mathbf{v }_i\in V_i. \end{equation*}
If all the spaces $V_i$, $1\leq i\leq r$, are copies of the same space $V$, then we write $V^{\otimes r}$ for the $r$-fold tensor product of $V$ with itself, and $\mathbf{v}^{\otimes r}$ for the tensor product of $r$ copies of a vector $\mathbf{v}\in V$. Similarly, for $A\in L(V,V)$ we write $ A^{\otimes r}$ for the $r$-fold tensor product of $A$ with itself. Note that $A^{\otimes 0}$ is the identity operator in the space $\mathbb{K}^1$.
\section{Group representations}
Let $G$ be a topological group. A \emph{finite-dimensional representation} of $G$ is a pair $(\rho,V)$, where $V$ is a finite-dimensional linear space, and $\rho\colon G\to\mathrm{GL}(V)$ is a continuous group homomorphism. Here $\mathrm{GL}(V)$ is the \emph{general linear group of order} $n$, or the group of all invertible $n\times n$ matrices. In what follows, we omit the word ``finite-dimensional'' unless infinite-dimensional representations are under consideration.
In a coordinate form, a representation of $G$ is a continuous group homomorphism $\rho\colon G\to\mathrm{GL}(n,\mathbb{K})$ and the space $ \mathbb{K}^n$.
Let $W\subseteq V$ be a linear subspace of the space~$V$. $W$ is called an \emph{invariant subspace} of the representation $(\rho,V)$ if $\rho(g) \mathbf{w}\in W$ for all $g\in G$ and $\mathbf{w}\in W$. The restriction of $ \rho$ to $W$ is then a representation $(\sigma,W)$ of $G$. Formula \begin{equation*} \tau(g)(\mathbf{v}+W):=\rho(g)\mathbf{v}+W \end{equation*} defines a representation $(\tau,V/W)$ of $G$ in the quotient space $V/W$.
In a coordinate form, take a basis for $W$ and complete it to a basis for $V$ . The matrix of $\rho(g)$ relative to the above basis is \begin{equation} \label{eq:matrixform} \rho(g)= \begin{pmatrix} \sigma(g) & * \\ 0 & \tau(g) \end{pmatrix} . \end{equation}
Let $(\rho,V)$ and $(\tau,W)$ be representations of $G$. An operator $A\in L(V,W)$ is called an \emph{intertwining operator} if \begin{equation} \label{eq:intertwining} \tau(g)A=A\rho(g),\qquad g\in G. \end{equation} The intertwining operators form a linear space $L_G(V,W)$ over $\mathbb{F}$.
The representations $(\rho,V)$ and $(\tau,W)$ are called \emph{equivalent} if the space $L_G(V,W)$ contains an invertible operator. Let $A$ be such an operator. Multiply \eqref{eq:intertwining} by $A^{-1}$ from the right. We obtain \begin{equation*} \tau(g)=A\rho(g)A^{-1},\qquad g\in G. \end{equation*} In a coordinate form, $\tau(g)$ and $\rho(g)$ are matrices of the same presentation, written in two different bases, and $A$ is the transition matrix between the bases.
A representation $(\rho,V)$ with $V\neq\{\mathbf{0}\}$ is called \emph{ reducible} if if there exists an invariant subspace $W\notin\{\{\mathbf{0} \},V\}$. In a coordinate form, all blocks of the matrix \eqref{eq:matrixform} are nonempty. Otherwise, the representation is called \emph{irreducible}.
\begin{example} Let $G=\mathrm{O}(3)$. The mapping $g\mapsto g^{\otimes r}$ is a representation of the group~$G$ in the space $(\mathbb{R}^3)^{\otimes r}$. When $r=0$, this representation is called \emph{trivial}, when $r=1$, it is called \emph{defining}. When $r\geq 2$, this representation is reducible. \end{example}
From now on we suppose that the topological group $G$ is compact. There exists an inner product $(\boldsymbol{\cdot},\boldsymbol{\cdot})$ on $V$ such that \begin{equation*} (\rho(g)\mathbf{v},\rho(g)\mathbf{w})=(\mathbf{v},\mathbf{w}),\qquad\mathbf{v }, \mathbf{w}\in V. \end{equation*} In a coordinate form, we can choose an orthonormal basis in $V$. If $V$ is a complex linear space, then the representation $(\rho,V)$ takes values in $ \mathrm{U}(n)$, the group of $n\times n$ unitary matrices, and we speak of a \emph{unitary representation} If $V$ is a real linear space, then the representation $(\rho,V)$ takes values in $\mathrm{O}(n)$, and we speak of an \emph{orthogonal representation}.
Let $(\pi,V)$ and $(\rho,W)$ be representations of $G$. The \emph{direct sum of representations} is the representation $(\pi\oplus\rho,V\oplus W)$ acting by \begin{equation*} (\pi\oplus\rho)(g)(\mathbf{v}\oplus\mathbf{w}):=\pi(g)\mathbf{v} \oplus\rho(g) \mathbf{w},\qquad g\in G,\quad\mathbf{v}\in V,\quad\mathbf{w} \in W. \end{equation*} In a coordinate form, we have \begin{equation} \label{eq:reducible} \pi\oplus\rho(g)= \begin{pmatrix} \pi(g) & 0 \\ 0 & \rho(g) \end{pmatrix} . \end{equation}
Consider the action $\pi\otimes\rho$ of the group $G$ on the set of tensor products $\mathbf{v}\otimes\mathbf{w}$ defined by \begin{equation*} (\pi\otimes\rho)(g)(\mathbf{v}\otimes\mathbf{w}):=\pi(g)\mathbf{v} \otimes\rho(g) \mathbf{w},\qquad g\in G,\quad\mathbf{v}\in V,\quad\mathbf{w} \in W. \end{equation*} This action may be extended by linearity to the \emph{tensor product of representations} $(\pi\otimes\rho,V\otimes W)$. In a coordinate form, $ (\pi\otimes\rho)(g)$ is a rank $4$ tensor with components \begin{equation*} \mathsf{T}_{ijkl}(g)=\pi_{ij}(g)\rho_{kl}(g),\qquad g\in G. \end{equation*}
A representation $(\sigma,V)$ of a group $G$ is called \emph{completely reducible} if for every invariant subspace $W\subset V$ there exists an invariant subspace $U\subset V$ such that $V=W\oplus U$. In a coordinate form, any basis $\{\mathbf{w}_1,\dots,\mathbf{w}_p\}$ for $W$ can be completed to a basis $\{\mathbf{w}_1,\dots,\mathbf{w}_p,\mathbf{u}_1,\dots, \mathbf{u}_q\}$ for $V$ such that the span of the vectors $\mathbf{u}_1$ ,\dots,$\mathbf{u}_q$ is invariant. The matrix $\sigma(g)$ in the above basis has the form \eqref{eq:reducible}. Any representation of a compact group is completely reducible.
Let $(\rho,V)$ be an irreducible representation of a group $G$. Denote by $ [\rho]$ the equivalence class of all representations of $G$ equivalent to $ (\rho,V)$ and by $\hat{G}$ the set of all equivalence classes of irreducible representations of $G$. For any finite-dimensional representation $ (\sigma,V) $ of $G$, there exists finitely many equivalence classes $[\rho_1] $, \dots, $[\rho_k]\in\hat{G}$ and uniquely determined positive integers $m_1 $, \dots, $m_k$ such that $(\sigma,V)$ is equivalent to the direct sum of $ m_1$ copies of the representation $(\rho_1,V_1)$, \dots, $m_k$ copies of the representation $(\rho_k,V_k)$. The direct sum $m_iV_i$ of $m_i$ copies of the linear space $V_i$ is called the \emph{isotypic subspace} of the space $ V $ that corresponds to the representation $(\rho_i,V_i)$. The numbers $m_i$ are called the \emph{multiplicities} of the irreducible representation $ (\rho_i,V_i)$ in $(\sigma,V)$. The decompositions $V=\sum m_iV_i$ and $ \sigma=\sum m_i\rho_i$ are called the \emph{isotypic decompositions}.
Assume that a compact group $G$ is \emph{easy reducible}. This means that for any three irreducible representation $(\rho,V)$, $(\sigma,W)$, and $ (\tau,U)$ of $G$ the multiplicity $m_{\tau}$ of $\tau$ in $\rho\otimes\sigma$ is equal to either $0$ or $1$. For example, the group $\mathrm{O}(3)$ is easy reducible. Assume $m_{\tau}=1$. Let $\{\,\mathbf{e}^{\rho}_i\colon 1\leq i\leq\dim\rho\,\}$ be an orthonormal basis in $V$, and similarly for $ \sigma$ and $\tau$. There are two natural bases in the space $V\otimes W$. The \emph{coupled basis} is \begin{equation*} \{\,\mathbf{e}^{\rho}_i\otimes\mathbf{e}^{\sigma}_j\colon 1\leq i\leq\dim\rho,1\leq j\leq\dim\sigma\,\}. \end{equation*} The \emph{uncoupled basis} is \begin{equation*} \{\,\mathbf{e}^{\tau}_k\colon m_{\tau}=1,1\leq k\leq\dim\tau\,\}. \end{equation*}
In a coordinate form, the elements of the space $V\otimes W$ are matrices with $\dim\rho$ rows and $\dim\sigma$ columns. The coupled basis consists of matrices having $1$ in the $i$th row and $j$th column, and all other entries equal to $0$. Denote by $c^{k[i,j]}_{\tau[\rho,\sigma]}$ the coefficients of expansion of the vectors of uncoupled basis in the coupled basis: \begin{equation} \label{eq:ClebschGordan} \mathbf{e}^{\tau}_k=\sum_{i=1}^{\dim\rho}\sum^{\dim\sigma}_{j=1} c^{k[i,j]}_{ \tau[\rho,\sigma]}\mathbf{e}^{\rho}_i\otimes\mathbf{e}^{\sigma}_j. \end{equation} The numbers $c^{k[i,j]}_{\tau[\rho,\sigma]}$ are called the \emph{ Clebsch--Gordan coefficients} of the group~$G$. In the coupled basis, the vectors of the uncoupled basis are matrices $c^k_{\tau[\rho,\sigma]}$ with matrix entries $c^{k[i,j]}_{\tau[\rho,\sigma]}$, the \emph{Clebsch--Gordan matrices}.
\begin{example}[Irreducible unitary representations of $\mathrm{SU}(2)$] Let $\ell$ be a non-negative integer or half-integer (the half of an odd integer) number. Let $(\rho_0,\mathbb{C}^1)$ be the trivial representation, and let $(\rho_{1/2},\mathbb{C}^2)$ be the defining representation of $ \mathrm{SU}(2)$. The representation $(\rho_{\ell},\mathbb{C}^{2\ell+1})$ with $\ell=1$, $3/2$, $2$, \dots, is the symmetric tensor power $\rho_{\ell}= \mathsf{S}^{2\ell}(\rho_{1/2})$. No other irreducible unitary representations exist.
We may realise the representations $\rho_{\ell}$ in the space $\mathcal{P} ^{2\ell}(\mathbb{C}^2)$ of homogeneous polynomials of degree $2\ell$ in two formal complex variables $\xi$ and $\eta$ over the two-dimensional complex linear space $\mathbb{C}^2$. The group $\mathrm{SU}(2)$ consists of the matrices \begin{equation} \label{eq:su2} g= \begin{pmatrix} \alpha & \beta \\ -\overline{\beta} & \overline{\alpha} \end{pmatrix}
,\qquad\alpha,\beta\in\mathbb{C},\quad|\alpha|^2+|\beta|^2=1. \end{equation} The representation $\rho_{\ell}$ acts as follows: \begin{equation*} (\rho_{\ell}(g)h)(\xi,\eta)=h(\overline{\alpha}\xi-\beta\eta, \overline{\beta }\xi+\alpha\eta),\qquad h\in\mathcal{P}^{2\ell}(V). \end{equation*} Note that $\rho_{\ell}(-E)=E$ if and only if $\ell$ is integer.
The \emph{Wigner orthonormal basis} in the space $\mathcal{P}^{2\ell}(V)$ is as follows: \begin{equation} \label{eq:Wigner} \mathbf{e}_m(\xi,\eta):=(-1)^{\ell+m}\sqrt{\frac{(2\ell+1)!} { (\ell+m)!(\ell-m)!}}\xi^{\ell+m}\eta^{\ell-m},\qquad m=-\ell,-\ell+1,\dots, \ell. \end{equation} The matrix entries of the operators $\rho_{\ell}(g)$ in the above basis are called \emph{Wigner $D$ functions} and are denoted by $D^{\ell}_{mn}(g)$. The tensor product $\rho_{\ell_1}\otimes\rho_{\ell_2}$ is expanding as follows \begin{equation*} \rho_{\ell_1}(g)\otimes\rho_{\ell_2}(g)
=\sum^{\ell_1+\ell_2}_{\ell=|\ell_1-\ell_2|}\oplus\rho_{\ell}(g). \end{equation*} \end{example}
\begin{example}[Irreducible unitary representations of $\mathrm{SO}(3)$ and $ \mathrm{O}(3)$] Realise the linear space $\mathbb{R}^3$ with coordinates $x_{-1}$, $x_0$, and $x_1$ as the set of traceless Hermitian matrices over $\mathbb{C}^2$ with entries \begin{equation*} \begin{pmatrix} x_0 & x_1+\mathrm{i}x_{-1} \\ x_1-\mathrm{i}x_{-1} & -x_0 \end{pmatrix} . \end{equation*} The matrix \eqref{eq:su2} acts on the so realised $\mathbb{R}^3$ as follows: \begin{equation*} \pi(g) \begin{pmatrix} x_0 & x_1+\mathrm{i}x_{-1} \\ x_1-\mathrm{i}x_{-1} & -x_0 \end{pmatrix} :=g^* \begin{pmatrix} x_0 & x_1+\mathrm{i}x_{-1} \\ x_1-\mathrm{i}x_{-1} & -x_0 \end{pmatrix} g. \end{equation*} The mapping $\pi$ is a homomorphism of $\mathrm{SU}(2)$ onto $\mathrm{SO}(3)$ . The kernel of $\pi$ is $\pm E$. Assume that $(\rho,V)$ is an irreducible unitary representation of $\mathrm{SO}(3)$. Then $(\rho\circ\pi,V)$ is an irreducible unitary representation of $\mathrm{SU}(2)$ with kernel $\pm E$. Then we have $\rho\circ\pi=\rho_{\ell}$ for some integer $\ell$. In other words, every irreducible unitary representation $(\rho_{\ell},V)$ of $ \mathrm{SU}(2)$ with integer $\ell$ gives rise to an irreducible unitary representation of $\mathrm{SO}(3)$, and no other irreducible unitary representations exist. We denote the above representation of $\mathrm{SO}(3)$ again by $(\rho_{\ell},V)$.
Let $\mathrm{SO}(2)$ be the subgroup of $\mathrm{SO}(3)$ that leaves the vector $(0,0,1)^{\top}$ fixed. The restriction of $\rho_{\ell}$ to $\mathrm{ SO}(2)$ is equivalent to the direct sum of irreducible unitary representations $(\mathrm{e}^{\mathrm{i}m\varphi},\mathbb{C}^1)$, $-\ell\leq m\leq\ell$ of $\mathrm{SO}(2)$. Moreover, the space of the representation $( \mathrm{e}^{\mathrm{i}m\varphi},\mathbb{C}^1)$ is spanned by the vector $ \mathbf{e}_m(\xi,\eta)$ of the Wigner basis \eqref{eq:Wigner}.This is where their enumeration comes from.
The group $O(3)$ is the Cartesian product of its normal subgroups $\mathrm{SO }(3)$ and $\{I,-I\}$. The elements of $\mathrm{SO}(3)$ are rotations, \index{rotation} while the elements of the second component are reflections. \index{reflection} Therefore, any irreducible unitary representation of $ O(3) $ is the outer tensor product of some $(\rho_{\ell},V)$ by an irreducible unitary representation of $\{E,-E\}$. The latter group has two irreducible unitary representation: trivial $(\rho_+,\mathbb{C}^1)$ and determinant $(\rho_-,\mathbb{C}^1)$. Denote $\rho_{\ell,+}:=\rho_{\ell} \hat{\otimes}\rho_+$ and $\rho_{\ell,-}:=\rho_{\ell}\hat{\otimes}\rho_-$. These are all irreducible unitary representations of $O(3)$.
Introduce the coordinates on $\mathrm{SO}(3)$, the \emph{Euler angles}. Any rotation $g$ may be performed by three successive rotations:
\begin{itemize} \item rotation $g_0(\psi)$ about the $x_0$-axis through an angle $\psi$, $ 0\leq\psi<2\pi$;
\item rotation $g_{-1}(\theta)$ about the $x_{-1}$-axis through an angle $ \theta$, $0\leq\theta\leq\pi$,
\item rotation $g_0(\varphi)$ about the $x_0$-axis through an angle $\varphi$ , $0\leq\varphi<2\pi$. \end{itemize}
The angles $\psi$, $\theta$, and $\varphi$ are the \emph{Euler angles}. The Wigner $D$ functions are $D^{\ell}_{mn}(\varphi,\theta,\psi)$. The Wigner $D$ functions $D^{\ell}_{m0}$ do not depend on $\psi$ and may be written as $ D^{\ell}_{m0}(\varphi,\theta)$. The \emph{spherical harmonics} \index{spherical harmonics} $Y_{\ell}^m$ are defined by \begin{equation} \label{eq:sphericalcomplex} Y_{\ell}^m(\theta,\varphi):= \sqrt{\frac{2\ell+1}{4\pi}} \overline{D^{\ell}_{m0}(\varphi,\theta)}. \end{equation} Let $(r,\theta,\varphi)$ be the spherical coordinates in $\mathbb{R}^3$: \begin{equation} \label{eq:spherical} \begin{aligned} x_{-1}&=r\sin\theta\sin\varphi,\\ x_0&=r\cos\theta,\\ x_1&=r\sin\theta\cos\varphi. \end{aligned} \end{equation} The measure $\mathrm{d}\Omega:=\sin\theta\,\mathrm{d}\varphi\,\mathrm{d} \theta$ is the Lebesgue measure on the \emph{unit sphere}
\index{unit sphere} $S^2:=\{\,\mathbf{x}\in\mathbb{R}^3\colon\|\mathbf{x}
\|=1\,\}$. The spherical harmonics are orthonormal: \begin{equation*} \int_{S^2}Y_{\ell_1}^{m_1}(\theta,\varphi) \overline{Y_{\ell_2}^{m_2}(\theta,\varphi)} \,\mathrm{d}\Omega=\delta_{ \ell_1\ell_2}\delta_{m_1m_2}. \end{equation*} \end{example}
\begin{example}[Irreducible orthogonal representations of $\mathrm{SO}(3)$ and $\mathrm{O}(3)$] \label{ex:o3orthogonal}
The first model is as follows. For any polynomial $h\in\mathcal{P}^{2\ell}( \mathbb{C}^2)$ denote by $\overline{h}$ the polynomial whose coefficients are conjugate to those of $h$. Define the mapping $J\colon\mathcal{P} ^{2\ell}(\mathbb{C}^2)\to\mathcal{P}^{2\ell}(\mathbb{C}^2)$ as \begin{equation*} (Jh)(\xi,\eta):=\overline{h}(-\eta,\xi). \end{equation*} The orthonormal basis of eigenvectors of $J$ with eigenvalue~$1$ was proposed by \citet{MR1888117}. The vectors of the \emph{Gordienko basis} are as follows ($m\geq 1$): \begin{equation*} \begin{aligned} \mathbf{h}_{-m}(\xi,\eta)&:=\frac{(-\mathrm{i})^{\ell-1}}{\sqrt{2}} [(-1)^m\mathbf{e}_m(\xi,\eta)-\mathbf{e}_{-m}(\xi,\eta)],\\ \mathbf{h}_0(\xi,\eta)&:=(-\mathrm{i})^{\ell}\mathbf{e}_0(\xi,\eta),\\ \mathbf{h}_m(\xi,\eta)&:=-\frac{(-\mathrm{i})^{\ell}}{\sqrt{2}} [(-1)^m\mathbf{e}_m(\xi,\eta)+\mathbf{e}_{-m}(\xi,\eta)]. \end{aligned} \end{equation*} In this basis, the representations $\rho_{\ell,+}$ and $\rho_{\ell,-}$ become orthogonal and will be denoted by $U^{\ell g}$ and $U^{\ell u}$ (g by German \emph{gerade}, \index{representation!gerade} even, and u by \emph{ungerade}, \index{representation!ungerade} odd).
The Clebsh--Gordan coefficients of the groups $\mathrm{SO}(3)$ and $\mathrm{O }(3)$ with respect to the Gordienko basis were calculated by \citet{MR2078714}. We call them \emph{Godunov--Gordienko coefficients} and denote them by $g^{m[m_1,m_2]}_{ \ell[\ell_1,\ell_2]}$. An algorithm for calculation of the Godunov--Gordienko coefficients was proposed by \citet{MR3308053}. \end{example}
\begin{example}[Expansions of tensor representations of the group $\mathrm{O} (3)$] \label{ex:8}
Let $r\geq 2$ be a nonnegative integer, and let $\Sigma_r$ be the permutation group of the numbers $1$, $2$, \dots, $r$. The action \begin{equation*} \sigma\cdot(\mathbf{v}_1\otimes\cdots\otimes\mathbf{v}_r):=\mathbf{v} _{\sigma^{-1}(1)} \otimes\cdots\otimes\mathbf{v}_{\sigma^{-1}(r)},\qquad \sigma\in\Sigma_r, \end{equation*} may be extended by linearity to an orthogonal representation of the group $ \Sigma_r$ in the space $(\mathbb{R}^3)^{\otimes r}$, call it $(\rho_r,( \mathbb{R}^3)^{\otimes r})$. Consider the orthogonal representation $(\tau,( \mathbb{R}^3)^{\otimes r})$ of the group $\mathrm{O}(3)\times\Sigma_r$ acting by \begin{equation*} \tau(g,\sigma)(\mathsf{T}):=\rho^{\otimes r}(g)\rho_r(\sigma)(\mathsf{T} ),\qquad \mathsf{T}\in(\mathbb{R}^3)^{\otimes r}. \end{equation*}
The representation $(\tau,(\mathbb{R}^3)^{\otimes 2})$ of the group $\mathrm{ O}(3)\times\Sigma_2$ is the direct sum of three irreducible components \begin{equation*} \tau=[\rho_{0,+}(g)\tau_+(\sigma)]\oplus[\rho_{1,+}(g)\varepsilon(\sigma)] \oplus[\rho_{2,+}(g)\tau_+(\sigma)], \end{equation*} where $\tau_+$ is the trivial representation of the group $\Sigma_2$, while $ \varepsilon$ is its non-trivial representation. The one-dimensional space of the first component is the span of the identity matrix and consists of scalars. The three-dimensional space of the second component is the space $ \mathsf{\Lambda}^2(\mathbb{R}^3)$ of $3\times 3$ skew-symmetric matrices. Its elements are three-dimensional pseudo-vectors. Finally, the five-dimensional space of the third component consists of $3\times 3$ traceless symmetric matrices (deviators). The second component is $(\mathsf{ \Lambda}^2(g),\mathsf{\Lambda}^2(\mathbb{R}^3))$, and the direct sum of the first and third components is $(\mathsf{S}^2(g),\mathsf{S}^2(\mathbb{R}^3))$.
In general, the representation $(\tau,(\mathbb{R}^3)^{\otimes r})$ is reducible and may be represented as the direct sum of irreducible representations as follows: \begin{equation*} \tau(g,\sigma)=\sum_{\ell=0}^{r}\sum_{q=1}^{N^{\ell}_r}\oplus U^{\ell x}(g) \rho_q(\sigma), \end{equation*} where $q$ is called the \emph{seniority index} of the component $U^{\ell x}(g)\rho_q(\sigma)$, see \citet{PhysRevA.25.2647}, and where $x=g$ for even $r$ and $x=u$ for odd $r$. The number $N^{\ell}_r$ of copies of the representation $U^{\ell x}$ is given by \begin{equation*} N^{\ell}_r=\sum_{k=0}^{\llcorner(r-\ell)/3\lrcorner}\binom{r}{k} \binom{ 2r-3k-\ell-2}{r-2}. \end{equation*} \end{example}
\section{Classical invariant theory}
\label{ap:invariant}
Let $V$ and $W$ be two finite-dimensional linear spaces over the same field $ \mathbb{K}$. Let $(\rho,V)$ and $(\sigma,W)$ be two representations of a group~$G$. A mapping $h\colon W\to V$ is called a \emph{covariant} or \emph{ form-invariant} or a \emph{covariant tensor} of the pair of representations $ (\rho,V)$ and $(\sigma,W)$, if \begin{equation*} h(\sigma(g)\mathbf{w})=\rho(g)h(\mathbf{w}),\qquad g\in G. \end{equation*} In other words, the diagram \begin{equation*} \xymatrix{ W\ar[r]^h\ar[d]^{\sigma} & V\ar[d]^{\rho} \\ W\ar[r]^h & V } \end{equation*} is commutative.
If $V=\mathbb{F}^1$ and $\rho$ is the trivial representation of $G$, then the corresponding covariant scalars are called \emph{absolute invariants} (or just invariants) of the representation $(\sigma,W)$, hence the name \emph{Invariant Theory}. Note that the set $\mathbb{K}[W]^G$ of invariants is an \emph{algebra} over the field $\mathbb{K}$, that is, a linear space over $\mathbb{F}$ with bilinear multiplication operation and a multiplication identity $1$. The product of a covariant $h\colon W\to V$ and an invariant $f\in\mathbb{K}[W]^G$ is again a covariant. In other words, the covariant tensors of the pair of representations $(\rho,V)$ and $(\sigma,W)$ form a \emph{module} over the algebra of invariants of the representation $ (\sigma,W)$.
A mapping $h\colon W\to V$ is called \emph{homogeneous polynomial mapping of degree~$d$} if for any $\mathbf{v}\in V$ the mapping $\mathbf{w}\mapsto (h( \mathbf{w}),\mathbf{v})$ is a homogeneous polynomial of degree~$d$ in $\dim W $ variables. The mapping $h$ is called a \emph{polynomial covariant of degree $d$} if it is homogeneous polynomial mapping of degree~$d$ and a covariant.
Let $(\sigma,W)$ be the defining representation of $G$, and $(\rho,V)$ be the $r$th tensor power of the defining representation. The corresponding covariant tensors are said to have \emph{an order}~$r$. The covariant tensors of degree~$0$ and of order~$r$ of the group $\mathrm{O}(n)$ are known as \emph{isotropic tensors}.
The algebra of invariants and the module of covariant tensors were an object of intensive research. The first general result was obtained by \citet{Gordan1868}. He proved that for any finite-dimensional complex representation of the group $G=\mathrm{SL}(2,\mathbb{C})$ the algebra of invariants and the module of covariant tensors are finitely generated. In other words, there exists an \emph{integrity basis}: a finite set of invariant homogeneous polynomials $I_1$, \dots, $I_N$ such that every polynomial invariant can be written as a polynomial in $I_1$, \dots, $I_N$. An integrity basis is called \emph{minimal} if none of its elements can be expressed as a polynomial in the others. A minimal integrity basis is not necessarily unique, but all minimal integrity bases have the same amount of elements of each degree.
The algebra of invariants is not necessarily free. Some polynomial relations between generators, called \emph{syzygies} may exist.
The importance of polynomial invariants can be explained by the following result. Let $G$ be a closed subgroup of the group $\mathrm{O}(3)$, the group of symmetries of a material. Let $(\rho,\mathsf{V})$, $(\rho_1,\mathsf{V}_1)$ , \dots, $(\rho_N,\mathsf{V}_N)$ be finitely many orthogonal representations of $G$ in real finite-dimensional spaces. Let $\mathsf{T}\colon\mathsf{V} _1\oplus\cdots\oplus\mathsf{V}_N\to\mathsf{V}$ be an \emph{arbitrary} (say, measurable) covariant of the pair $\rho$ and $\rho_1\oplus\cdots\oplus\rho_N$ . Let $\{\,I_k\colon 1\leq k\leq K\,\}$ be an integrity basis for \emph{ polynomial} invariants of the representation $\rho$, and let $\{\,\mathsf{T} _l\colon 1\leq l\leq L\,\}$ be an integrity basis for \emph{polynomial} covariant tensors of the pair $\rho$ and $\rho_1\oplus\cdots\oplus\rho_N$. Following \citet{MR0171421}, we call $\mathsf{T}_l$ \emph{basic covariant tensors}.
\begin{theorem}[\citet{MR0171421}] \label{th:Wineman-Pipkin} A function $\mathsf{T}\colon\mathsf{V} _1\oplus\cdots\oplus\mathsf{V}_N\to\mathsf{V}$ is a measurable covariant of the pair $\rho$ and $\rho_1\oplus\cdots\oplus\rho_N$ if and only if it has the form \begin{equation*} \mathsf{T}(\mathsf{T}_1,\dots,\mathsf{T}_N)=\sum_{l=1}^{L}\varphi_l(I_1, \dots,I_K) \mathsf{T}_l(\mathsf{T}_1,\dots,\mathsf{T}_N), \end{equation*} where $\varphi_l$ are real-valued measurable functions of the elements of an integrity basis. \end{theorem}
In 1939 in the first edition of \citet{MR1488158} Hermann Weyl proved that any polynomial covariant of degree~$d$ and of order~$r$ of the group $ \mathrm{O}(n)$ is a linear combination of products of Kronecker's deltas $ \delta_{ij}$ and second degree homogeneous polynomials $x_ix_j$.
\end{document} | arXiv |
Spreading speeds of rabies with territorial and diffusing rabid foxes
DCDS-B Home
Transition between monostability and bistability of a genetic toggle switch in Escherichia coli
doi: 10.3934/dcdsb.2019247
Influence of feedback controls on the global stability of a stochastic predator-prey model with Holling type Ⅱ response and infinite delays
Kexin Wang ,
School of Mathematics, Renmin University of China, Beijing 100872, China
* Corresponding author: Kexin Wang
Received April 2019 Revised July 2019 Published November 2019
Fund Project: The first author is supported by NSFC grant No. 71531012
Figure(8)
In this work a stochastic Holling-Ⅱ type predator-prey model with infinite delays and feedback controls is investigated. By constructing a Lyapunov function, together with stochastic analysis approach, we obtain that the stochastic controlled predator-prey model admits a unique global positive solution. We then utilize graphical method and stability theorem of stochastic differential equations to investigate the globally asymptotical stability of a unique positive equilibrium for the stochastic controlled predator-prey system. If the stochastic predator-prey system is globally stable, then we show that using suitable feedback controls can alter the position of the unique positive equilibrium and retain the stable property. If the predator-prey system is destabilized by large intensities of white noises, then by choosing the appropriate values of feedback control variables, we can make the system reach a new stable state. Some examples are presented to verify our main results.
Keywords: Global stability, feedback control, stochastic perturbation, predator-prey, infinite delay.
Mathematics Subject Classification: Primary: 34D23, 58J37; Secondary: 92D25, 93B52.
Citation: Kexin Wang. Influence of feedback controls on the global stability of a stochastic predator-prey model with Holling type Ⅱ response and infinite delays. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019247
M. Aizerman and F. Gantmacher, Absolute Stability of Regulator Systems, Holden Day, San Francisco, 1964. Google Scholar
L. Arnold, W. Horsthemke and J. Stucki, The influence of external real and white noise on the Lotka-Volterra model, Biomedical J., 21 (1979), 451-471. doi: 10.1002/bimj.4710210507. Google Scholar
L. Chang, G. Sun, Z. Wang and Z. Jin, Rich dynamics in a spatial predator-prey model with delay, Appl. Math. Comput., 256 (2015), 540-550. doi: 10.1016/j.amc.2015.01.052. Google Scholar
F. Chen, Global stability of a single species model with feedback control and distributed time delay, Appl. Math. Comput., 178 (2006), 474-479. doi: 10.1016/j.amc.2005.11.062. Google Scholar
L. Chen and F. Chen, Global stability of a Leslie-Gower predator-prey model with feedback controls, Appl. Math. Lett., 22 (2009), 1330-1334. doi: 10.1016/j.aml.2009.03.005. Google Scholar
L. Chen, F. Chen and L. Chen, Qualitative analysis of a predator-prey model with Holling type Ⅱ functional response incorporating a constant prey refuge, Nonlinear Anal. Real World Appl., 11 (2010), 246-252. doi: 10.1016/j.nonrwa.2008.10.056. Google Scholar
L. Chen and J. Chen, Nonlinear Biodynamics Systems, Science Press of China, Beijing, 1993. Google Scholar
P. Chesson and R. Warner, Environmental variability promotes coexistence in lottery competitive systems, Amer. Natur., 117 (1981), 923-943. doi: 10.1086/283778. Google Scholar
Y. Fan and L. Wang, Global asymptotical stability of a Logistic model with feedback control, Nonlinear Anal. Real World Appl., 11 (2010), 2686-2697. doi: 10.1016/j.nonrwa.2009.09.016. Google Scholar
T. Faria, Stability and extinction for Lotka-Volterra systems with infinite delay, J. Dynam. Differential Equations, 22 (2010), 299-324. doi: 10.1007/s10884-010-9166-1. Google Scholar
S. Gakkhar and A. Singh, Complex dynamics in a prey-predator system with multiple delays, Commun. Nonlinear Sci. Numer. Simul., 17 (2011), 914-929. doi: 10.1016/j.cnsns.2011.05.047. Google Scholar
T. C. Gard, Persistence in stochastic food web models, Bull. Math. Biol., 46 (1984), 357-370. doi: 10.1007/BF02462011. Google Scholar
T. C. Gard, Stability for multispecies population models in random environments, Nonlinear Anal., 10 (1986), 1411-1419. doi: 10.1016/0362-546X(86)90111-2. Google Scholar
K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Mathematics and Its Applications, 74, Kluwer Academic, Dordrecht, 1992. doi: 10.1007/978-94-015-7920-9. Google Scholar
K. Gopalsamy and P. Weng, Feedback regulation of logistic growth, Internat. J. Math. Math. Sci., 16 (1993), 177-192. doi: 10.1155/S0161171293000213. Google Scholar
D. J. Higham, An algorithmic introduction to numerical simulation of stochastic differential equations, SIAM Rev., 43 (2001), 525-546. doi: 10.1137/S0036144500378302. Google Scholar
C. S. Holling, Some characteristics of simple types of predation and parasitism, Can. Entomol., 91 (1959), 385-398. doi: 10.4039/Ent91385-7. Google Scholar
C. S. Holling, The Functional Response of Predators to Prey Density and its Role in Mimicry and Population Regulation, Memoirs of the Entomological Society of Canada, 97 (1965), 5-60. doi: 10.4039/entm9745fv. Google Scholar
C. Ji, D. Jiang and N. Shi, Analysis of a predator-prey model with modified Leslie-Gower and Holling-type Ⅱ schemes with stochastic perturbation, J. Math. Anal. Appl., 359 (2009), 482-498. doi: 10.1016/j.jmaa.2009.05.039. Google Scholar
T. K. Kara and A. Batabyal, Stability and bifurcation of a prey-predator model with time delay, Comptes Rendus Biologies, 332 (2009), 642-651. doi: 10.1016/j.crvi.2009.02.002. Google Scholar
W. Krajewski and U. Viaro, Locating the equilibrium points of a predator-prey model by means of affine state feedback, J. Franklin. Inst., 345 (2008), 489-498. doi: 10.1016/j.jfranklin.2008.02.001. Google Scholar
[22] S. Lefschetz, Stability of Nonlinear Control Systems, Mathematics in Science and Engineering, 13, Academic Press, New York, 1965. doi: 10.1002/zamm.19660460515. Google Scholar
A. Levin, Dispersion and population interactions, Amer. Natur., 108 (1974), 207-228. doi: 10.1086/282900. Google Scholar
S. Li, J. Wu and Y. Dong, Effects of a degeneracy in a diffusive predator-prey model with Holling II functional response, Nonlinear Anal. Real World Appl., 43 (2018), 78-95. doi: 10.1016/j.nonrwa.2018.02.003. Google Scholar
Z. Li, M. Han and F. Chen, Influence of feedback controls on an autonomous Lotka-Volterra competitive system with infinite delays, Nonlinear Anal. Real World Appl., 14 (2013), 402-413. doi: 10.1016/j.nonrwa.2012.07.004. Google Scholar
M. Liu and K. Wang, Global asymptotic stability of a stochastic Lotka-Volterra model with infinite delays, Commun. Nonlinear Sci. Numer. Simul., 17 (2012), 3115-3123. doi: 10.1016/j.cnsns.2011.09.021. Google Scholar
M. Liu and K. Wang, Global stability of a nonlinear stochastic predator-prey system with Beddington-DeAngelis functional response, Commun Nonlinear Sci Numer Simulat, 16 (2011), 1114-1121. doi: 10.1016/j.cnsns.2010.06.015. Google Scholar
Q. Liu, Y. Liu and X. Pan, Global stability of a stochastic predator-prey system with infinite delays, Appl. Math. Comput., 235 (2014), 1-7. doi: 10.1016/j.amc.2014.02.091. Google Scholar
X. Mao, Stochastic Differential Equations and Their Applications, Horwood Publishing, Chichester, 1997. Google Scholar
X. Mao, Stochastic stabilisation and destabilisation, Syst. Control Lett., 23 (1994), 279-290. doi: 10.1016/0167-6911(94)90050-7. Google Scholar
X. Mao, S. Sabais and E. Renshaw, Asymptotic behavior of the stochastic Lotka-Volterra model, J. Math. Anal. Appl., 287 (2003), 141-156. doi: 10.1016/S0022-247X(03)00539-0. Google Scholar
R. M. May, Time-delay versus stability in population models with two and three trophic levels, Ecology, 54 (1973), 315-325. doi: 10.2307/1934339. Google Scholar
E. A. McGehee and E. Peacock-López, Turing patterns in a modified Lotka-Volterra model, Phys. Lett. A, 342 (2005), 90-98. doi: 10.1016/j.physleta.2005.04.098. Google Scholar
M. L. Rosenzweig and R. H. MacArthur, Graphical representation and stability conditions of predator-prey interactions, Am. Naturalist, 97 (1963), 209-223. doi: 10.1086/282272. Google Scholar
Y. Saito, The necessary and sufficient condition for global stability of a Lotka-Volterra cooperative or competition system with delays, J. Math. Anal. Appl., 268 (2002), 109-124. doi: 10.1006/jmaa.2001.7801. Google Scholar
Y. Takeuchi, N. H. Dub, N. T. Hieu and K. Sato, Evolution of predator-prey systems described by a Lotka-Volterra equation under random environment, J. Math. Anal. Appl., 323 (2006), 938-957. doi: 10.1016/j.jmaa.2005.11.009. Google Scholar
Q. Wang, Z. Ji, Z. Wang, M. Ding and H. Zhang, Existence and attractivity of a periodic solution for a ratio-dependent Leslie system with feedback controls, Nonlinear Anal. Real World Appl., 12 (2011), 24-33. doi: 10.1016/j.nonrwa.2010.05.032. Google Scholar
[38] K. Yang, Delay Differential Equations with Applications in Population Dynamics, Mathematics in Science and Engineering, Academic Press, Boston, 1993. Google Scholar
K. Yang, Global stability for infinite delay Lotka-Volterra type systems, J. Differential Equations, 103 (1993), 221-246. doi: 10.1006/jdeq.1993.1048. Google Scholar
K. Yang, Z. Miao, F. Chen and X. Xie, Attractivity of saturated equilibria for Lotka-Volterra systems with infinite delays and feedback controls, J. Math. Anal. Appl., 435 (2016), 874-888. Google Scholar
R. Yang, M. Liu and C. Zhang, A delayed-diffusive predator-prey model with a ratio-dependent functional response, Commun. Nonlinear Sci. Numer. Simul., 53 (2017), 94-110. doi: 10.1016/j.cnsns.2017.04.034. Google Scholar
F. Yin and Y. Li, Positive periodic solutions of a single species model with feedback regulation and distributed time delay, Appl. Math. Comput., 153 (2004), 475-484. doi: 10.1016/S0096-3003(03)00648-9. Google Scholar
Q. Zhang, X. Wen, D. Jiang and Z. Liu, The stability of a predator-prey system with linear mass-action functional response perturbed by white noise, Adv. Differ. Equ., (2016). doi: 10.1186/s13662-016-0776-8. Google Scholar
Figure 1. The parabola $ l_1 $ (green line) and the hyperbola $ l_2 $(blue line) of 9 provided that the condition 7 holds
Figure 2. The region $ R_0 $ where positive equilibria $ (x^{*}, y^{*}) $ will occur under feedback controls
Figure 3. Dynamic behavior of the solution $ (x(t), y(t))^{T} $ of system 20 with the initial condition $ (\varphi_1(\theta), \varphi_2(\theta)) = (1.2 e^{\theta}, 0.8 e^{\theta}), \theta \in (-\infty, 0] $
Figure 4. The region $ R_0 $ where positive equilibria of system 20 will occur under feedback controls
Figure 5. Dynamic behavior of the solution $ (x(t), y(t))^{T} $ of system 20 with small perturbations $ \sigma_1 = \sigma_2 = 0.1 $ and the initial condition $ (\varphi_1(\theta), \varphi_2(\theta)) = (1.2 e^{\theta}, 0.8 e^{\theta}), \theta \in (-\infty, 0] $
Figure 6. Dynamic behavior of the solution $ (x(t), y(t), u_1(t), u_2(t))^{T} $ of system 21 with the initial condition $ (\varphi_1(\theta), \varphi_2(\theta), u_1(0), u_2(0)) = (1.2 e^{\theta}, 0.8 e^{\theta}, 1,1), \theta \in (-\infty, 0] $
Figure 7. Dynamic behavior of the solution $ (x(t), y(t))^{T} $ of system 20 with big perturbations $ \sigma_1 = \sigma_2 = 1 $ and the initial condition $ (\varphi_1(\theta), \varphi_2(\theta)) = (1.2 e^{\theta}, 0.8 e^{\theta}), \theta \in (-\infty, 0] $
Haiying Jing, Zhaoyu Yang. The impact of state feedback control on a predator-prey model with functional response. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 607-614. doi: 10.3934/dcdsb.2004.4.607
Zhong Li, Maoan Han, Fengde Chen. Global stability of a predator-prey system with stage structure and mutual interference. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 173-187. doi: 10.3934/dcdsb.2014.19.173
Yinshu Wu, Wenzhang Huang. Global stability of the predator-prey model with a sigmoid functional response. Discrete & Continuous Dynamical Systems - B, 2020, 25 (3) : 1159-1167. doi: 10.3934/dcdsb.2019214
Qizhen Xiao, Binxiang Dai. Heteroclinic bifurcation for a general predator-prey model with Allee effect and state feedback impulsive control strategy. Mathematical Biosciences & Engineering, 2015, 12 (5) : 1065-1081. doi: 10.3934/mbe.2015.12.1065
Guanqi Liu, Yuwen Wang. Stochastic spatiotemporal diffusive predator-prey systems. Communications on Pure & Applied Analysis, 2018, 17 (1) : 67-84. doi: 10.3934/cpaa.2018005
S. Nakaoka, Y. Saito, Y. Takeuchi. Stability, delay, and chaotic behavior in a Lotka-Volterra predator-prey system. Mathematical Biosciences & Engineering, 2006, 3 (1) : 173-187. doi: 10.3934/mbe.2006.3.173
Rui Xu. Global convergence of a predator-prey model with stage structure and spatio-temporal delay. Discrete & Continuous Dynamical Systems - B, 2011, 15 (1) : 273-291. doi: 10.3934/dcdsb.2011.15.273
Hongxiao Hu, Liguang Xu, Kai Wang. A comparison of deterministic and stochastic predator-prey models with disease in the predator. Discrete & Continuous Dynamical Systems - B, 2019, 24 (6) : 2837-2863. doi: 10.3934/dcdsb.2018289
Miljana JovanoviĆ, Marija KrstiĆ. Extinction in stochastic predator-prey population model with Allee effect on prey. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2651-2667. doi: 10.3934/dcdsb.2017129
Shanshan Chen, Jianshe Yu. Stability and bifurcation on predator-prey systems with nonlocal prey competition. Discrete & Continuous Dynamical Systems - A, 2018, 38 (1) : 43-62. doi: 10.3934/dcds.2018002
Canan Çelik. Dynamical behavior of a ratio dependent predator-prey system with distributed delay. Discrete & Continuous Dynamical Systems - B, 2011, 16 (3) : 719-738. doi: 10.3934/dcdsb.2011.16.719
Marcos Lizana, Julio Marín. On the dynamics of a ratio dependent Predator-Prey system with diffusion and delay. Discrete & Continuous Dynamical Systems - B, 2006, 6 (6) : 1321-1338. doi: 10.3934/dcdsb.2006.6.1321
Gianni Gilioli, Sara Pasquali, Fabrizio Ruggeri. Nonlinear functional response parameter estimation in a stochastic predator-prey model. Mathematical Biosciences & Engineering, 2012, 9 (1) : 75-96. doi: 10.3934/mbe.2012.9.75
Nguyen Huu Du, Nguyen Thanh Dieu, Tran Dinh Tuong. Dynamic behavior of a stochastic predator-prey system under regime switching. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3483-3498. doi: 10.3934/dcdsb.2017176
Xiaoling Zou, Dejun Fan, Ke Wang. Stationary distribution and stochastic Hopf bifurcation for a predator-prey system with noises. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1507-1519. doi: 10.3934/dcdsb.2013.18.1507
Aniello Buonocore, Luigia Caputo, Enrica Pirozzi, Amelia G. Nobile. A non-autonomous stochastic predator-prey model. Mathematical Biosciences & Engineering, 2014, 11 (2) : 167-188. doi: 10.3934/mbe.2014.11.167
Leonid Braverman, Elena Braverman. Stability analysis and bifurcations in a diffusive predator-prey system. Conference Publications, 2009, 2009 (Special) : 92-100. doi: 10.3934/proc.2009.2009.92
Sílvia Cuadrado. Stability of equilibria of a predator-prey model of phenotype evolution. Mathematical Biosciences & Engineering, 2009, 6 (4) : 701-718. doi: 10.3934/mbe.2009.6.701
Wei Feng, Michael T. Cowen, Xin Lu. Coexistence and asymptotic stability in stage-structured predator-prey models. Mathematical Biosciences & Engineering, 2014, 11 (4) : 823-839. doi: 10.3934/mbe.2014.11.823
Antoni Leon Dawidowicz, Anna Poskrobko. Stability problem for the age-dependent predator-prey model. Evolution Equations & Control Theory, 2018, 7 (1) : 79-93. doi: 10.3934/eect.2018005
Kexin Wang | CommonCrawl |
High temperature singlet-based magnetism from Hund's rule correlations
Spin waves and spin-state transitions in a ruthenate high-temperature antiferromagnet
H. Suzuki, H. Gretarsson, … B. Keimer
First principles calculations of the structural, electronic, magnetic, and thermodynamic properties of the Nd2MgGe2 and Gd2MgGe2 intermetallic compounds
S. Menouer, O. Miloud Abid, … U. Schwingenschlögl
Dynamic fingerprint of fractionalized excitations in single-crystalline Cu3Zn(OH)6FBr
Ying Fu, Miao-Ling Lin, … Jia-Wei Mei
Orbital selective Kondo effect in heavy fermion superconductor UTe2
Byungkyun Kang, Sangkook Choi & Hyunsoo Kim
Emergent bound states and impurity pairs in chemically doped Shastry-Sutherland system
Zhenzhong Shi, William Steinhardt, … Sara Haravifard
Evolution of field induced magnetic phase attributed to higher order magnetic moments in TbVO4
Dheeraj Ranaut & K. Mukherjee
Proximate ferromagnetic state in the Kitaev model material α-RuCl3
H. Suzuki, H. Liu, … B. Keimer
Electronic and magnetic properties of the RuX3 (X = Cl, Br, I) family: two siblings—and a cousin?
David A. S. Kaib, Kira Riedl, … Roser Valentí
Concept and realization of Kitaev quantum spin liquids
Hidenori Takagi, Tomohiro Takayama, … Stephen E. Nagler
Lin Miao1,2,
Rourav Basak1,
Sheng Ran3,
Yishuai Xu1,
Erica Kotta1,
Haowei He1,
Jonathan D. Denlinger2,
Yi-De Chuang2,
Y. Zhao ORCID: orcid.org/0000-0002-7331-791X3,4,
Z. Xu3,
J. W. Lynn3,
J. R. Jeffries5,
S. R. Saha3,6,
Ioannis Giannakis7,
Pegor Aynajian7,
Chang-Jong Kang ORCID: orcid.org/0000-0003-2895-48888,
Yilin Wang9,
Gabriel Kotliar8,
Nicholas P. Butch3,6 &
L. Andrew Wray1
Nature Communications volume 10, Article number: 644 (2019) Cite this article
Magnetic properties and materials
Phase transitions and critical phenomena
Uranium compounds can manifest a wide range of fascinating many-body phenomena, and are often thought to be poised at a crossover between localized and itinerant regimes for 5f electrons. The antiferromagnetic dipnictide USb2 has been of recent interest due to the discovery of rich proximate phase diagrams and unusual quantum coherence phenomena. Here, linear-dichroic X-ray absorption and elastic neutron scattering are used to characterize electronic symmetries on uranium in USb2 and isostructural UBi2. Of these two materials, only USb2 is found to enable strong Hund's rule alignment of local magnetic degrees of freedom, and to undergo distinctive changes in local atomic multiplet symmetry across the magnetic phase transition. Theoretical analysis reveals that these and other anomalous properties of the material may be understood by attributing it as the first known high temperature realization of a singlet ground state magnet, in which magnetism occurs through a process that resembles exciton condensation.
Uranium compounds can feature a fascinating interplay of strongly correlated and itinerant electronic physics, setting the stage for emergent phenomena such as quantum criticality, heavy fermion superconductivity, and elusive hidden order states1,2,3,4,5,6,7,8,9,10,11,12,13. The isostructural uranium dipnictides UX2 (X = As, Sb, Bi) present a compositional series in which high near-neighbor uranium-uranium coordination supports robust planar antiferromagnetism (TN~200K, see Fig. 1a, b)7,8. Of these, the USb2 variant has received close attention due to the discovery of several unexplained low temperature quantum coherence phenomena at T < 100K7,9,10,11, and a remarkably rich phase diagram incorporating quantum critical and tricritical points as a function of pressure and magnetic field12,13. However, the effective valence state of uranium and the resulting crystal field state basis defining the f-electron component of local moment and Kondo physics have not been identified.
Singlet ground state magnetism and the ligand cage of U(Bi/Sb)2. a, b The U(Sb/Bi)2 crystal structure is shown with spins indicating the antiferromagnetic structure in UBi2 (TN~180 K) and USb2 (TN~203 K). The uranium atoms have 9-fold ligand coordination with base (S1), middle (S2), and pinnacle (S3) ligand layers as labeled in a with respect to the central uranium atom. c, d In-plane ferromagnetic nucleation regions are circled in c doublet and d singlet ground state magnetic systems. The singlet crystal field ground state has no local moment, causing much of the lattice to have little or no magnetic polarization
Here, X-ray absorption (XAS) at the uranium O-edge and numerical modeling are used to evaluate the low energy atomic multiplet physics of USb2 and UBi2, revealing only USb2 to have significant Hund's rule correlations. These investigations yield the prediction that USb2 must be a uniquely robust realization of a singlet-ground-state magnet, in which magnetic moments appear via the occupation of low-energy excited states on a non-magnetic background (Fig. 1c). The evolution of crystal field symmetries and magnetic ordered moment across the antiferromagnetic phase transition is measured with linear dichroism (XLD) and elastic neutron scattering, confirming that the magnetic transition in USb2 occurs through an exotic process that resembles exciton condensation.
Electron configuration of uranium in UBi2 and USb2
Unlike the case with stronger ligands such as oxygen and chlorine, there is no unambiguously favored effective valence picture for uranium pnictides. Density functional theory suggests that the charge and spin density on uranium are significantly modified by itinerancy effects14,15 (see also Supplementary Note 1), as we will discuss in the analysis below, making it difficult to address this question from secondary characteristics such as the local or ordered moment. However, analyses in 2014–2016 have shown that resonant fine structure at the O-edge (5d→5 f transition) provides a distinctive fingerprint for identifying the nominal valence state and electronic multiplet symmetry on uranium16,17,18,19. X-ray absorption spectra (XAS) of UBi2 and USb2 were measured by the total electron yield (TEY) method, revealing curves that are superficially similar but quantitatively quite different (Fig. 2a). Both curves have prominent resonance features at hυ~100 and ~113 eV that are easily recognized as the 'R1' and 'R2' resonances split by the G-series Slater integrals16. Within models, these resonances are narrowest and most distinct for 5 f0 systems, and merge as 5f electron number increases, becoming difficult to distinguish beyond 5f2 (see Fig. 2a (bottom) simulations). The USb2 sample shows absorption features that closely match the absorption curve of URu2Si216, and are associated with the J = 4 ground states of a 5f2 multiplet. This correspondence can be drawn with little ambiguity by noting a one-to-one feature correspondence with the fine structure present in a second derivative analysis (SDI, see Fig. 2b).
XAS fine structure and valence of UBi2 and USb2. a The x-ray absorption of UBi2 and USb2 on the O-edge of uranium is compared with (bottom) multiplets simulations for 5 f1 (U5+), 5f2 (U4+), and 5f3 (U3+). b A negative second derivative (SDI) of the XAS data and simulated curves, with drop-lines showing feature correspondence. Noise in the SDI has an amplitude comparable to the plotted line thickness, and all features identified with drop-lines were consistently reproducible when moving the beam spot. Prominent absorption features are labeled peak-A (UBi2, hυ = 99.2 eV), peak-B (USb2, hυ = 98.2 eV), and peak-C (USb2, hυ = 100.8 eV). Source data are provided as a Source Data file
The R1 and R2 resonances of UBi2 are more broadly separated than in USb2, and the lower energy R1 feature of UBi2 is missing the prominent leading edge peak at hυ~98.2 eV (peak-B), which is a characteristic feature of 5f2 uranium16,17. The UBi2 spectrum shows relatively little intensity between R1 and R2, and the higher energy R2 resonance has a much sharper intensity onset. All of these features are closely consistent with expectations for a 5f1 multiplet, and the SDI curve in Fig. 2b reveals that the R1 fine structure of UBi2 is a one-to-one match for the 5f1 multiplet. We note that a close analysis is not performed for R2 as it is influenced by strong Fano interference (see Supplementary Note 2). The lack of prominent 5f2 multiplet features suggests that the 5f1 multiplet state is quite pure, and the measurement penetration depth of several nanometers (see Methods) makes it unlikely that this distinction between UHV-cleaved UBi2 and USb2 originates from surface effects. However, the picture for UBi2 is complicated by a very rough cleaved surface, which our STM measurements (see Supplementary Note 3) find to incorporate at least two non-parallel cleavage planes. Surface oxidation in similar compounds is generally associated with the formation of UO2 (5f2) and does not directly explain the observation of a 5f1 state.
We note that even with a clean attribution of multiplet symmetries, it is not at all clear how different the f-orbital occupancy will be for these materials, or what magnetic moment should be expected when the single-site multiplet picture is modified by band-structure-like itinerancy10,11 (see also Supplementary Note 1). The effective multiplet states identified by shallow-core-level spectroscopy represent the coherent multiplet (or angular moment) state on the scattering site and its surrounding ligands, but are relatively insensitive to the degree of charge transfer from the ligands20.
Nonetheless, the 5f1 and 5f2 nominal valence scenarios have very different physical implications. A 5f1 nominal valence state does not incorporate multi-electron Hund's rule physics21,22 (same-site multi-electron spin alignment), and must be magnetically polarizable with non-zero pseudospin in the paramagnetic state due to Kramer's degeneracy (pseudospin ½ for the UBi2 crystal structure). By contrast in the 5f2 case one expects to have a Hund's metal with strong alignment of the 2-electron moment (see dynamical mean field theory (DFT + DMFT) simulation below), and the relatively low symmetry of the 9-fold ligand coordination around uranium strongly favors a non-magnetic singlet crystal electric field (CEF) ground state with Γ1 symmetry, gapped from other CEF states by roughly 1/3 the total spread of state energies in the CEF basis (see Table 1). The Γ1 state contains equal components of diametrically opposed large-moment |mJ = + 4 > and |mJ = -4 > states, and is poised with no net moment by the combination of spin-orbit and CEF interactions. This unusual scenario in which magnetic phenomena emerge in spite of a non-magnetic singlet ground state has been considered in the context of mean-field models23,24,25,26, and appears to be realized at quite low temperatures (typically T < ~10 K) in a handful of rare earth compounds. The resulting magnetic phases are achieved by partially occupying low-lying magnetic excited states, and have been characterized as spin exciton condensates23.
Table 1 The CEF energy hierarchy in USb2
Multiplet symmetry from XLD versus temperature
To address the role of low-lying spin excitations, it is useful to investigate the interplay between magnetism and the occupied multiplet symmetries by measuring the polarization-resolved XAS spectrum as a function of temperature beneath the magnetic transition. Measurements were performed with linear polarization set to horizontal (LH, near z-axis) and vertical (LV, a–b plane) configurations. In the case of UBi2, the XAS spectrum shows little change as a function of temperature from 15 to 210K (Fig. 3a, b), and temperature dependence in the dichroic difference (XLD, Fig. 3b) between these linear polarizations is inconclusive, being dominated by noise from the data normalization process (see Methods and Supplementary Note 4). This lack of temperature dependent XLD is consistent with conventional magnetism from a doublet ground state. The XLD matrix elements do not distinguish between the up- and down-moment states of a Kramers doublet, and so strong XLD is only expected if the magnetic phase incorporates higher energy multiplet symmetries associated with excitations in the paramagnetic state.
Temperature dependence of occupied f-electron symmetries. a The R1 XAS spectrum of UBi2 is shown for linear horizontal (LH) and vertical (LV) polarizations. b The dichroic difference (LH-LV) is shown with temperature distinguished by a rainbow color order (15K (purple), 40K (blue), 80K (green), 120K (yellow), and 210K (red)). c, d Analogous spectra are shown for USb2. Arrows in d show the monotonic trend direction on the peak-B and peak-C resonances as temperature increases. e, f Simulations for 5f2 with mean-field magnetic interactions. g A summary of the linear dichroic difference on the primary XAS resonances of USb2, as a percentage of total XAS intensity at the indicated resonance energy (hυ = 98.2 eV for peak-B, and hυ = 100.8 eV for peak-C). Error bars represent a rough upper bound on the error introduced by curve normalization. h The linear dichroic difference trends from the mean field model. Source data are provided as a Source Data file. Shading in g, h indicates the onset of a magnetic ordered moment
By contrast, the temperature dependence of USb2 shows a large monotonic progression (Fig. 3c, d), suggesting that the atomic symmetry changes significantly in the magnetic phase. The primary absorption peak (hυ~98.2 eV, peak-B) is more pronounced under the LH-polarization at low-temperature, and gradually flattens as temperature increases. The LV polarized spectrum shows the opposite trend, with a sharper peak-B feature visible at high temperature, and a less leading edge intensity at low temperature. This contrasting trend is visible in the temperature dependent XLD in Fig. 3d, as is a monotonic progression with the opposite sign at peak-C (hυ~100.8 eV).
Augmenting the atomic multiplet model for 5f2 uranium with mean-field magnetic exchange (AM + MF) aligned to match the TN~203K phase transition (see Methods) results in the temperature dependent XAS trends shown in Fig. 3e. The temperature dependent changes in peak-B and peak-C in each linear dichroic curve match the sign of the trends seen in the experimental data, but occur with roughly twice the amplitude, as can be seen in Fig. 3d, f. No attempt is made to precisely match the T > 200K linear dichroism, as this is influenced by itinerant and Fano physics not considered in the model. The theoretical amplitude could easily be reduced by adding greater broadening on the energy loss axis or by fine tuning of the model (which has been avoided – see Methods). However, it is difficult to compensate for a factor of two, and the discrepancy is likely to represent a fundamental limitation of the non-itinerant mean field atomic multiplet model. Indeed, when the competition between local moment physics and electronic itinerancy is evaluated for USb2 with dynamical mean field theory (DFT + DMFT), we find that the uranium site shows a non-negligible ~25% admixture of 5f1 and 5f3 configurations (Fig. 4a).
Electronic symmetry convergence in USb2. a The partial multiplet state occupancy on uranium in USb2 from DFT + DMFT numerics, with Hund-aligned symmetries highlighted in bold (3H4 and 4I9/2). b Temperature dependence of the partial occupancy of different multiplet states within a 5f2 mean field model. In spite of a magnetic transition above 200K, roughly 1/3rd of the ground state convergence occurs in the range from 30–100K. The labeled CEF symmetries are only fully accurate in the high temperature paramagnetic state. Beneath the Néel temperature, the Γ1 ground state is magnetically polarized by admixture with Γ2. Shading indicates the onset of a magnetic ordered moment. c The ordered magnetic moment of (red circles) USb2 and (black circles) UBi2 from elastic neutron scattering. The mean field multiplet model for USb2 is shown as a solid blue curve, and critical exponent trends near the phase transition are traced with dashed black lines representing m(T) = mmax(1-T/TN)β. The USb2 data are overlaid with a steep critical exponent trend of β = 0.19 indicating strong fluctuations, and the UBi2 data are overlaid with the conventional 3D Ising critical exponent (β = 0.327). d The Néel temperature as a function of doping level in U1-xThxSb2 (red circles), and the simulated ordered moment in Bohr magnetons (renormalized to 62% as described in Methods; red-hot shading). Source data for all curves are provided as a Source Data file
Magnetic ordered moment and the nature of fluctuations
Compared with conventional magnetism, the singlet ground state provides a far richer environment for low temperature physics within the magnetic phase. In a conventional magnetic system, the energy gap between the ground state and next excited state grows monotonically as temperature is decreased beneath the transition, giving an increasingly inert many-body environment. However, in the case of singlet ground state magnetism, the ground state is difficult to magnetically polarize, causing the energy gap between the ground state and easily polarized excited states to shrink as temperature is lowered and the magnetic order parameter becomes stronger. Consequently, within the AM + MF model, many states keep significant partial occupancy down to T < 100K, and the first excited state (derived from the Γ5 doublet) actually grows in partial occupancy beneath the phase transition (see Fig. 4b). Of the low energy CEF symmetries (tracked in Fig. 4b), Γ5 and Γ2 are of particular importance, as Γ5 is a magnetically polarizable Ising doublet, and Γ2 is a singlet state that can partner coherently with the Γ1 ground state to yield a z-axis magnetic moment (see Supplementary Note 5). These non-ground-state crystal field symmetries retain a roughly 1/3rd of the total occupancy at T = 100K, suggesting that a heat capacity peak similar to a Schottky anomaly should appear at low temperature, as has been observed at T < ~50K in experiments (see the supplementary material of ref. 10). Alternatively, when intersite exchange effects are factored in, the shrinking energy gap between the Γ1 and Γ5 CEF states at low temperature will enable Kondo-like resonance physics and coherent exchange effects that are forbidden in conventional magnets.
Critical behavior at the Néel transition should also differ, as the phase transition in a singlet-ground-state magnet is only possible on a background of strong fluctuations. Measuring the ordered moment as a function of temperature with elastic neutron scattering (Fig. 4c) reveals that the UBi2 moment follows a trend that appears consistent with the β = 0.327 critical exponent for a 3D Ising system27. The order parameter in USb2 has a sharper onset that cannot be fitted sufficiently close to the transition point due to disorder, but can be overlaid with an exponent of β~0.19, and may resemble high-fluctuation scenarios such as tricriticality (β = 0.2528,29). This sharp onset cannot be explained from the AM + MF model (blue curve in Fig. 4c), as mean field models that replace fluctuations with a static field give large critical exponents such as β = 0.5 and unphysically high transition temperatures in systems where fluctuations are important. Another approach to evaluate the importance of fluctuations is to lower the Néel temperature by alloying with non-magnetic thorium (Th), as U1-xThxSb2 (see Fig. 4d), thus quenching thermal fluctuations at the phase transition. Performing such a growth series reveals that the magnetic transition can be suppressed to TN~100 K, but is then abruptly lost at x~0.7, consistent with the need for fluctuations across a CEF gap of kBTN~10 meV, which matches expectations from theory for the energy separation between Γ1 and Γ5 (see Table 1 and Methods).
In summary, we have shown that the USb2 and UBi2 O-edge XAS spectra represent different nominal valence symmetries, with USb2 manifesting 5f2 moments that are expected to create a Hund's metal physical scenario, and UBi2 showing strong 5f1 –like symmetry character. The CEF ground state of a paramagnetic USb2 Hund's metal is theoretically predicted to be a robust non-magnetic singlet, creating an exotic setting for magnetism that resembles an exciton condensate, and is previously only known from fragile and low temperature realizations. The temperature dependence of XLD measurements is found to reveal a symmetry evolution consistent with singlet-based magnetism. Neutron diffraction measurements show a relatively sharp local moment onset at the transition, consistent with the importance of fluctuations to nucleate the singlet-based magnetic transition, and suppressing thermal fluctuations in a doping series is found to quench magnetism beneath TN < ~100K.
Taken together, these measurements are consistent with a singlet-based magnetic energy hierarchy that yields an anomalously large number of thermally accessible degrees of freedom at low temperature (T < 100K), and provides a foundation for explaining the otherwise mysterious coherence effects found in previous transport, heat capacity, and ARPES measurements at T < 100K7,9,10,11. The interchangeability of elements on both the uranium (demonstrated as U1-xThxSb2) and pnictogen site suggests UX2 as a model system for exploring the crossover into both Hund's metal and singlet-ground-state magnetic regimes.
The samples of UBi2 and USb2 were top-posted in a nitrogen glove-box and then transferred within minutes to the ultra high vacuum (UHV) environment. The samples were cleaved in UHV and measured in-situ, with initial U O-edge spectra roughly 30 minutes after cleavage. The UV-XAS measurements were performed in the MERIXS (BL4.0.3) in the Advanced Light Source with base pressure better than 4 × 10−10 Torr. The switch between linear horizontal polarization (LH-pol) and linear vertical polarization (LV-pol) is controlled by an elliptically polarizing undulator (EPU) and keep precisely the same beam spot before and after the switch. The incident angle of the photon beam was 30°, which gives a 75% out-of-plane E-vector spectral component under the LH-pol condition and 100% in-plane E-vector under the LV-pol condition. The XAS signal was collected by the total electron yield (TEY) method. The penetration depth of VUV and soft X-ray XAS measured with the TEY method is generally in the 2–4 nm range set by the mean free path of low energy (E < ~10 eV) secondary electrons created in the scattering process30, making it a much more bulk sensitive technique than single-particle techniques such as angle resolved photoemission.
Air-exposed UBi2 can degrade rapidly due to oxidization. No evidence of a large volume fraction of oxide or other phases was found from neutron scattering data for USb2 and UBi2. Possible sample oxidation was surveyed by measuring oxygen L1-edge XAS via TEY for both USb2 and UBi2 during the uranium O-edge XAS experiments. An oxygen L1-edge signal was visible at the cleaved surface of both samples, and found to have similar intensity for both USb2 and UBi2 samples (Supplementary Note 6).
The O-edge XAS curves observed under LH-pol and LV-pol polarization are normalized by assigning constant intensity to the integrated area of the R1 region. Spectral intensity was integrated between featureless start (95 eV) and end points (102 eV) for both UBi2 and USb2. The linear dichroism of the XAS in the main text is defined as:
$$I_{{\mathrm{LD}}} = \left( {I_{{\mathrm{LH}}} - I_{{\mathrm{LV}}}} \right)/I_{{\mathrm{LH}}({\mathrm{max}})}$$
where ILH(max) is the XAS intensity maximum under LH-pol condition within R1 region. The monotonic temperature linear dichroism of USb2 in the main text is a solid result under different data normalization process but linear dichroic rate can be influenced by some factors, for example the irreducible background in ILH(max). In the simulation, tuning the broadening factor is also easy to change simulated linear dichroic rate which make seriously quantitative comparison of the linear dichroism between experiment and the simulation meaningless.
Neutron diffraction measurements were performed on single crystals at the BT-7 thermal triple axis spectrometer at the NIST Center for Neutron Research31 using a 14.7 meV energy and collimation: open - 25′ - sample - 25′ - 120′. For USb2, the magnetic intensity at the (1, 0, 0.5) peak was compared to the nuclear intensity at the (1, 0, 1) peak, while the temperature dependence of the (1 1, 0.5) peak was used to calculate an order parameter. For UBi2, the temperature-dependent magnetic intensity at the (1, 1, 1) peak was compared to nuclear intensity at (1, 1, 1) peak at 200K, above the Néel temperature. In both cases, an f2 magnetic form factor was assumed32.
Atomic multiplet + mean field model (AM + MF)
Atomic multiplet calculations were performed as in ref. 16, describing 5d105 fn → 5d95 fn+1 X-ray absorption in the dipole approximation. Hartree-Fock parameters were obtained from the Cowan code33, and full diagonalization of the multiplet Hamiltonian was performed using LAPACK drivers34. Hartree-Fock parameters for 5f multipole interactions renormalized by a factor of β = 0.7 for UBi2, and a more significant renormalization of β = 0.55 was found to improve correspondence for USb2. This difference matches the expected trend across a transition between 5f2 and 5f1 local multiplet states. Core-valence multipole interactions renormalized by βC = 0.55, consistent with other shallow core hole actinide studies35. The 5f spin orbit is not renormalized in USb2 but renormalized by a factor of 1.15 in UBi2 due to the much larger spin orbit coupling on bismuth. A detailed comparison of simulation results generated from two sets of Hartree-Fock parameters is included in Supplementary Note 7.
Total electron yield is dominated by secondary electrons following Auger decay of the primary scattering site. We have assigned core hole lifetime parameters to describe this decay, and adopted the common approximation that the number of secondary electrons escaping from the material following each core hole decay event is independent of the incident photon energy. For the 5f1 simulation, the core hole inverse lifetime is Γ = 1.4 eV at hυ < 100 eV, Γ = 1.8 eV at 100 eV < hυ < 108.5 eV, and 6.5 eV at hυ > 108.5 eV. For 5f2 and 5f3 simulations, feature widths were obtained from a core hole inverse lifetime set to Γ = 1.3 eV (hυ < 99 eV), Γ = 1.5 eV (99 eV < hυ < 103.5 eV), and 6.5 eV (hυ > 103.5 eV). In the 5f3 simulation, assigning the 103 eV XAS feature to R1 (longer lifetime) as in the Fig. 2 makes it more prominent than if it is assigned to R2 (shorter lifetime). It is also worth noting that scenarios intermediate to 5f2 and 5f3 do not necessarily closely resemble the 5f3 endpoint, and spectral weight in the 103 eV 5f3 XAS peak may depend significantly on local hybridization. However, in real materials, 5f3 character is associated with a downward shift in the R1 resonance onset energy that is opposite to what is observed in our data36.
The mean field model was implemented by considering the USb2 uranium sublattice with Ising exchange coupling between nearest neighbors:
$$H = \mathop {\sum}\nolimits_i {A,_{i} + {\sum} {\langle i,k\rangle J_{i,k}S_{z,i}S_{z,k}} }$$
where A,i is the 5f2 single-atom multiplet Hamiltonian, Ji,k is an exchange coupling parameter with distinct values for in-plane versus out-of-plane nearest neighbors, and Sz,i is the z-moment spin operator acting on site i. Mean field theory allows us to replace one of the spin interaction terms (Sz,k) with a temperature-dependent expectation value, and describe the properties of the system in terms of a thermally weighted single-atom multiplet state ensemble. The specific values of individual Ji,k terms are unimportant in this approximation, however their signs must match the antiferromagnetic structure in Fig. 1, and the sum of the absolute value of near-neighbor terms must equal Jeff = ∑<k> |Jn,k| = 43 meV to yield a magnetic transition at TN = 203 K. When considering the doped case of U1-xThxSb2, the expectation value < Sz,k > is effectively reduced by weighting in the appropriate density of 0-moment 5f0 Th sites.
The CEF energy hierarchy has not been fine tuned. Perturbation strengths are scaled to set the lowest energy excitation to 10 meV, a round number that roughly matches the lowest kBTN value at which a magnetic transition is observed in U1-xThxSb2. This assignment gives a total energy scale for crystal field physics that is approximately comparable to room temperature (ΔCEF~kBTN), as expected for this class of materials, and the associated orbital energies were found to correspond reasonably (within <~30%) with coarse estimates from density functional theory. The crystal field parameters are listed in the first column of Table 1.
The low temperature ordered moment of M = 1.90 μB seen by neutron scattering is matched by downward-renormalizing the moment calculated in the mean field model to 62% (see Fig. 4d shading). Within density functional theory (DFT) models, the consideration of itinerant electronic states provides a mechanism to explain most of this discrepancy. In DFT simulations, the spin component of the magnetic moment is enhanced to MS~2 μB14,15, larger than the maximal value of MS~1.4 μB that we find in the 5 f2 (J = 4) atomic multiplet picture. This larger DFT spin moment is directly opposed to the orbital magnetic moment, resulting in a smaller overall ordered moment. The ordered moment in the multiplet simulation could alternatively be reduced by strengthening the crystal field, but this is challenging to physically motivate, and has the opposite effect of reducing the spin moment to MS < 1 μB.
Density functional theory + dynamical mean field theory (DFT + DMFT)
The combination of density functional theory (DFT) and dynamical mean-field theory (DMFT)37, as implemented in the full-potential linearized augmented plane-wave method38,39, was used to describe the competition between the localized and itinerant nature of 5f-electron systems. The correlated uranium 5f electrons were treated dynamically by the DMFT local self-energy, while all other delocalized spd electrons were treated on the DFT level. The vertex corrected one-crossing approximation38 was adopted as the impurity solver, in which full atomic interaction matrix was taken into account. The Coulomb interaction U = 4.0 eV and the Hund's coupling J = 0.57 eV were used for the DFT + DMFT calculations.
Though the source code used for these multiplet calculations is not publicly available, there are excellent options with equivalent capabilities such as CTM4XAS (http://www.anorg.chem.uu.nl/CTM4XAS/) and Quanty (http://www.quanty.org).
All relevant data of this study are available from the corresponding author upon reasonable request.
Palstra, T. T. M. et al. Superconducting and magnetic transitions in the heavy-fermion system URu2Si2. Phys. Rev. Lett. 55, 2727–2730 (1985).
Maple, M. B. et al. Partially gapped Fermi surface in the heavy-electron superconductor URu2Si2. Phys. Rev. Lett. 56, 185–188 (1986).
Schlabitz, W. J. et al. Superconductivity and magnetic order in a strongly interacting fermi-system: URu2Si2. Z. Phys. B 62, 171–177 (1986).
Fisher, R. A. et al. Specific heat of URu2Si2: effect of pressure and magnetic field on the magnetic and superconducting transitions. Phys. B 163, 419–423 (1990).
Pfleiderer, C. Superconducting phases of f-electron compounds. Rev. Mod. Phys. 81, 1551–1624 (2009).
Moore, K. T. & van der Laan, G. Nature of the 5f states in actinide metals. Rev. Mod. Phys. 81, 235–298 (2009).
Aoki, D. et al. Cylindrical Fermi surfaces formed by a fiat magnetic Brillouin zone in uranium dipnictides. Philos. Mag. B 80, 1517–1544 (2004).
Leciejewicz, J., Troć, R., Murasik, A. & Zygmunt, A. Neutron‐diffraction study of antiferromagnetism in USb2 and UBi2. Phys. Status Solidi B 22, 517–526 (1967).
Wawryk, R. Magnetic and transport properties of UBi2 and USb2 single crystals. Philos. Mag. 86, 1775–1787 (2006).
Qi, J. et al. Measurement of two low-temperature energy gaps in the electronic structure of antiferromagnetic USb2 using ultrafast optical spectroscopy. Phys. Rev. Lett. 111, 057402 (2013).
Xie, D. H. et al. Direct measurement of the localized-itinerant transition, hybridization and antiferromagnetic transition of 5f electrons. Preprint at https://arxiv.org/abs/1611.08059 (2016).
Stillwell, R. L. et al. Tricritical point of the f -electron antiferromagnet USb2 driven by high magnetic fields. Phys. Rev. B 95, 014414 (2017).
Jeffries, J. R. et al. Emergent ferromagnetism and T -linear scattering in USb2 at high pressure. Phys. Rev. B 93, 184406 (2016).
Lebègue, S., Oppeneer, P. M. & Eriksson, O. Ab initio study of the electronic properties and Fermi surface of the uranium dipnictides. Phys. Rev. B 73, 045119 (2006).
Ghasemikhah, E., Jalali Asadabadi, S., Ahmad, I. & Yazdani-Kacoeia, M. Ab initio studies of electric field gradients and magnetic properties of uranium dipnicties. RSC Adv. 5, 37592 (2015).
Wray, L. A. et al. Spectroscopic determination of the atomic f-electron symmetry underlying hidden order in URu2Si2. Phys. Rev. Lett. 114, 236401 (2015).
Butorin, S. M. Resonant inelastic X-ray scattering as a probe of optical scale excitations in strongly electron-correlated systems: quasi-localized view. J. Electron Spectrosc. Relat. Phenom. 110-111, 213–223 (2000).
Sundermann, M. et al. Direct bulk-sensitive probe of 5f symmetry in URu2Si2. Proc. Natl Acad. Sci. USA 113, 13989–13994 (2016).
Kvashnina, K. O. & de Groot, F. M. F. Invisible structures in the X-ray absorption spectra of actinides. J. Electron Spectrosc. Relat. Phenom. 194, 88–93 (2014).
Augustin, E. et al. Charge transfer excitations in VUV and soft x-ray resonant scattering spectroscopies. J. Electron Spectrosc. Relat. Phenom. 220, 121–124 (2017).
Haule, K. & Kotliar, G. Coherence–incoherence crossover in the normal state of iron oxypnictides and importance of Hund's rule coupling. New J. Phys. 11, 025021 (2009).
Georges, A., de' Medici, L. & Mravlje, J. Strong correlations from Hund's coupling. Annu. Rev. Condens. Matter Phys. 4, 137–178 (2013).
Wang, Y.-L. & Cooper, B. R. Collective excitations and magnetic ordering in materials with singlet crystal-field ground state. Phys. Rev. 172, 539 (1968).
Cooper, B. & Vogt, O. Singlet ground state magnetism. J. De. Phys. Colloq. 32, C1–958 (1971).
Lindgard, P.-A. & Schmid, B. Theory of singlet-ground-state magnetism: Application to field-induced transitions, in CsFeCl3 and CsFeBr3. Phys. Rev. B 48, 13636–13646 (1993).
Haule, K. & Kotliar, G. Arrested Kondo effect and hidden order in URu2Si2. Nat. Phys. 5, 796–799 (2009).
Campostrini, M., Pelissetto, M., Rossi, P. & Vicari, E. 25th-order high-temperature expansion results for three-dimensional Ising-like systems on the simple-cubic lattice. Phys. Rev. E 65, 066127 (2002).
Landau, L. D. On the theory of specific heat anomalies. Phys. Z. Sowjetunion 8, 113 (1935).
Huang, K. Statistical Mechanics. 2nd ed (Wiley, New York, 1987).
Stöhr, J. . NEXAFS Spectroscopy. 1st ed, (Springer, Berlin, 1992). Corr. 2nd ed. 2003.
Lynn, J. W. et al. Double-focusing thermal triple-axis spectrometer at the NCNR. J. Res. Natl Inst. Stand. Technol. 117, 61–79 (2012).
Freeman, A. J., Desclaux, J. P., Lander, G. H. & Faber, J. Jr. Neutron magnetic form factors of uranium ions. Phys. Rev. B 13, 1168–1176 (1976).
Robert D. Cowan's Atomic Structure Code https://www.tcd.ie/Physics/people/Cormac.McGuinness/Cowan/ (2009).
Anderson, E. et al. LAPACK User's Guide. 3rd ed., (SIAM, Philadelphia, 1999).
Gupta, S. S. et al. Coexistence of bound and virtual-bound states in shallow-core to valence x-ray spectroscopies. Phys. Rev. B 84, 075134 (2011).
Kotani, A. & Ogasawara, H. Theory of core-level spectroscopy in actinide systems. Phys. B. 186–188, 16–20 (1993).
Kotliar, G. et al. Electronic structure calculations with dynamical mean-field theory. Rev. Mod. Phys. 78, 865–951 (2006).
Haule, K., Yee, C.-H. & Kim, K. Dynamical mean-field theory within the full-potential methods: electronic structure of CeIrIn5, CeCoIn5, and CeRhIn5. Phys. Rev. B 81, 195107 (2010).
Blaha, P., Schwarz, K., Madsen, G., Kvasnicka, D. & Luitz, J. WIEN2k: An Augmented Plane Wave + LO Program for Calculating Crystal Properties. (TU Wien, Vienna, 2001).
We are grateful for discussions with S. Roy and L. Klein. This research used resources of the Advanced Light Source, which is a DOE Office of Science User Facility under contract no. DE-AC02-05CH11231. Work at NYU was supported by the MRSEC Program of the National Science Foundation under Award Number DMR-1420073. P.A. acknowledges funding from the U.S. National Science Foundation CAREER under award No. NSF-DMR 1654482. The identification of any commercial product or trade name does not imply endorsement or recommendation by the National Institute of Standards and Technology. G.K. and C.-J.K. are supported by DOE BES under grant no. DE-FG02-99ER45761. G.K. carried out this work during his sabbatical leave at the NYU Center for Quantum Phenomena, and gratefully acknowledges NYU and the Simons foundation for sabbatical support. Y.W. was supported by the US Department of energy, Office of Science, Basic Energy Sciences as a part of the Computational Materials Science Program through the Center for Computational Design of Functional Strongly Correlated Materials and Theoretical Spectroscopy.
Department of Physics, New York University, New York, NY, 10003, USA
Lin Miao, Rourav Basak, Yishuai Xu, Erica Kotta, Haowei He & L. Andrew Wray
Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
Lin Miao, Jonathan D. Denlinger & Yi-De Chuang
NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA
Sheng Ran, Y. Zhao, Z. Xu, J. W. Lynn, S. R. Saha & Nicholas P. Butch
Department of Materials Science and Engineering, University of Maryland, College Park, MD, 20742, USA
Y. Zhao
Materials Science Division, Lawrence Livermore National Laboratory, Livermore, CA, 94550, USA
J. R. Jeffries
Center for Nanophysics and Advanced Materials, Department of Physics, University of Maryland, College Park, MD, 20742, USA
S. R. Saha & Nicholas P. Butch
Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, NY, 13902, USA
Ioannis Giannakis & Pegor Aynajian
Department of Physics and Astronomy, Rutgers University, Piscataway, NJ, 08854-8019, USA
Chang-Jong Kang & Gabriel Kotliar
Department of Condensed Matter Physics and Materials Science, Brookhaven National Laboratory, Upton, NY, 11973, USA
Yilin Wang
Lin Miao
Rourav Basak
Sheng Ran
Yishuai Xu
Erica Kotta
Haowei He
Jonathan D. Denlinger
Yi-De Chuang
Z. Xu
J. W. Lynn
S. R. Saha
Ioannis Giannakis
Pegor Aynajian
Chang-Jong Kang
Gabriel Kotliar
Nicholas P. Butch
L. Andrew Wray
L.M., R.B., Y.X., E.K., and H.H. carried out the XAS experiments with support from J.D.D., Y.-D.C., and J.R.J.; neutron measurements were performed by S.R. and N.P.B. with support from Y.Z., Z.X., and J.W.L.; STM measurements were performed by I.G., with guidance from P.A.; high quality samples were synthesized by S.R. and S.R.S. with guidance from N.P.B.; multiplet simulations were performed by L.M. with guidance from L.A.W., and DFT + DMFT simulations were performed by C.-J.K. with assistance from Y.W. and guidance from G.K.; L.M., R.B., Y.X., P.A., N.P.B., and L.A.W participated in the analysis, figure planning, and draft preparation; L.A.W. was responsible for the conception and the overall direction, planning, and integration among different research units.
Correspondence to L. Andrew Wray.
Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Miao, L., Basak, R., Ran, S. et al. High temperature singlet-based magnetism from Hund's rule correlations. Nat Commun 10, 644 (2019). https://doi.org/10.1038/s41467-019-08497-3
Editors' Highlights
Nature Communications (Nat Commun) ISSN 2041-1723 (online) | CommonCrawl |
Do Monoid Homomorphisms preserve the identity?
In both my textbook (Hungerford's Algebra), and in class, it is claimed that Monoid Homomorphisms are not required to preserve the identity. Interestingly enough, the Wikipedia page for Monoids requires Monoid Homomorphisms to preserve the identity element: https://en.wikipedia.org/wiki/Monoid#Monoid_homomorphisms. I haven't found an example of the former, so I thought I'd prove the opposite statement.
I believe that I have proved the opposite assertion, based on the proof that I used to show that Group Homomorphisms preserve the identity. Since I don't use any information stating that elements are invertible, I think my proof is still valid.
Let $M, N$ be monoids, and let $f:M\rightarrow N $ be a homomorphism of monoids. Let $m,e_{M} \in M$ be an arbitrary element and the identity in $M$ respectively .
Then: $$f(m) = f(m\cdot e_{M}) = f(m)\cdot f(e_{M})$$ $$f(m) = f(e_{M} \cdot m) = f(e_{M}) \cdot f(m)$$ Thus: $$f(m)\cdot f(e_{M}) = f(e_{M}) \cdot f(m) = f(m), \forall m \in M $$ This seems to imply my assertion. Is there anything wrong with my proof?
group-theory proof-verification monoid
Thomas DavisThomas Davis
$\begingroup$ Morphisms should always preserve structures. Having a morphing that does not preserve of part of the structure, the identity is highly dubious. It would make $\mathbf{Mon}$ a very weird category for instance. $\endgroup$
– Lærne
$\begingroup$ You've misunderstood Hungerford -- he only defines "homomorphism" for semigroups. The quote you refer to is basically the statement "not every semigroup homomorphism between monoids is actually a monoid homomorphism", except Hungerford never seems to define what a monoid homomorphism is (it must preserve the identity) nor what a group homomorphism is (it must preserve the identity and inverses). $\endgroup$
$\begingroup$ ... a strange feature of the theory of groups is that every semigroup homomorphism between groups turns out to also be a group homomorphism. Because of this, it is (distressingly, IMO) common for introductory texts to define "group homomorphism" to be "semigroup homomorphism between groups". $\endgroup$
$\begingroup$ @Hurkyl I think you comment should be posted as answer. Maybe it does not exactly answer as it is posed int the title, but this title is based on a wrong interpretation of the text in Hungerford's Algebra. Comments are volatile objects. We could not count on that they will not be removed. $\endgroup$
– miracle173
As 57Jimmy points out in their comment, you have not proved that the "identity" you have found is the identity of the whole monoid.
Let make this all formal:
If $f:A\rightarrow B$ is a semigroup homomorphism and $A$ and $B$ are monoids then it is not necessarily true that $f(e_A)=e_B$.
As a counter-example, take your favourite monoid $A$ and then attach an identity to obtain a new monoid, $B$. Then the embedding map $A\hookrightarrow B$ is a semigroup homomorphism, but the image of the identity isn't the identity of $B$. For example, take $A=\{e\}$ such that $e^2=e$ and attach an identity $1$ to obtain a new monoid $B$, so $B=\{e, 1\}$ where $1\cdot e=e=e\cdot 1$ and $1^2=1$. Then clearly the monoid $A$ embeds into $B$, but $e$ is not the identity of $B$ (it is in fact the zero, as $1\cdot e=e$, etc.)
$\begingroup$ What do you mean by "attach an identity"? $\endgroup$
– Thomas Davis
$\begingroup$ @user I've edited my answer to give an example. This is a pretty standard idea in semigroup theory - just attach an element $1$ such that $1\cdot a=a=a\cdot 1$ for all $a\in A$. (So, for example, every semigroup embeds into a monoid in this way.) $\endgroup$
The problem is that you have only proved that $f(e_M)$ is an identity for the elements in the image of $f$, not for all the elements in $N$. This is also specified on Wikipedia. So in general, if you do not require it, it is not true that the identity is preserved. Here is a counterexample:
$$(\mathbb{R},*,1) \to (\mathbb{R},*,1), r \mapsto 0.$$
I personally find it strange to require that there is an identity but not that it is preserved by morphisms...
57Jimmy57Jimmy
Given an algebraic structure $(A, f_1, f_2, \ldots)$ in the sense of universal algebra morphism are always required to respect all operations (this also includes the constants, as these are modelled as $0$-ary operations). Similar, substructures are closed under all operations, hence they include the same constants as the original algebraic structure.
A monoid has signature $(M, \cdot, 1)$, and respecting $1$ means that the identity should be preserved, similar a group homomorphism is required to preseve inversion. A submonoid is a subset that contains $1$ and so on.
As pointed out by others, we could not get rid of the requirement that the identity should be preserved, i.e. monoids does not form a full subcategory of semigroups. But as you have shown, the image of the identity forms an identity in the image of the morphism.
But sometimes we have some relations between these concepts, particular in the case of groups. Groups form a full subcategory of semigroups, for a proof see here. Similar, a finite subsets of a group is a subsemigroup iff it is a subgroup, this follows by looking at the powers of elements from the subset. So sometimes authors may define group homomorphisms in this weaker sense, or even subgroups just as subsemigroups if they are just concerned with finite groups. Something similar concerns kernels. In general these are relations defined by looking at that elements that are mapped to the same element under a homomorphims, giving rise to a congruence relation and congruence classes, again in the group case such a kernel is fully specified by just giving a single congruence class which form a normal subgroup, and almost every group theorist uses this more restrictive definition of kernel then. Also in a monoid there could be many subsemigroups that form themselve monoids or even subgroups (again look at the powers of some element), but these are not submonoids if they do not contain the identity of the whole monoid. So, sometimes we do not use all properties of the correct definitions, but in your case of monoids we do need all of them.
StefanHStefanH
Your proof just proves that for every element in $\text{Im}(f)$ the element $f(e)$ acts as a unit but it does not imply that this holds for every element of $N$, unless $f$ is surjective.
Hence it is necessary to require that a monoid homomorphism preserves unit.
Allow me to provide a ciunter-example that explain why we need to require in the definition of monoid-homomorphism the unit preservation properties, i.e. why it cannot be deduced.
Consider the the monoid $\mathbb N$ of the natural numbers with addition and let $B$ be the monoid of truth-values, i.e. $B=\{\bot=false,\top=true\}$ with the monoid structure given by the the logical or $\lor$ and having unit the truth value $\bot$.
The mapping $$\begin{align*}f \colon \mathbb N &\longrightarrow B\\ f(n)&=\top \\ \end{align*}$$ is a semigroup homomorphism, i.e. $$f(n+m)=\top=\top \lor \top =f(n)\lor f(m)$$ but it is not a monoid homomorphism, because it does not preserve the unit, $f(0) \ne \bot$.
Since monoids are semigroups it is interesting studying semigroup-homomorphism between them, nevertheless they should notnregarded as the correct notion of morphism for monoid. A good notion of morphism should oreserve all the structure and as the examle above shows semigroup-homomorphisms may fail in doing so.
Giorgio MossaGiorgio Mossa
Not the answer you're looking for? Browse other questions tagged group-theory proof-verification monoid or ask your own question.
The intersection of all ideals of a monoid
Are these adjoint functors to/from the category of monoids with semigroup homomorphisms?
"The regular languages over $A$ are the homomorphic pre-images in $A^*$ of subsets of finite monoids."
Why does a group homomorphism preserve more structure than a monoid homomorphism while satisfying fewer equations
Group freely generated by monoid
Embedding a semigroup into a monoid | CommonCrawl |
Non-autonomous 2D Newton-Boussinesq equation with oscillating external forces and its uniform attractor
EECT Home
Some results on the behaviour of transfer functions at the right half plane
doi: 10.3934/eect.2020092
Uniform stability in a vectorial full Von Kármán thermoelastic system with solenoidal dissipation and free boundary conditions
Catherine Lebiedzik
Wayne State University, Department of Mathematics, Detroit, MI 48201 USA
Received February 2020 Revised August 2020 Published September 2020
We will consider the full von Kármán thermoelastic system with free boundary conditions and dissipation imposed only on the in-plane displacement. It will be shown that the corresponding solutions are exponentially stable, though there is no mechanical dissipation on the vertical displacements. The main tools used are: (i) partial analyticity of the linearized semigroup and (ii) trace estimates which exploit the hidden regularity harvested from partial analyticity.
Keywords: Uniform stability, nonlinear thermoelasticity, Von Kármán plates, free boundary conditions.
Mathematics Subject Classification: Primary:35B35, 35M30, 74B20, 74F05, 74K20;Secondary:35A01, 35A02.
Citation: Catherine Lebiedzik. Uniform stability in a vectorial full Von Kármán thermoelastic system with solenoidal dissipation and free boundary conditions. Evolution Equations & Control Theory, doi: 10.3934/eect.2020092
G. Avalos and I. Lasiecka, Exponential stability of a thermoelastic system without mechanical dissipation, Rend. Istit. Mat. Univ. Trieste, 28 (1997), 1-28. Google Scholar
M. Belishev and I. Lasiecka, The dynamical Lamé system: Regularity of solutions, boundary controllability and boundary data continuation, ESAIM, Control Optim. Calc. Var., 8 (2002), 143-167. doi: 10.1051/cocv:2002058. Google Scholar
A. Benabdallah, Modeling of Von Kármán system with thermal effects, Prepublications de L'equipe de Mathematiques de Besancon, 99 (1999). Google Scholar
A. Benabdallah and I. Lasiecka, Exponential decay rates for a full Von Kármán system of dynamic thermoelasticity, J. Differential Equations, 160 (2000), 51-93. doi: 10.1006/jdeq.1999.3656. Google Scholar
A. Benabdallah and I. Lasiecka, Exponential decay rates for a full Von Kármán thermoelastic system with nonlinear thermal coupling, ESIAM: Proceedings. Contrôle des systèmes gouvernés par des équations aux dérivées partielles, 8 (2000), 13-38. doi: 10.1051/proc:2000002. Google Scholar
A. Benabdallah and D. Teniou, Exponential stability of a Von Kármán model with thermal effects, Electronic Journal of Differential Equations, (1998), 1–13. Google Scholar
A. Bensoussan, G. Da Prato, M. Delfour and S. Mitter and K. Sanjoy, Representation and Control of Infinite Dimensional Systems, 2$^nd$ edition, Birkhäuser Boston, Inc., Boston, MA, 2007. doi: 10.1007/978-0-8176-4581-6. Google Scholar
P. G. Ciarlet, Mathematical Elasticity. Vol. Ⅲ. Theory of Shells, in Studies in Mathematics and its Applications, Vol. 29, North-Holland Publishing Co., Amsterdam, 2000. Google Scholar
P. G. Ciarlet and P.Rabier, Les Equations de Von Karman, Springer, Berlin, 1980. Google Scholar
C. M. Dafermos, On the existence and asymptotic stability of solutions to the equations of linear thermoelasticity, Archive for Rational Mechanics and Analysis, 29 (1968), 241-271. doi: 10.1007/BF00276727. Google Scholar
E. H. Dowell, A Modern Course in Aeroelasticity, 5th edition, Springer, Cham, 2015. doi: 10.1007/978-3-319-09453-3. Google Scholar
A. Favini, M. Horn, I. Lasiecka and D. Tataru, Global existence, uniqueness and regularity of solutions to a von Kármán system with nonlinear boundary dissipation, Differential Integral Equation, 9 (1996), 267-294. Google Scholar
I. Chueshov and I. Lasiecka, Von Kármán Evolution Equations, Springer, New York, 2010. doi: 10.1007/978-0-387-87712-9. Google Scholar
J. U. Kim, On the energy decay of a linear thermoelastic bar and plate, SIAM J. Math. Anal., 23 (1992), 889-899. doi: 10.1137/0523047. Google Scholar
H. Koch, Slow decay in linear thermoelasticity, Quart. Appl. Math, 58 (2000), 601-612. doi: 10.1090/qam/1788420. Google Scholar
H. Koch and I. Lasiecka, Hadamard well-posedness of weak solutions in nonlinear dynamic elasticity – full von Kármán systems, in Progress in Nonlinear Differential Equations and their Appl., Vol. 50, Birkhäuser, Basel, 2002. Google Scholar
W. T. Koiter, On the nonlinear theory of thin elastic shells, Nederl. Akad. Wetensch. Proc. Ser. B, 69 (1966), 1-54. Google Scholar
J. Lagnese, Boundary Stabilization of Thin Plates, SIAM, Philadelphia, PA, 1989. doi: 10.1137/1.9781611970821. Google Scholar
J. Lagnese and G. Leugering, Uniform stabilization of a nonlinear beam by nonlinear boundary feedback, J. Differential Equations, 91 (1991), 355-388. doi: 10.1016/0022-0396(91)90145-Y. Google Scholar
J. Lagnese and J.-L. Lions, Modelling analysis and control of thin plates, in Research in Applied Mathematics, Vol. 6, Masson, Paris, 1988. Google Scholar
I. Lasiecka, Uniform stabilizability of a full von Kármán system with nonlinear boundary feedback, SIAM J. Control Optim., 36 (1998), 1376-1422. doi: 10.1137/S0363012996301907. Google Scholar
I. Lasiecka, Uniform decay rates for full von Kármán system of dynamic thermoelasticity with free boundary conditions and partial boundary dissipation, Comm. Partial Differential Equations, 24 (1999), 1801-1847. doi: 10.1080/03605309908821483. Google Scholar
I. Lasiecka, Mathematical Control Theory of Coupled PDEs, SIAM, Philadelphia, PA, 2002. doi: 10.1137/1.9780898717099. Google Scholar
I. Lasiecka, J. L. Lions and R. Triggiani, Nonhomogeneous boundary value problems for second-order hyperbolic operators, J. Math. Pures Appl., 65 (1986), 149-192. Google Scholar
I. Lasiecka, T. Ma and R. Montiero, Global smooth attractors for dynamics of thermal shallow shells without vertical dissipation, Trans. Amer. Math. Soc., 371 (2019), 8051-8096. doi: 10.1090/tran/7756. Google Scholar
I. Lasiecka, T. Ma and R. Montiero, Long-time dynamics of vectorial von Kármán system with nonlinear thermal effects and free boundary condition, Discrete Contin. Dyn. Syst. Ser. B, 23 (2018), 1037-1072. doi: 10.3934/dcdsb.2018141. Google Scholar
I. Lasiecka and R. Triggiani, Sharp regularity theory for second order hyperbolic equations of Neumann type. Part Ⅰ. $L_2$ nonhomogenous data, Ann. Mat. Pura Appl., 157 (1990), 285-367. doi: 10.1007/BF01765322. Google Scholar
I. Lasiecka and R. Triggiani, Analyticity of thermoelastic semigroups with free boundary conditions, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 27 (1999), 457-487. Google Scholar
[29] I. Lasiecka and R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories, in Encyclopedia of Mathematics and its Applications, Volume Ⅰ and Ⅱ, Cambridge University Press, Cambridge, 2000. Google Scholar
J. L. Lions, Quelques Methodes de Résolution des Problèmes aux Limits Nonlinéaires, Dunod, Gauthier-Villars, Paris, 1969. Google Scholar
Z. Liu and M. Renardy, A note on the equations of a thermoelastic plate, Appl. Math. Lett, 8 (1995), 1-6. doi: 10.1016/0893-9659(95)00020-Q. Google Scholar
S. Miyatake, Mixed problem for hyperbolic equation of second order, J. Math. Kyoto Univ., 13 (1973), 435-487. doi: 10.1215/kjm/1250523319. Google Scholar
V. I. Sedenko, On uniqueness of the generalized solutions of initial boundary value problem for Marguerre-Vlasov nonlinear oscillations of the shallow shells, Russian Izvestiya, North-Caucasus Region, Ser. Natural Sciences, 1-2 (1994). Google Scholar
D. Tataru, On the regularity of boundary traces for the wave equation, Annali di Scuola Normale Superiore, 26 (1998), 185-206. Google Scholar
T. von Kármán, Festigkeitprobleme in Maschinenbau, Encyklopedie der Mathematischen Wissenschaften, 4 (1910), 314-385. Google Scholar
Ali Wehbe, Rayan Nasser, Nahla Noun. Stability of N-D transmission problem in viscoelasticity with localized Kelvin-Voigt damping under different types of geometric conditions. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020050
Duy Phan. Approximate controllability for Navier–Stokes equations in $ \rm3D $ cylinders under Lions boundary conditions by an explicit saturating set. Evolution Equations & Control Theory, 2021, 10 (1) : 199-227. doi: 10.3934/eect.2020062
Mathew Gluck. Classification of solutions to a system of $ n^{\rm th} $ order equations on $ \mathbb R^n $. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5413-5436. doi: 10.3934/cpaa.2020246
Yubiao Liu, Chunguo Zhang, Tehuan Chen. Stabilization of 2-d Mindlin-Timoshenko plates with localized acoustic boundary feedback. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2021006
Maho Endo, Yuki Kaneko, Yoshio Yamada. Free boundary problem for a reaction-diffusion equation with positive bistable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3375-3394. doi: 10.3934/dcds.2020033
Chueh-Hsin Chang, Chiun-Chuan Chen, Chih-Chiang Huang. Traveling wave solutions of a free boundary problem with latent heat effect. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021028
Izumi Takagi, Conghui Zhang. Existence and stability of patterns in a reaction-diffusion-ODE system with hysteresis in non-uniform media. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020400
Eduard Feireisl, Elisabetta Rocca, Giulio Schimperna, Arghir Zarnescu. Weak sequential stability for a nonlinear model of nematic electrolytes. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 219-241. doi: 10.3934/dcdss.2020366
Roland Schnaubelt, Martin Spitz. Local wellposedness of quasilinear Maxwell equations with absorbing boundary conditions. Evolution Equations & Control Theory, 2021, 10 (1) : 155-198. doi: 10.3934/eect.2020061
Kuntal Bhandari, Franck Boyer. Boundary null-controllability of coupled parabolic systems with Robin conditions. Evolution Equations & Control Theory, 2021, 10 (1) : 61-102. doi: 10.3934/eect.2020052
Qianqian Hou, Tai-Chia Lin, Zhi-An Wang. On a singularly perturbed semi-linear problem with Robin boundary conditions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 401-414. doi: 10.3934/dcdsb.2020083
Wenrui Hao, King-Yeung Lam, Yuan Lou. Ecological and evolutionary dynamics in advective environments: Critical domain size and boundary conditions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 367-400. doi: 10.3934/dcdsb.2020283
Franck Davhys Reval Langa, Morgan Pierre. A doubly splitting scheme for the Caginalp system with singular potentials and dynamic boundary conditions. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 653-676. doi: 10.3934/dcdss.2020353
Jingjing Wang, Zaiyun Peng, Zhi Lin, Daqiong Zhou. On the stability of solutions for the generalized vector quasi-equilibrium problems via free-disposal set. Journal of Industrial & Management Optimization, 2021, 17 (2) : 869-887. doi: 10.3934/jimo.2020002
Ludovick Gagnon, José M. Urquiza. Uniform boundary observability with Legendre-Galerkin formulations of the 1-D wave equation. Evolution Equations & Control Theory, 2021, 10 (1) : 129-153. doi: 10.3934/eect.2020054
Huijuan Song, Bei Hu, Zejia Wang. Stationary solutions of a free boundary problem modeling the growth of vascular tumors with a necrotic core. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 667-691. doi: 10.3934/dcdsb.2020084 | CommonCrawl |
\begin{document}
\title{Two explicit Skorokhod embeddings for simple symmetric random walk\thanks{Xue Dong He thanks research funds from Columbia University and The Chinese University of Hong Kong. Jan Ob{\l}{\'o}j thanks CUHK for hosting him as a visitor in March 2013 and gratefully acknowledges support from the ERC (grant no.\ 335421) under the EU's $7^{\textrm{th}}$ FP, and from St John's College in Oxford. The research of Xun Yu Zhou was supported by grants from Columbia University, the Oxford--Man Institute of Quantitative Finance and Oxford--Nie Financial Big Data Lab. The authors are grateful to anonymous reviewers for their comments and in particular express thanks to one reviewer who has suggested the current method of proof of Theorem \ref{thm:reformulate} to replace our original one and made comments which led to our Remark \ref{rk:nonuniqueAY}.} }
\author{Xue Dong He\thanks{Corresponding Author. Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong. Email: \texttt{[email protected]}.} \and Sang Hu\thanks{School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China 518172. Email: \texttt{[email protected]}.} \and Jan Ob{\l}{\'o}j\thanks{Mathematical Institute and St John's College, University of Oxford, Oxford, UK. Email: \texttt{[email protected]}.} \and Xun Yu Zhou\thanks{Department of Industrial Engineering and Operations Research, Columbia University, New York, New York 10027. Email: \texttt{[email protected]}.}}
\maketitle
\begin{abstract}
Motivated by problems in behavioural finance, we provide two explicit constructions of a randomized stopping time which embeds a given centered distribution $\mu$ on integers into a simple symmetric random walk in a uniformly integrable manner. Our first construction has a simple Markovian structure: at each step, we stop if an independent coin with a state-dependent bias returns tails. Our second construction is a discrete analogue of the celebrated Az\'ema--Yor solution and requires independent coin tosses only when excursions away from maximum breach predefined levels. Further, this construction maximizes the distribution of the stopped running maximum among all uniformly integrable embeddings of $\mu$.
{\bf Keywords:} Skorokhod embedding; simple symmetric random walk; randomized stopping time; Az\'ema-Yor stopping time.
\end{abstract}
\section{Introduction}\label{se:Introduction}
We contribute to the literature on Skorokhod embedding problem (SEP). The SEP, in general, refers to the problem of finding a stopping time $\tau$, such that a given stochastic process $X$, when stopped, has the prescribed distribution $\mu$: $X_\tau\sim \mu$. When such a $\tau$ exists we say that $\tau$ embeds $\mu$ into $X$. This problem was first formulated and solved by \citet{Skorokhod:65} when $X$ is a standard Brownian motion. It has remained an active field of study ever since, see \cite{Obloj2004:SkorokhodEmbedding} for a survey, and has recently seen a revived interest thanks to an intimate connection with the Martingale Optimal Transport, see \cite{BeiglbockCoxHuesmann} and the references therein.
In this paper, we consider the SEP for the simple symmetric random walk. Our interest arose from a casino gambling model of \citet{Barberis2012:Casino} in which the gambler's cumulative gain and loss process is modeled by a random walk. The gambler has to decide when to stop gambling and her preferences are given by cumulative prospect theory \citep{TverskyKahneman1992:CPT}. Such preferences lead to dynamic inconsistency, so this optimal stopping problem cannot be solved by the classical Snell envelop and dynamic programming approaches. By applying the Skorokhod embedding result we obtain here, \citet{HeEtal2014:StoppingStrategies} convert the optimal stopping problem into an infinite-dimensional optimization problem, find the (pre-committed) optimal stopping time, and study the gambler's behavior in the casino.
To discuss our results, let us first introduce some notation. We let $X=S=(S_t:t\geq 0)$ be a simple symmetric random walk defined on a filtered probability space $(\Omega,{\cal F},\mathbb{F},\mathbb{P})$, where $\mathbb{F}=({\cal F}_t)_{t\geq 0}$. We work in discrete time so here, and elsewhere, $\{t\geq 0\}$ denotes $t\in \{0,1,2,\ldots\}$. We let ${\cal T}(\mathbb{F})$ be the set of $\mathbb{F}$--stopping times and say that $\tau\in {\cal F}(\mathbb{F})$ is uniformly integrable (UI) if the stopped process $(S_{t\wedge \tau}:t\geq 0)$ is UI. Here, and more generally when considering martingale, one typically restricts the attention to UI stopping times to avoid trivialities and to obtain solutions which are of interest and use. We let $\mathbb{Z}$ denote the set of integers and ${\cal M}_0(\mathbb{Z})$ the set of probability measures on $\mathbb{Z}$ which admit finite first moment and are centered. Our prime interest is in stopping times which solve the SEP: \begin{equation} \label{eq:SEPdef} \mathrm{SEP}(\mathbb{F},\mu):= \left\{\tau\in {\cal T}(\mathbb{F}): S_\tau\sim \mu \textrm{ and }\tau \textrm{ is UI}\right\}. \end{equation} Clearly if $\mathrm{SEP}(\mathbb{F},\mu)\neq \emptyset$ then $\mu\in {\cal M}_0(\mathbb{Z})$. For embeddings in a Brownian motion, the analogue of \eqref{eq:SEPdef} has a solution if and only if $\mu\in {\cal M}_0(\mathbb{R})$. However, in the present setup, the reverse implication depends on the filtration $\mathbb{F}$. If we consider the natural filtration $\mathbb{F}^S=({\cal F}^S_t:t\geq 0)$ where ${\cal F}^S_t=\sigma(S_u:u\leq t)$, then \citet{CoxObloj2008:ClassesofMeasures} showed that the set of probability measures $\mu$ on $\mathbb{Z}$ for which $\mathrm{SEP}(\mathbb{F}^S,\mu)\neq \emptyset$ is a fractal subset of ${\cal M}_0(\mathbb{Z})$. In contrast, \citet{Rost1971:Markoff} and \citet{Dinges1974:Stopping} showed how to solve the SEP using randomized stopping times, so that if $\mathbb{F}$ is rich enough then $\mathrm{SEP}(\mathbb{F},\mu)\neq \emptyset$ for all $\mu\in {\cal M}_0(\mathbb{Z})$. We note also that the introduction of external randomness is natural from the point of view of applications. In the casino model of \citet{Barberis2012:Casino} mentioned above, \citet{HeHuOblojZhou2016:Randomization} and \citet{HendersonHobsonTse14} showed that the gambler is strictly better off when she uses extra randomness, such as a coin toss, in her stopping strategy instead of relying on $\tau\in {\cal T}(\mathbb{F}^S)$. Similarly, randomized stopping times are useful in solving optimal stopping problems, see e.g.\ \citet{BelomestnyKraetschmer2014:OptimalStopping} and \citet{HeEtal2014:StoppingStrategies}.
Our contribution is to give two new constructions of $\tau\in \mathrm{SEP}(\mathbb{F},\mu)$ with certain desirable optimality properties. Our first construction, in Section \ref{se:RanodmizedPI} below, has minimal (Markovian) dependence property: a decision to stop only depends on the current state of $S$ and an independent coin toss. The coins are suitably biased, with state dependent probabilities, which can be readily computed using an explicit algorithm we provide. Such a strategy is easy to compute and easy to implement, which is important for applications, e.g.\ to justify its use by economic agents, see \citet{HeHuOblojZhou2016:Randomization}. We also link our construction to the embedding in \citet{CoxHobsonObloj:11} and show that the former corresponds to a suitable projection of the latter. Our second construction, presented in Section \ref{se:AYlikeStoppingTimes}, is a discrete--time analogue of the \citet{AzemaYor1979} embedding. It is also explicit and its first appeal lies in the fact that it only stops when the loss, relative to the last maximum, gets too large. It also has an inherent probabilistic interest: it maximizes $\mathbb{P}(\max_{t\leq \tau}S_t\geq x)$, simultaneously for all $x\in \mathbb{R}$, among all $\tau\in {\cal T}(\mathbb{F})$, attaining the classical \citet{BlackwellDubins1963:AConverse} bound for $x\in \mathbb{N}$. We conclude the paper with explicit examples worked out in Section \ref{se:Example}.
\section{Randomized Markovian Solution to the SEP}\label{se:RanodmizedPI}
To formalize our first construction, consider $\mathbf{r}=(r^x:x\in \mathbb{Z})\in [0,1]^{\mathbb{Z}}$ and a family of Bernoulli random variables $\boldsymbol{\xi}=\{\xi_{t}^x\}_{t\ge 0,x\in\mathbb{Z}}$ with $\mathbb{P}(\xi_{t}^x=0)=r^{x}=1-\mathbb{P}(\xi_{t}^x=1)$, which are independent of each other and of $S$. Each $\xi_{t}^x$ stands for the outcome of a coin toss at time $t$ when $S_t=x$ with $1$ standing for heads and $0$ standing for tails. To such $\boldsymbol{\xi}$ we associate \begin{align}\label{eq:StoppingTimeCoinToss} \tau(\mathbf{r}):=\inf\{t\ge 0: \xi_{t}^{S_t}=0\}, \end{align} which is a stopping time relative to $\mathbb{F}^{S,\boldsymbol{\xi}}=({\cal F}^{S,\boldsymbol{\xi}}_t:t\geq 0)$ where ${\cal F}^{S,\boldsymbol{\xi}}_t:=\sigma(S_s,\xi_{s}^{S_s},s\leq t)$. The decision to stop at time $t$ only depends on the state $S_t$ and an independent coin toss. Accordingly, we refer to $\tau(\mathbf{r})$ as a {\em randomized Markovian stopping time}. It is clear however that the distribution of $S_{\tau(\mathbf{r})}$ is a function of $\mathbf{r}$ and does not depend on the particular choice of the random variable $\boldsymbol{\xi}$. The following shows that such stopping times allow us to solve \eqref{eq:SEPdef} for all $\mu\in {\cal M}_0(\mathbb{Z})$. \begin{theorem}\label{thm:reformulate}
For any $\mu \in \mathcal{M}_0(\mathbb{Z})$, there exists $\mathbf{r}_\mu\in [0,1]^\mathbb{Z}$ such that $\tau(\mathbf{r}_\mu)$ solves $\mathrm{SEP}(\mathbb{F}^{S,\boldsymbol{\xi}},\mu)$. \end{theorem} \subsection{Proof of Theorem \ref{thm:reformulate}}\label{sec:proofMarkovian} We establish the theorem by embedding our setup into a Brownian setting and then using Theorem 5.1 in \citet{CoxHobsonObloj:11}. We reserve $t$ for the discrete time parameter and use $u\in [0,\infty)$ for the continuous time parameter. We assume our probability space $(\Omega,{\cal F},\mathbb{P})$ supports a standard Brownian motion $B=(B_u)_{u\geq 0}$. We let $\mathbb{G}=({\cal G}_u)_{u\geq 0}$ denote its natural filtration taken right-continuous and complete and $L_u^y$ denote its local time at time $u\geq 0$ and level $y\in \mathbb{R}$. Recall that $\mu\in \mathcal{M}_0(\mathbb{Z})$ is fixed. \citet{CoxHobsonObloj:11} show that there exists a measure $m$ on $\mathbb{R}$ such that, for a Poisson random measure $\Delta^m$ with intensity $du \times m(dx)$, independent of $B$, $$T^m=\inf\{u\geq 0: \Delta^m(R_u)\geq 1 \},\quad \text{where }R_u=\{(s,y): L_u^y>s\},$$ is minimal and embeds $\mu$, i.e., $B_{T^m}\sim \mu$ and $(B_{T^m\land u}: u \geq 0)$ is uniformly integrable, the latter being equivalent to minimality of $T^m$, see \citet[Sec.~8]{Obloj2004:SkorokhodEmbedding}. Moreover, by the construction in \citet{CoxHobsonObloj:11}, $m(I^c)=+\infty$ for any interval $I$ that contains the support of $\mu$. For $\mu \in \mathcal{M}_0(\mathbb{Z})$ it follows that $m$ is a measure on $\mathbb{Z}$: $m(dy)=\sum_{x\in \mathbb{Z}}m^x\delta_{x}(dy)$ ; otherwise, $\Delta^m(R_u)$ can possibly hit $1$ when the local time $L_u^y$ accumulates at certain non-integer level $y$, in which case $B_{T^m}$ takes value $y$. In addition, if $\mu$ has support bounded from above, i.e., $\mu([\bar{x},\infty))=\mu(\{\bar{x}\})>0$ for a certain $\bar x\in\mathbb{Z}$, then $m^{\bar{x}}=\infty$ and $T^m\leq \inf\{u\geq 0: B_u \geq \bar{x}\}$, with analogous expressions when the support is bounded from below. We note that $N^x_u= \Delta^m(\{(s,x): s\leq u\})$ is a Poisson process with parameter $m^x$, $x\in \mathbb{Z}$ and its first arrival time $\rho^x=\inf\{u\ge 0: N^x_u\geq 1\}$ is exponentially distributed with parameter $m^x$. We can now rewrite the embedding time as $$T^m=\inf\{u\geq 0: L_u^x\geq \rho^x \text{ for some }x\in \mathbb{Z}\}.$$ Consider now consecutive hitting times of integers $$ \sigma_0=0,\quad \sigma_t=\inf\left\{u\geq \sigma_{t-1}: B_u\in \mathbb{Z}\setminus\{B_{\sigma_{t-1}}\}\right\},\quad t=1,2,\ldots,$$ and note that $X_t:= B_{\sigma_t}$ is a simple symmetric random walk. Recall that the measure $dL^x_u$ is supported on $\{u: B_u=x\}$. This implies a particularly simple structure of the stopping time $T^m$. First, note that $T^m\neq \sigma_t$ unless $m^{X_t}=\infty$. Then, let us describe $T^m$ conditionally on $T^m>\sigma_t$. In particular, $\rho^{X_t}>L^{X_t}_{\sigma_t}$ and $\rho^{X_t}-L^{X_t}_{\sigma_t}$ is again exponential with parameter $m^{X_t}$. $T^m$ happens in $(\sigma_t,\sigma_{t+1})$ if and only if the local time accumulated at level $X_t$ is greater than this exponential variable. Clearly this event depends on the past only through the value of $X_t$. More formally, considering the new Brownian motion $B^t_u=B_{u+\sigma_t}-B_{\sigma_t}$, $u\geq 0$, and denoting its local time in zero as $L^{(0,t)}_u$ we see that $T^m\in (\sigma_t,\sigma_{t+1})$ if and only if $\{L^{(0,t)}_{\sigma_{t+1}}\geq \rho^{X_t}-L^{X_t}_{\sigma_t}\}$ which, conditionally on $X_t=x$, is independent of ${\cal F}_{\sigma_t}$ and has probability which only depends on $x$. Further, in this case $B_{T^m}=X_t$. If we let $\tau_{B,X}=t$ on $\{\sigma_t\leq T^m<\sigma_{t+1}\}$ then it follows that $\tau_{B,X}$ is a stopping time relative to the natural filtration of $X$ enlarged with a suitable family of independent random variables, it has the precise structure of $\tau(\mathbf{r})$ in \eqref{eq:StoppingTimeCoinToss}, embeds $\mu$ and is UI. More precisely, using the space homogeneity of Brownian motion, we can define the probabilities using just the local time in zero, and it follows that if we take $$r^x=\mathbb{P}(L^0_{\sigma_1}>\rho^x)$$ then $\tau(\mathbf{r})\in \mathrm{SEP}(\mathbb{F}^{S,\boldsymbol{\xi}},\mu)$ as required.
\begin{remark}\label{rk:maximalr}
We note that by following the methodology in \citet{CoxHobsonObloj:11} one could write a direct proof of Theorem \ref{thm:reformulate}, albeit longer and more involved than the one above. In particular, it is insightful to point out that if $\mu$ has a finite support -- $\mu([\underline{x},\bar{x}])=1$ with $\mu(\{\underline{x}\}) > 0$, $\mu(\{\bar{x}\}) > 0$ for some $\underline{x}<0<\bar{x}$ -- then $\mathbf{r}$ as constructed above
can be shown to be the maximal element in the set
\begin{equation*} \mathcal{R}_{\mu} = \{\mathbf{r} \in [0,1]^{\mathbb{Z}}: r^i=1 \text{ if }i\notin (\underline{x},\bar{x}) \text{ and } \mathbb{P}(S_{\tau(\mathbf{r})} = i) \leq \mu(\{i\}) \text{ if } i \in (\underline{x},\bar{x}) \}. \end{equation*} \end{remark}
\subsection{Algorithmic computation of the stopping probabilities $\mathbf{r}_\mu$} \label{subse:ConstructionRandPI} In this section, we work under the assumptions of Theorem \ref{thm:reformulate} and provide an algorithmic method for computing $\{r^i\}_{i\in \mathbb{Z}}$ obtained therein. We let $\tau=\tau(\mathbf{r}_\mu)$ and $g^i$ denote the expected number of visits of $S$ to state $i$ strictly before $\tau$, i.e.: \begin{align}\label{eq:ProbReaching} g^i:=\mathbb{E}\left[\sum_{t=0}^{\tau-1}\mathbf{1}_{S_t=i}\right]= \sum_{t=0}^\infty\mathbb{P}\left(\tau>t,S_{t} = i\right). \end{align} Denote $a^+:= \max\{a,0\}$. It is a matter of straightforward verification to check that for any $i\leq 0\leq j$ the processes $$(i-S_t)^+-\frac12 \sum_{u=0}^{t-1}\mathbf{1}_{S_u=i},\quad (S_t-j)^+-\frac12 \sum_{u=0}^{t-1}\mathbf{1}_{S_u=j},\quad t\geq 0,$$ are martingales. To compute $g^i$, we then apply the optional sampling theorem at $\tau\land t$ and let $t\to \infty$. Using the fact that $\{S_{\tau\land t}\}$ is a UI family of random variables together with monotone convergence theorem, we deduce that \begin{align} g^i &= 2 \mathbb{E}[(S_\tau-i)^+] = 2\sum_{k=i}^{+\infty} \mathbb{P}(S_\tau \ge k+1) ,\quad i=0,1,2,\dots,\label{eq:GiPositive}\\ g^i &= 2 \mathbb{E}[(i-S_\tau)^+] = 2\sum_{k=-\infty}^i \mathbb{P}(S_\tau \le k-1),\quad i=0,-1,-2,\dots\label{eq:GiNegative} \end{align} Writing $p^i:= \mu(\{i\})=\mathbb{P}(S_{\tau}=i)$, we now compute \begin{align*} \begin{split} p^i &= \mathbb{P}\left(S_{\tau} = i\right) =\sum_{t=0}^{\infty} \mathbb{P}\left(\tau = t, S_{t} = i\right)\\ & = \sum_{t=0}^{\infty} \mathbb{P}\left(\xi_{u,S_u} = 1, u=0,1,\dots, t-1,\xi_{t,S_t}=0, S_{t} = i\right)\textrm{, and by conditioning}\\
& = \sum_{t=0}^{\infty} \mathbb{P}\left(\xi_{u,S_u} = 1, u=0,1,\dots, t-1,S_{t} = i\right)r^i \\ &= r^i \sum_{t=0}^{\infty} \mathbb{P}\left(\tau\ge t,S_{t} = i\right) = r^i(g^i+p^i). \end{split} \end{align*} Therefore, if $p^i+g^i>0$, we must have $r^{i} = \frac{p^{i}}{p^{i} + g^{i}}$. If $p^i>0$ and $g^i=0$, which is the case if and only if $i$ is on the boundaries of the support, we have $r^i=1$, i.e., we have to stop instantly. If $p^i+g^i=0$, then $p^i=g^i=0$ and this can only happen for states outside the boundaries of the support. In this case, we set $r^i=1$, which is consistent with the characterisation in Remark \ref{rk:maximalr}. Thus $\mathbf{r}_\mu=(r^i)$ in Theorem \ref{thm:reformulate} is given by \begin{align}\label{eq:Computeri} r^{i} = \frac{p^{i}}{p^{i} + g^{i}}\mathbf 1_{\{p^{i} + g^{i}>0\}} +\mathbf 1_{\{p^{i} + g^{i}=0\}},\quad i\in \mathbb{Z}, \end{align} where $p^i=\mu(\{i\})$ and $g^i$ can be calculated from \eqref{eq:GiPositive} and \eqref{eq:GiNegative}. This can be seen as the equation on the bottom of page S22 in \cite{CoxHobsonObloj:11} specialised to our setup. While that equation is only argued heuristically therein, in our setup we can give it a rigorous meaning and proof.
\section{Randomized Az\'ema-Yor solution to the SEP}\label{se:AYlikeStoppingTimes} Let us recall the celebrated Az\'ema-Yor solution to the SEP for a standard Brownian motion $(B_u:u\geq 0)$. As above, we reserve $t$ for the discrete time parameter and use $u\in [0,\infty)$ for the continuous time parameter. To a centered probability measure $\mu$ on $\mathbb{R}$ we associate its barycenter function \begin{align}\label{eq:barycentrefunction} \psi_\mu(x):=\frac{1}{\bar \mu(x)}\int_{[x,+\infty)}y\mu(dy),\quad x\in \mathbb{R}, \end{align} where $\bar \mu(x):=\mu\big([x,+\infty)\big)$ and $\psi_\mu(x):=x$ for $x$ such that $\mu([x,+\infty))=0$. We let $b_\mu$ denote the right-continuous inverse of $\psi_\mu$; i.e., $b_\mu(y):=\sup\{x:\psi_{\mu}(x)\le y\}$, $y\ge 0$. Then \begin{align}\label{eq:AYcontdef} T^{AY}_\mu:=\inf\{u\ge 0: B_u\leq b_\mu(B^*_u)\},\quad \textrm{ where } B^*_u:= \sup_{s\leq u} B_s, \end{align} satisfies $B_{T^{AY}_\mu}\sim \mu$ and $(B_{u\wedge T^{AY}_\mu}:u \geq 0)$ is UI. Furthermore, for any other such solution $\widetilde T$ to the SEP, and any $x\geq 0$, we have $\mathbb{P}(B^*_{\widetilde T}\geq x)\leq \mathbb{P}(B^*_{T^{AY}_\mu}\geq x)=\bar\mu^{HL}(x)$; hence $T^{AY}_\mu$ maximizes the distribution of the maximum in the stochastic order. Here $\bar \mu^{HL}$ is the Hardy-Littlewood transform of $\mu$ and the bound $\bar\mu^{HL}(x)$ is due to \cite{BlackwellDubins1963:AConverse}, and has been extensively studied since; see e.g. \cite{CarraroElKarouiObloj:09} for details.
A direct transcription of the Az\'ema-Yor embedding to the context of a simple symmetric random walk only works for measures $\mu\in{\cal M}_0(\mathbb{Z})$ for which $\psi_\mu(x)\in \mathbb{N}$ for all $x\in\mathbb{R}$, which is a restrictive condition; see \citet{CoxObloj2008:ClassesofMeasures} for details. For a general $\mu\in {\cal M}_0(\mathbb{Z})$ we should seek instead to emulate the structure of the stopping time: the process stops when its drawdown hits a certain level, i.e., when $B_u^*-B_u\ge B_u^*-b_\mu(B_u^*)$, or when it reaches a new maximum at which time a new maximum drawdown level is set, whichever comes first. Only in general, we expect to use an independent randomization when deciding to stop or to continue an excursion away from the maximum. Surprisingly, this can be done explicitly and the resulting stopping time maximizes stochastically the distribution of the running maximum of the stopped random walk among all solutions in \eqref{eq:SEPdef}.
Before we state the theorem we need to introduce some notation. Let $\mu \in \mathcal{M}_0(\mathbb{Z})$ and denote $\bar x:=\sup\{n\in\mathbb Z:\mu(\{n\})>0\}$ and $\underline{x}:=\inf\{n\in\mathbb{Z}:\mu(\{n\})>0\}$ the bounds of the support of $\mu$. The barycentre function $\psi_\mu$ is piece-wise constant on $(-\infty,\bar x]$ with jumps in atoms of $\mu$, is non-decreasing, left-continuous with $\psi_\mu(x)>x$ for $x<\bar x$, $\psi_\mu(x)=x$ for $x\in[\bar x,+\infty)$, and $\psi_\mu(-\infty)=0$. The inverse function $b_\mu$ is right-continuous, non-decreasing with $b_\mu(0)=\underline{x}$, and is integer valued on $(0,\bar x)$. Further, $b_\mu(y)<y$ for $y<\bar x$ and $b_\mu(y)=y$ for $y\in[\bar x,+\infty)$; in particular $b_\mu(n)\leq n-1$ for $n\in \mathbb{Z}\cap [0,\overline{x})$. Moreover, for any $n\in \mathbb{Z}\cap [\underline{x},\overline{x}]$, $\mu(\{n\})>0$ if and only if $n$ is in the range of $b_\mu$; consequently, $\mu(\{y\})=0$ for any $y$ that is not in the range of $b_\mu$.
For each $1\le n<\bar x$, $\{b_\mu(y):y\in[n,n+1]\} \cap (-\infty, n]$ is a nonempty set of finitely many integers which we rank and denote as $x^n_1>x^n_2>\dots>x^n_{m_n+1}$. Similarly, we rank $\{b_\mu(y):y\in[0,1]\}$, which is a nonempty set of finitely many integers $x^0_1>x^0_2>\dots > x^0_{m_0+1}$ if $b_\mu(0)>-\infty$ and a set of countably many integers $x^0_1>x^0_2>\dots$ otherwise. Then, for each $1\le n< \bar x$, $x^{n-1}_{1} = x^{n}_{m_{n}+1}=b_\mu(n)\le n-1$. Note that we may have $m_n=0$, in which case $x^{n-1}_1=x^n_{m_n+1}=x^n_1$.
For each $1\le n<\bar x$ and for $n=0$ when $\underline{x}>-\infty$, define \begin{align} \begin{split}\label{eq:DefineRho} \Gamma^n:&=\bar \mu\left(x^{n}_{m_n+1}\right)\frac{\psi_{\mu}(x^{n}_{m_n+1})-x^{n}_{m_n+1}}{n-x^{n}_{m_n+1}},\\ g^n_k:&=\frac{n+1-x^{n}_{k}}{\Gamma^n}\mu(\{x^n_k\}),\quad k=2,3,\dots, m_n,\\ g^n_{m_n+1}:&=\frac{n+1-x^{n}_{m_n+1}}{\Gamma^n}\left[\mu(\{x^{n}_{m_n+1}\}) - \bar \mu\left(x^{n}_{m_n+1}\right)\frac{n-\psi_\mu(x^{n}_{m_n+1})}{n-x^{n}_{m_n+1}}\right],\\ f^n_{m_n+1}:&=0, \quad f^n_{k} = f^n_{k+1}+g^n_{k+1},\quad k=1,2,\dots, m_n,\quad f^n_0:=1. \end{split} \end{align} When $\underline{x}=-\infty$, define \begin{align}\label{eq:DefineRho0} g^0_k = (1-x^{0}_{k})\mu(\{x^0_k\}),\; k\ge 2,\quad f^0_k = \sum_{i=k+1}^\infty g^0_i,\; k\ge 1,\quad f^0_0:=1. \end{align} Then, as we will see in the proof of Theorem \ref{th:SkorokhodembeddingAYstoppingtime}, for each $1\le n<\bar x$ and for $n=0$ when $\underline{x}>-\infty$, $\rho^n_k:=1-(f^n_k/f^n_{k-1})$ is in $[0,1)$ for $k=1,\dots, m_n$ and we let $\rho^n_{m_n+1}:=1$; when $\underline{x}=-\infty$, $\rho^0_k:=1-(f^0_k/f^0_{k-1})$ is in $(0,1)$ for each $k\ge 1$ and we set $m_0+1:=+\infty$. Let $\boldsymbol{\eta}=(\eta^n_k: 0 \leq n<\bar x, k\in \mathbb{Z}\cap [1, m_n+1])$ be a family of mutually independent Bernoulli random variables, independent of $S$, with $\mathbb{P}(\eta^n_k=0)=\rho^n_k=1-\mathbb{P}(\eta^n_k=1)$. We let $S_t^*:=\sup_{r\leq t}S_r$ and define the enlarged filtration $\mathbb{F}^{S,\boldsymbol{\eta}}$ via ${\cal F}_t^{S,\boldsymbol{\eta}}:=\sigma(S_u,\eta^{S^*_u}_k:k\in \mathbb{Z}\cap [1, m_{S^*_u}+1], u\leq t)$.
We are now ready to define our Az\'ema--Yor embedding for $S$. It is an $\mathbb{F}^{S,\boldsymbol{\eta}}$--stopping time which, in analogy to \eqref{eq:AYcontdef}, stops when an excursion away from the maximum breaches a given level. However, since the maximum only takes integer values, we emulate the behaviour of $B_{u\land T^{AY}_\mu}$ between hitting times of two consecutive integers in an averaged manner, using independent randomization. Specifically, if we let $\mathcal{H}_n:=\inf\{t\geq 0: S_t=n\}$ then after $\mathcal{H}_n$ but before $\mathcal{H}_{n+1}$ we may stop at each of $x^n_1>x^n_2>\ldots>x^n_{m_n}$ depending on the independent coin tosses $(\eta^n_k)$, while we stop a.s.\ if we hit $x^n_{m_n+1}$. If we first hit $n+1$ then a new set of stopping levels is set. Finally, we stop upon hitting $\bar x$. \begin{theorem}\label{th:SkorokhodembeddingAYstoppingtime}
Let $\mu \in \mathcal{M}_0(\mathbb{Z})$ and $\tau^{AY}_\mu$ be given by
\begin{equation*}
\tau^{AY}_\mu:= \inf\left\{t\geq 0: S_t\leq x^{S^*_t}_k \textrm{ and }\eta^{S^*_t}_k=0 \textrm{ for some } k\in \mathbb{Z}\cap [1, m_n+1]\right\}\land \mathcal{H}_{\bar x}.
\end{equation*}
Then $\tau^{AY}_\mu\in \mathrm{SEP}(\mathbb{F}^{S,\boldsymbol{\eta}},\mu)$ and
for any $\sigma\in \mathrm{SEP}(\mathbb{F},\mu)$
\begin{equation}\label{eq:AYoptimality}
\mathbb{P}(S^*_\sigma \geq n) \leq \mathbb{P}(S^*_{\tau^{AY}_\mu} \geq n) = \bar \mu(b_\mu(n)) \frac{\psi_\mu(b_\mu(n)) - b_\mu(n)}{n-b_\mu(n)}
,\quad n\in \mathbb{N},
\end{equation}
with the convention $\frac{0}{0}=1$. \end{theorem} The optimality property in \eqref{eq:AYoptimality} is analogous to the optimality of $T^{AY}_\mu$ in a Brownian setup, as described above and the bound in \eqref{eq:AYoptimality} coincides with $\bar \mu^{HL}(n)$. \begin{remark} Note that by considering our solution for $(-S_t)_{t\geq 0}$ we obtain a reversed Az\'ema--Yor solution which stops when the maximum drawup since the time of the historical minimum hits certain levels. It follows from \eqref{eq:AYoptimality} that such embedding maximizes the distribution of the running minimum in the stochastic order. \end{remark} \begin{remark}\label{rk:nonuniqueAY}
We do not claim that $\tau^{AY}_\mu$ is the only embedding which achieves the upper bound in \eqref{eq:AYoptimality}. Our construction inherits the main structural property of the Az\'ema-Yor embedding for $B$: when a new maximum is hit, a lower threshold is set (which may depend on an independent randomisation) and we either stop when this threshold is hit or else a new maximum is set. This, in effect, averages out the behaviour of $B_{u\land T^{AY}_\mu}$ between hitting times of two consecutive integers. Instead, we could consider averaging out only the behaviour of $B_{u\land T^{AY}_\mu}$ between the first hitting time of an integer $n$ and the minimum between the hitting time of $n+1$ and the return of the embedded walk to $n$. The resulting embedding would have the following structure: each time the random walk is at its maximum, $S_t=S^*_t=n$ a (randomized) threshold is set and the walk stops if it hits this threshold. If not, a new instance of the threshold level is drawn when the walk returns to $n$. This is iterated until the walk is stopped at the current threshold level or when $n+1$ is hit which changes the distribution of the threshold level. This embedding appears less natural for us, having the casino gambling motivation in mind, however it should share the optimality property of $\tau^{AY}_\mu$ in \eqref{eq:AYoptimality}. \end{remark}
\subsection{Proof of Theorem \ref{th:SkorokhodembeddingAYstoppingtime}}
It is straightforward to verify that all the conclusions of the theorem hold for the case in which $\mu(\{0\})=1$, so we assume in the following that $\mu(\{0\})<1$ and thus $\underline{x}<0<\bar x$.
Throughout the proof we let $\tau=\tau^{AY}_\mu$ and recall that $\mathcal{H}_j = \inf\{t\ge 0: S_t = j\}$.
We first prove constructively that $\rho^n_i$'s are well defined and the constructed stopping time $\tau$ embeds $\mu$ into the random walk. Specifically, we argue by induction that for any $0\le j<\bar x$ the stopping time $\tau$ satisfies
\begin{align}\label{eq:AYlikestoppingtime}
\begin{cases}
\mathbb{P}(S_{\tau} = y \text{ and } \tau < \mathcal{H}_{j+1}) = \mu(\{y\}), &\text{ for } y < x^{j}_1\\
\mathbb{P}(S_{\tau} = y \text{ and } \tau < \mathcal{H}_{j+1}) = \bar {\mu}(x^j_1) \frac{j+1-\psi_{\mu}(x^{j}_1)}{j+1-x^{j}_1}, &\text{ for } y = x^j_1\\
\mathbb{P}(S_{\tau} = y \text{ and } \tau < \mathcal{H}_{j+1}) = 0, &\text{ for } y > x^j_1.
\end{cases}
\end{align} For clarity of the presentation, somewhat lengthy and technical, proof of the above equalities is relegated to Section \ref{sec:proof_keyrep} below.
Next, we show that $S_\tau\sim\mu$. If $\bar x=+\infty$, by the construction of $\tau$, we have
\begin{align*}
\mathbb{P}(\tau=+\infty)\le \lim_{n\rightarrow+\infty}\mathbb{P}(\tau\ge \mathcal{H}_{n}) = \lim_{n\rightarrow+\infty}{\bar {\mu}(x^{n}_{m_n+1})}\frac{\psi_{\mu}(x^{n}_{m_n+1})-x^{n}_{m_n+1}}{n-x^{n}_{m_n+1}}=0,
\end{align*}
since $\lim_{n\rightarrow+\infty}x^{n}_{m_n+1} = \lim_{n\rightarrow+\infty}b_\mu(n)=+\infty$, $\lim_{x\rightarrow +\infty}\bar \mu(x)=0$, and $\psi_\mu(x^{n}_{m_n+1})\in [x^{n}_{m_n+1},n)$. Consequently, $\tau<+\infty$ a.s. and for any $y\in\mathbb Z$, $\mathbb{P}(S_{\tau} = y) =\lim_{n\rightarrow+\infty}\mathbb{P}(S_{\tau} = y , \tau < \mathcal{H}_{n}) = \mu(\{y\})$. If $\bar x<+\infty$, by definition, $\tau\le \mathcal{H}_{\bar x}<\infty$ a.s. Moreover, because \eqref{eq:AYlikestoppingtime} is true for $j=\bar x-1$, we have $\mathbb{P}(S_\tau = y) = \mathbb{P}(S_\tau = y,\tau < \mathcal{H}_{\bar x}) = \mu(\{y\})$
for any $y<x^{\bar x-1}_1$. From the definition of $x^{\bar x-1}_1$, we conclude that $b_\mu$ does not take any values in $(x^{\bar x-1}_1,\bar x)$, so $\mu(\{n\})=0$ for any integer $n$ in this interval. Since $S_\tau\leq \bar x$, it remains to argue $\mathbb{P}(S_\tau = x^{\bar x-1}_1) = \mu(\{x^{\bar x-1}_1\})$. This follows from \eqref{eq:AYlikestoppingtime} with $j=\bar x-1$:
\begin{align*}
\mathbb{P}(&S_\tau = x^{\bar x-1}_1) = \mathbb{P}(S_\tau = x^{\bar x-1}_1,\tau < \mathcal{H}_{\bar x}) \\
&= \bar \mu(x^{\bar x-1}_1)\frac{\bar x-\psi_\mu(x^{\bar x-1}_1)}{\bar x-x^{\bar x-1}_1}=\frac{\bar x \bar \mu(x^{\bar x-1}_1) - \sum_{n\ge x^{\bar x-1}_1}n\mu(\{n\})}{\bar x-x^{\bar x-1}_1}\\
&= \frac{\bar x \left( \mu(\{x^{\bar x-1}_1\})+\mu(\{\bar x\})\right) - \left(x^{\bar x-1}_1\mu(\{x^{\bar x-1}_1\})+\bar x \mu(\{\bar x\})\right)}{\bar x-x^{\bar x-1}_1}=\mu(\{x^{\bar x-1}_1\}),
\end{align*}
where the fourth equality follows because $\mu(\{n\})=0$ for any $n> x^{\bar x-1}_1$ and $n\neq \bar x$.
To conclude that $\tau\in \mathrm{SEP}(\mathbb{F}^{S,\boldsymbol{\eta}},\mu)$ it remains to argue that $\tau$ is UI, which is equivalent to $\lim_{K\rightarrow +\infty} K\mathbb{P}(\sup_{t \geq 0} |S_{\tau\wedge t}|\ge K)=0$, see e.g. \citet{AzemaGundyYor:79}. We first show that $$\lim_{K\rightarrow +\infty} K\mathbb{P}(\sup_{t \geq 0} S_{\tau\wedge t}\ge K)=\lim_{K\rightarrow +\infty} K \mathbb{P}(S^*_\tau\ge K)=0.$$ Because $\{S_{\tau\wedge t}\}$ never visits states outside any interval that contains the support of $\mu$, we only need to prove this when $\bar x=+\infty$, and hence $\lim_{y\rightarrow +\infty}b_\mu(y)=+\infty$, and taking $K\in \mathbb{N}$.
By \eqref{eq:ProbHittingn} and the construction of $\tau$, we see that for $n\in\mathbb{N}$, $ n<\bar x$, we have
\begin{align}
&\mathbb{P}(S^*_\tau\ge n)= \mathbb{P} (\tau \geq \mathcal{H}_n) = \bar \mu(x^n_{m_n+1}) \frac{\psi_\mu(x^n_{m_n+1}) - x^n_{m_n+1}}{n-x^n_{m_n+1}} \notag \\
& = \bar \mu(b_\mu(n)) \frac{\psi_\mu(b_\mu(n)) - b_\mu(n)}{n-b_\mu(n)} = \bar \mu(b_\mu(n)+) \frac{\psi_\mu(b_\mu(n)+) - b_\mu(n)}{n-b_\mu(n)},\label{eq:AYdistrmax}
\end{align} where the last equality is the case because \begin{align*}
&\bar \mu(b_\mu(n)) \psi_\mu(b_\mu(n)) - \bar \mu(b_\mu(n)) b_\mu(n)\\
&= \int_{[b_\mu(n),+\infty)}y\mu(dy) -\mu(\{b_\mu(n)\})b_\mu(n)- \bar \mu(b_\mu(n)+) b_\mu(n)\\
& = \int_{(b_\mu(n),+\infty)}y\mu(dy)- \bar \mu(b_\mu(n)+) b_\mu(n) = \bar \mu(b_\mu(n)+)\big(\psi_\mu(b_\mu(n)+) - b_\mu(n)\big); \end{align*} consequently,
\begin{align*}
n \mathbb{P}(S^*_\tau\ge n) & = \bar \mu(b_\mu(n)+) \frac{\psi_\mu(b_\mu(n)+) - b_\mu(n)}{n-b_\mu(n)}n \\
&= \bar \mu(b_\mu(n)+) \frac{\psi_\mu(b_\mu(n)+) - n}{n-b_\mu(n)}n + \bar \mu(b_\mu(n)+)n.
\end{align*}
For $a<c<d$ the function $y\to y(d-y)/(y-a)$ attains its maximum on $[c,d]$ in $y=c$. Taking
$a=b_\mu(n)<c=\psi_\mu(b_\mu(n))\leq y=n<d=\psi_\mu(b_\mu(n)+)$, we can bound the first term by
\begin{align*}
& \bar \mu(b_\mu(n)+) \frac{\psi_\mu(b_\mu(n)+) - n}{n-b_\mu(n)}n
\leq \bar \mu(b_\mu(n)+) \frac{\psi_\mu(b_\mu(n)+) - \psi_\mu(b_\mu(n))}{\psi_\mu(b_\mu(n))-b_\mu(n)}\psi_\mu(b_\mu(n))\\
& =\mu(\{b_\mu(n)\})\psi_\mu(b_\mu(n))
\leq \bar \mu(b_\mu(n))\psi_\mu(b_\mu(n)) =\sum_{y\ge b_\mu(n)}y\mu(\{y\}),
\end{align*}
which goes to zero with $n\to \infty$ since $\mu\in {\cal M}_0(\mathbb{Z})$. Similarly, $\psi_\mu(b_\mu(n)+)>n$ gives
$$\bar\mu(b_\mu(n+))n\leq \bar\mu(b_\mu(n+))\psi_\mu(b_\mu(n)+)=\sum_{y>b_\mu(n)}y\mu(\{y\})\stackrel{n\to\infty}{\longrightarrow}0$$
and we conclude that $n\mathbb{P}(S^*_\tau\ge n)\to 0$ as $n\to \infty$.
It remains to argue that
\begin{align*}
\lim_{n\rightarrow +\infty} n\mathbb{P}(\inf_{t \geq 0} S_{\tau \wedge t}\le -n)=0.
\end{align*}
This is trivial if $\underline{x}>-\infty$. Otherwise $b_\mu(0)=\underline{x}=-\infty$ and $x^0_i$'s are infinitely many. For $n\in\mathbb{N}$, by the construction of $\tau$, $\inf_{t \geq 0} S_{t \wedge \tau}\le- n$ implies that $S$ visits $-n$ before hitting 1 and $S$ is not stopped at any $x^0_i>-n$. Denote the $i_n:=\sup\{i\ge 1|x^0_i> -n\}$ and note that $i_n\to\infty$ as $n\to \infty$. By construction, the probability that $S$ does not stop at $x^0_i$ given that $S$ reaches $x^0_i$ is $f^0_i/f^0_{i-1}$, $i\ge 1$. On the other hand, the probability that $S$ visits $-n$ before hitting 1 is $1/(n+1)$. Therefore, the probability that $S$ visits $-n$ before hitting 1 and $S$ is not stopped at any $x^0_i>-n$ is
\begin{align*}
\frac{1}{n+1}\prod_{i=1}^{i_n}f^0_i/f^0_{i-1} = \frac{1}{n+1}f^0_{i_n}.
\end{align*}
From \eqref{eq:DefineRho0}, $\lim_{k\rightarrow +\infty}f^0_k=0$ since $\mu$ has a finite first moment. Therefore,
\begin{align*}
\limsup_{n\rightarrow +\infty}n \mathbb{P}(\inf_{t \geq 0} S_{\tau \wedge t}\le -n)\le \limsup_{n\rightarrow +\infty}n\frac{1}{n+1}f^0_{i_n}=0.
\end{align*}
The above concludes the proof of $\tau\in \mathrm{SEP}(\mathbb{F}^{S,\boldsymbol{\eta}},\mu)$.
While \eqref{eq:AYoptimality} may be deduced from known bounds, as explained before, we provide a quick self--contained proof. Fix any $n\geq 1$ and $\sigma\in \mathrm{SEP}(\mathbb{F},\mu)$. When $\bar x<+\infty$, by UI, we have $ \mathbb{P}(S^*_\tau\ge \bar x) =\mathbb{P}(S_\tau= \bar x)=\mu(\{\bar x\}) = P(S_\sigma=\bar x)= \mathbb{P}(S^*_\sigma\ge \bar x) $ and $ \mathbb{P}(S^*_\tau\ge n) = \mathbb{P}(S^*_\sigma\ge n)=0$ for any $n>\bar x$.
Next, by Doob's maximal equality and the UI of $\sigma$, $\mathbb{E}[(S_\sigma - n) {\bf 1}_{S^*_\sigma \geq n}] = 0$ and hence, for $k\leq n\in \mathbb{N}$,
\begin{align}
0 & = \mathbb{E}[(S_\sigma - n) {\bf 1}_{S^*_\sigma \geq n} ] = \mathbb{E}[(S_\sigma - n) {\bf 1}_{S_\sigma \geq k}] + \mathbb{E}[(S_\sigma - n) ({\bf 1}_{S^*_\sigma \geq n} - {\bf 1}_{S_\sigma \geq k} )] \notag\\
& \leq \mathbb{E}[(S_\sigma - n) {\bf 1}_{S_\sigma \geq k}] + (k - n) \mathbb{E}[ {\bf 1}_{S^*_\sigma \geq n} {\bf 1}_{S_\sigma < k}] - (k - n) \mathbb{E}[{\bf 1}_{S^*_\sigma < n} {\bf 1}_{S_\sigma \geq k} ] \notag\\
& = \mathbb{E}[(S_\sigma - n) {\bf 1}_{S_\sigma \geq k}] + (k-n) \mathbb{E}[{\bf 1}_{S^*_\sigma \geq n} - {\bf 1}_{S_\sigma \geq k} ] \notag \\
&= \mathbb{E}[(S_\sigma - k) {\bf 1}_{S_\sigma \geq k}]- (n-k) \mathbb{P}(S^*_\sigma\geq n).\label{eq:DoobMaximal}
\end{align}
Considering $\mathbb{N}\ni n<\overline{x}$ and $k=b_\mu(n)<n$, and recalling \eqref{eq:AYdistrmax}, we obtain
\begin{align*}
\mathbb{P}(S^*_\sigma\geq n) &\le \frac{\mathbb{E}[(S_\sigma -b_\mu(n)) {\bf 1}_{S_\sigma \geq b_\mu(n)}]}{n-b_\mu(n)}
= \frac{\bar \mu(b_\mu(n))\left(\psi_\mu(b_\mu(n)) - b_\mu(n)\right)}{n-b_\mu(n)}\\
&= \mathbb{P}(S^*_\tau\geq n).
\end{align*}
\subsection{Proof of \eqref{eq:AYlikestoppingtime}}\label{sec:proof_keyrep}
First, we show the inductive step: we prove that \eqref{eq:AYlikestoppingtime} holds for $j=n<\bar x$ given that it holds for $j=0,\dots, n-1$. Because \eqref{eq:AYlikestoppingtime} is true for $j=n-1$, we obtain
\begin{align}
\mathbb{P}(\tau \geq \mathcal{H}_n) &= 1 - \mathbb{P}(\tau < \mathcal{H}_n) = 1- \left(\sum_{y < x^{n-1}_1} \mu(\{y\})\right) - \bar {\mu}(x^{n-1}_1) \frac{n-\psi_{\mu}(x^{n-1}_1)}{n-x^{n-1}_1} \notag \\
&=\bar {\mu}(x^{n-1}_1)- \bar {\mu}(x^{n-1}_1) \frac{n-\psi_{\mu}(x^{n-1}_1)}{n-x^{n-1}_1} = {\bar {\mu}(x^{n-1}_1)}\frac{\psi_{\mu}(x^{n-1}_1)-x^{n-1}_1}{n-x^{n-1}_1} \notag\\
&= {\bar {\mu}(x^{n}_{m_n+1})}\frac{\psi_{\mu}(x^{n}_{m_n+1})-x^{n}_{m_n+1}}{n-x^{n}_{m_n+1}} = \Gamma_n,\label{eq:ProbHittingn}
\end{align}
where $\Gamma_n$ is defined as in \eqref{eq:DefineRho}. Recalling that $x^{n}_{m_n+1}=b_\mu(n)\le n-1<\bar x$ and $\psi_\mu(x)<x$ for $x<\bar x$, we conclude that $\mathbb{P}(\tau \geq \mathcal{H}_n)>0$.
Consider first the case when $m_{n}=0$, so that $x^{n}_1 = x^{n}_{m_n+1}=x^{n-1}_1$
and $\tau$ stops if $x^n_1$ is hit between $\mathcal{H}_n$ and $\mathcal{H}_{n+1}$. Consequently,
\begin{align*}
&\mathbb{P}(S_{\tau} = x^{n}_1,\tau < \mathcal{H}_{n+1})
=\mathbb{P}(S_{\tau} = x^{n}_1 ,\tau < \mathcal{H}_n)+ \mathbb{P}(S_{\tau} = x^{n}_1, \tau < \mathcal{H}_{n+1} | \tau \geq \mathcal{H}_n) \cdot \mathbb{P}(\tau \geq \mathcal{H}_n)\\
&=\mathbb{P}(S_{\tau} = x^{n-1}_1, \tau < \mathcal{H}_n)+ \frac{1}{n+1-x^{n}_1} \cdot {\bar {\mu}(x^{n}_{m_n+1})}\frac{\psi_{\mu}(x^{n}_{m_n+1})-x^{n}_{m_n+1}}{n-x^{n}_{m_n+1}}\\
&=\bar {\mu}(x^{n-1}_1) \frac{n-\psi_{\mu}(x^{n-1}_1)}{n-x^{n-1}_1}+ \frac{1}{n+1-x^{n}_1} \cdot {\bar {\mu}(x^{n}_{m_n+1})}\frac{\psi_{\mu}(x^{n}_{m_n+1})-x^{n}_{m_n+1}}{n-x^{n}_{m_n+1}}\\
&= \bar {\mu}(x^n_1) \frac{n-\psi_{\mu}(x^n_1)}{n-x^n_1}+\frac{1}{n+1-x^{n}_1} \cdot \bar {\mu}(x^n_1) \frac{\psi_{\mu}(x^n_1)-x^n_1}{n-x^n_1}\\
&= \bar {\mu}(x^{n}_1) \frac{n+1-\psi_{\mu}(x^{n}_1)}{n+1-x^{n}_1},\quad \textrm{ and we conclude that \eqref{eq:AYlikestoppingtime} holds for $j=n$.}
\end{align*}
Next, we consider the case in which $m_n\ge 1$.
A direct calculation yields
\begin{align*}
&\mu(\{x^{n}_{m_n+1}\}) - \bar \mu(x^{n}_{m_n+1})\frac{n-\psi_\mu(x^{n}_{m_n+1})}{n-x^{n}_{m_n+1}}\\
&= \frac{1}{n-x^{n}_{m_n+1}}\left[\sum_{y>x^n_{m_n+1}} y\mu(\{y\}) - n\mu((x^n_{m_n+1},+\infty))\right] \\
&= \frac{\mu((x^n_{m_n+1},+\infty))}{n-x^{n}_{m_n+1}}\left[\psi_\mu(x^n_{m_n+1}+)-n\right]=\frac{\mu((x^n_{m_n+1},+\infty))}{n-x^{n}_{m_n+1}}\left[\psi_\mu(b_\mu(n)+)-n\right]> 0,
\end{align*}
where the inequality follows from $x^n_{m_n+1}=b_\mu(n) <n<\bar x$ and $\psi_\mu(b_\mu(y)+)>y$ for any $y<\bar x$. Consequently, $g_{m_n+1}>0$.
On the other hand, because $\mu(\{y\})=0$ for any $y$ that is not in the range of $b_\mu$, we conclude that
\begin{align*}
&\sum_{k=2}^{m_n+1}(n+1-x^{n}_{k})\mu(\{x^n_k\})=\sum_{x^n_{m_n+1}\le y<x^{n}_1}(n+1-y)\mu(\{y\})\\
&=(n+1)(\bar \mu(x^n_{m_n+1})-\bar \mu(x^n_{1})) - \left[\psi_\mu(x^n_{m_n+1})\bar \mu (x^n_{m_n+1}) - \psi_\mu(x^n_{1})\bar \mu (x^n_{1})\right]\\
&= (n+1-\psi_\mu(x^n_{m_n+1}))\bar \mu(x^n_{m_n+1}) - (n+1-\psi_\mu(x^n_{1}))\bar \mu(x^n_{1}).
\end{align*}
Consequently, we have
\begin{align*}
f_1^n=&\sum_{k=2}^{m_n+1}g_k\\
= &\frac{1}{\Gamma_n}\left[\sum_{k=2}^{m_n+1}(n+1-x^{n}_{k})\mu(\{x^n_k\}) - (n+1-x^{n}_{m_n+1}) \bar \mu(x^{n}_{m_n+1})\frac{n-\psi_\mu(x^{n}_{m_n+1})}{n-x^{n}_{m_n+1}} \right]\\
=&\frac{1}{\Gamma_n}\left[\bar\mu(x^n_{m_n+1})\frac{\psi_\mu(x^n_{m_n+1})-x^n_{m_n+1}}{n-x^n_{m_n+1}}-(n+1-\psi_\mu(x^n_{1}))\bar \mu(x^n_{1})\right]\\
=& 1-\frac{(n+1-\psi_\mu(x^n_{1}))\bar \mu(x^n_{1})}{\mathbb{P}(\tau\ge \mathcal{H}_n)}\le 1,
\end{align*}
where the last inequality holds because $\psi_\mu(x^n_{1}) = \psi_\mu\big(\min(b_\mu(n+1),n)\big)\le \psi_\mu\big(b_\mu(n+1)\big)\le n+1$.
It follows, since $g^n_k>0,k=2,\dots,m_n+1$, that $f_k$ is strictly decreasing in $k=1,2,\dots, m_n$ with $f^n_{m_n}>0$ and $f^n_1\le 1$. Consequently, $\rho^n_k$ is well defined and $\rho^n_k\in[0,1)$, $k=1,\dots, m_n$. Recall $\rho^n_{m_n+1} = 1 = 1-(f^n_{m_n+1}/f^n_{m_n})$. Set $x^n_0:=n$. Then, for each $k=1,\dots, m_n+1$,
\begin{align*}
&\mathbb{P}(S_{\tau} = x^n_k , \mathcal{H}_n \leq \tau < \mathcal{H}_{n+1})
= \mathbb{P}(S_{\tau} = x^n_k , \tau < \mathcal{H}_{n+1} | \tau \geq \mathcal{H}_n) \cdot \mathbb{P}(\tau \geq \mathcal{H}_n)\\
& =\left[\prod_{j=1}^{k-1} \left(\frac{n+1-x^{n}_{j-1}}{n+1-x^n_{j}}(1-\rho^n_{j})\right)\right]\frac{n+1-x^{n}_{k-1}}{n+1-x^n_{k}}\rho^n_{k}\mathbb{P}(\tau \geq \mathcal{H}_n)\\
& = \frac{1}{n+1-x^n_{k}}\left[\prod_{j=1}^{k-1} (1-\rho^n_{j})\right]\rho^n_{k}\mathbb{P}(\tau \geq \mathcal{H}_n)
= \frac{1}{n+1-x^n_{k}}(f^n_{k-1}-f^n_k)\mathbb{P}(\tau \geq \mathcal{H}_n).
\end{align*}
Therefore, recalling the definition of $f^n_k$ and $g^n_k$,
for $k=2,\dots, m_n$, we have $\mathbb{P}(S_{\tau} = x^n_k,\mathcal{H}_n \leq \tau < \mathcal{H}_{n+1}) = \mu(\{x^n_k\})$.
Further, $\mathbb{P}(S_{\tau} = x^n_{m_n+1},\mathcal{H}_n \leq \tau < \mathcal{H}_{n+1}) = \mu(\{x^n_{m_n+1}\}) - \bar \mu(x^{n}_{m_n+1})\frac{n-\psi_\mu(x^{n}_{m_n+1})}{n-x^{n}_{m_n+1}}$ and $\mathbb{P}(S_{\tau} = x^n_1,\mathcal{H}_n \leq \tau < \mathcal{H}_{n+1}) = \frac{n+1-\psi_{\mu}(x^{n}_{1})}{n+1-x^n_{1}}\bar \mu(x^n_1)$
and we verify that \eqref{eq:AYlikestoppingtime} holds for $j=n$.
We move on to showing the inductive base step: we prove \eqref{eq:AYlikestoppingtime} holds for $j=0$. When $\underline{x}>-\infty$, $x^0_i$'s are finitely many and the proof is exactly as above. When $\underline{x}=-\infty$, by definition, $g^0_k>0$ because $x^0_k\le x^0_1 \le 0$ and $\mu(\{x^0_k\})>0$. Recalling that $\mu(\{y\})=0$ for any $y$ that is not in the range of $b_\mu$, we have
\begin{align*}
\sum_{k=2}^\infty g^0_k &= \sum_{k=2}^\infty (1-x^{0}_{k})\mu(\{x^0_k\}) = \sum_{y< x^0_1}(1-y)\mu(\{y\}) = 1-\bar \mu(x^0_1) - \sum_{y< x^0_1}y\mu(\{y\})\\
& =1-\bar \mu(x^0_1) + \sum_{y\ge x^0_1}y\mu(\{y\}) = 1- \bar \mu(x^0_1) \left(1-\psi_\mu(x^0_1)\right).
\end{align*}
Because $x^0_1= \min(b_\mu(1),0)$ and $\psi_\mu(b_\mu(y))\le y$ for any $y$, we conclude that $\psi_\mu(x^0_1) \le \psi_\mu(b_\mu(1))\le 1$. In addition, $\bar \mu(x^0_1)<1$, showing that $\sum_{k=2}^\infty g^0_k<1$. Therefore, $f^0_k$'s are well defined, positive, and strictly decreasing in $k$, and $f^0_1< 1$. Therefore, $\rho_k^0\in (0,1)$ for each $k\ge 1$. Following the same arguments as previously, one concludes that \eqref{eq:AYlikestoppingtime} holds for $j=0$.
\section{Examples}\label{se:Example} We end this paper with an explicit computation of our two embeddings for two examples.
\subsection{Optimal Gambling Strategy} The first example is a measure $\mu$ arising naturally from the casino gambling model studied in \citet{HeEtal2014:StoppingStrategies}. Therein, a gambler whose preferences are represented by cumulative prospect theory \citep{TverskyKahneman1992:CPT} is repeatedly offered a fair bet in a casino and decides when to stop gambling and exit the casino. The optimal distribution of the gambler's gain and loss at the exit time is a certain $\mu\in {\cal M}_0(\mathbb{Z})$, which may be characterised explicitly, see \citet[Theorem 2]{HeEtal2014:StoppingStrategies}.
With a set of reasonable model parameters\footnote{Specifically: $\alpha_+ = 0.6$, $\delta_+ = 0.7$, $\alpha_- = 0.8$, $\delta_- = 0.7$, and $\lambda = 1.05$.}, we obtain
\begin{align}\label{eq:DistributionExample}
\mu(\{n\})=
\begin{cases}
0.4465\times ((n^{0.6}-(n-1)^{0.6})^{\frac{10}{3}}-((n+1)^{0.6}-n^{0.6})^{\frac{10}{3}}),& n\ge 2,\\
0.3297, & n=1, \\
0.6216, & n=-1,\\
0, & \text{otherwise}.
\end{cases}
\end{align}
We first exhibit the randomized Markovian stopping time $\tau(\mathbf{r})$ of Theorem \ref{thm:reformulate}. Using the algorithm given in Section \ref{subse:ConstructionRandPI} we compute $r^i$, the probabilities of a coin tossed at $S_t=i$ turning up tails, for all $i\in \mathbb{Z}$:
\begin{align*}
r^1 &= 0.4040,\; r^2 = 0.0600, \; r^3 = 0.0253, \; r^4 = 0.0140,\; r^5 = 0.0089,\\
r^6 &= 0.0061,\; r^7 = 0.0045,\; r^8 = 0.0034,\; r^9 = 0.0027,\; r^{10} = 0.0021,\dots
\end{align*}
Note that $\mu(\{0\})=0$ and $\mu(\{n\})=0,n\le -2$; so one does not stop at 0 and must stop upon reaching $-1$. The stopping time $\tau(\mathbf{r})$ is illustrated in the left pane of Figure \ref{fi:PIAY}: $S$ is represented by a recombining binomial tree. Black nodes stand for ``stop", white nodes stand for ``continue", and grey nodes stand for the cases in which a random coin is tossed and one stops if and only if the coin turns up tails. The probability that the random coin turns tails is shown on the top of each grey node.
Next we follow Theorem \ref{th:SkorokhodembeddingAYstoppingtime} to construct a randomized Az\'ema-Yor stopping time $\tau_\mu^{AY}$ embedding $\mu$. To this end, we compute $x^n_k$'s and $\rho^n_k$'s, which stand for the drawdown levels that are set after reaching maximum $n$ and the probabilities that the coins tossed at these levels turn up tails, respectively:
\begin{align*}
m_{0} &= 0,\; x^0_1 = -1,\quad m_1=1,\; x^1_1=1,\; \rho^1_1=0.2704,\; x^1_2=-1,\\
m_2 &= 0,\; x^2_1=1,\quad m_3=0,\; x^3_1=1,\quad m_4=0,\; x^4_1=1,\\
m_5&=1,\; x^5_1=2,\;\rho^5_1=0.0049,\; x^5_2=1,\quad m_6=0,\; x^6_1=1,\dots
\end{align*}
The stopping time is then illustrated in the right pane of Figure \ref{fi:PIAY}: $S$ is represented by a non-recombining binomial tree. Again, black nodes stand for ``stop", white nodes stand for ``continue", and grey nodes stand for the cases in which a random coin is tossed and one stops if and only if the coin turns up tails. The probability that the random coin turns up tails is shown on the top of each grey node.
By definition, $\tau(\mathbf{r})$ is Markovian: at each time time $t$, the decision to stop depends only on the current state and an independent coin toss. However, to implement the strategy, one needs to toss a coin most of the times. In contrast $\tau_\mu^{AY}$ requires less independent coin tossing: e.g.\ in the first five periods at most one such a coin toss, but it is path-dependent. For instance, consider $t=3$ and $S_t=1$. If one reaches this node along the path from (0,0), through (1,1) and (2,2), and to (3,1), then she stops. If one reaches this node along the path from (0,0), through (1,1) and (2,0), and to (3,1), then she continues. Therefore, compared to the randomized Markovian strategy, the randomized Az\'ema-Yor strategy may involve less independent coin tosses but is typically path-dependent\footnote{Indeed, \cite{HeHuOblojZhou2016:Randomization} showed that, theoretically, any path-dependent strategy is equivalent to a randomization of Markovian strategies.} as it considers relative loss when deciding if to stop or not.
\begin{figure}
\caption{Randomized Markovian stopping time $\tau(\mathbf{r})$ (left pane) and randomized Az\'ema-Yor stopping time $\tau_\mu^{AY}$ (right pane) embedding probability measure $\mu$ in \eqref{eq:DistributionExample} into the simple symmetric random walk $S$. Black nodes stand for ``stop", white nodes stand for ``continue", and grey nodes stand for the cases in which a random coin is tossed and one stops if and only if the coin turns up tails. Each node is marked on the right by a pair $(t,x)$ representing time $t$ and $S_t=x$. Each grey node is marked on the top by a number showing the probability that the random coin tossed at that node turns up tails.}
\label{fi:PIAY}
\end{figure}
\subsection{Mixed Geometric Measure} The second example is a mixed geometric measure $\mu$ on $\mathbb{Z}$ with \begin{align}\label{eq:DistributionExamplegeometric} \mu(\{n\})= \begin{cases} \gamma_+\left[ q_+(1-q_+)^{n-1}\right] , & n \geq 1,\\ 1 - \gamma_+ - \gamma_-, & n=0,\\ \gamma_-\left[q_-(1-q_-)^{-n-1}\right] , & n \leq -1, \end{cases} \end{align} where $\gamma_\pm \ge 0$, $q_\pm\in (0,1)$, $\gamma_++\gamma_-\le 1$, and $\gamma_+/ q_+ = \gamma_- / q_-$ so that $\mu \in {\cal M}_0(\mathbb{Z})$.
The randomized Markovian stopping time that embeds $\mu$ given by \eqref{eq:DistributionExamplegeometric} can be derived analytically. Indeed, according to the algorithm given in Section \ref{subse:ConstructionRandPI}, the probability of a coin tossed at $S_t=i$ turning up tails is \begin{align*}
r^i =
\begin{cases}
q_+^2/[(1-q_+)^2 + 1], & i \geq 1,\\
(1 - \gamma_+ - \gamma_-)/[1 - \gamma_+ - \gamma_- + 2(\gamma_+/q_+)], &i = 0, \\
q_-^2/[(1 - q_-)^2 + 1], & i \leq -1.
\end{cases} \end{align*} The randomized Az\'ema-Yor stopping time that embeds $\mu$ given by \eqref{eq:DistributionExamplegeometric} can also be derived analytically. Because the formulae for $x^n_k$'s and $\rho^n_k$'s are tedious, we chose not to present them here. Instead, we illustrate the two embeddings in Figure \ref{fi:PIAYMG} by setting $q_+ = \gamma_+ = 5/12$ and $q_- = \gamma_- = 13/24$. As in the previous example, the randomized Az\'ema-Yor stopping time involves less randomization than the randomized Markovian stopping time at the cost of being path-dependent.
\begin{figure}
\caption{Randomized Markovian stopping time (left-panel) and randomized Az\'ema-Yor stopping (right panel) embedding probability measure \eqref{eq:DistributionExamplegeometric} with $q_+ = \gamma_+ = 5/12$ and $q_- = \gamma_- = 13/24$ into the random walk $\{S_t\}$. Black nodes stand for ``stop", white nodes stand for ``continue", and grey nodes stand for the cases in which a random coin is tossed and one stops if and only if the coin turns tails. Each node is marked on the right by a pair $(t,x)$ representing time $t$ and $S_t=x$. Each grey node is marked on the top by a number showing the probability that the random coin tossed at that node turns up tails.}
\label{fi:PIAYMG}
\end{figure}
\end{document} | arXiv |
Treatment of missing data in Bayesian network structure learning: an application to linked biomedical and social survey data
Xuejia Ke1,2,
Katherine Keenan2 &
V. Anne Smith1
Availability of linked biomedical and social science data has risen dramatically in past decades, facilitating holistic and systems-based analyses. Among these, Bayesian networks have great potential to tackle complex interdisciplinary problems, because they can easily model inter-relations between variables. They work by encoding conditional independence relationships discovered via advanced inference algorithms. One challenge is dealing with missing data, ubiquitous in survey or biomedical datasets. Missing data is rarely addressed in an advanced way in Bayesian networks; the most common approach is to discard all samples containing missing measurements. This can lead to biased estimates. Here, we examine how Bayesian network structure learning can incorporate missing data.
We use a simulation approach to compare a commonly used method in frequentist statistics, multiple imputation by chained equations (MICE), with one specific for Bayesian network learning, structural expectation-maximization (SEM). We simulate multiple incomplete categorical (discrete) data sets with different missingness mechanisms, variable numbers, data amount, and missingness proportions. We evaluate performance of MICE and SEM in capturing network structure. We then apply SEM combined with community analysis to a real-world dataset of linked biomedical and social data to investigate associations between socio-demographic factors and multiple chronic conditions in the US elderly population.
We find that applying either method (MICE or SEM) provides better structure recovery than doing nothing, and SEM in general outperforms MICE. This finding is robust across missingness mechanisms, variable numbers, data amount and missingness proportions. We also find that imputed data from SEM is more accurate than from MICE. Our real-world application recovers known inter-relationships among socio-demographic factors and common multimorbidities. This network analysis also highlights potential areas of investigation, such as links between cancer and cognitive impairment and disconnect between self-assessed memory decline and standard cognitive impairment measurement.
Our simulation results suggest taking advantage of the additional information provided by network structure during SEM improves the performance of Bayesian networks; this might be especially useful for social science and other interdisciplinary analyses. Our case study show that comorbidities of different diseases interact with each other and are closely associated with socio-demographic factors.
Bayesian networks (BNs), first proposed by Pearl [1], are a flexible statistical tool for encoding probabilistic relationships with directed acyclic graphs (DAGs) [2]. BNs have a wide range of applications, including developing expert systems for predicting diseases [3], disclosing diffusion of messages in social networks [4], reconstructing gene regulatory networks [5], and inferring neuronal networks [6] and ecological networks [7]. However, BNs are still only rarely applied to population health and social science questions. Relatedly, use of survey data for BN structure learning is limited.
Schematic diagram of Multiple Imputation by Chained Equations approach. For a given incomplete dataset, MICE firstly imputes all missing values via univariate imputation methods. Then it removes the imputed values from variables one by one and creates a model by using the other complete samples. After that, it imputes missingness in each variable in turn using the created model and the remaining variables. These steps are repeated until the data is completed. It then subtracts this new completed data from the initial imputed values to get a difference matrix. The new completed data then becomes the starting point for the next iteration. The whole process is iterated until a pre-defined threshold on the difference between initial imputed and new completed data is met
Schematic diagram of Structural Expectation-Maximization algorithm. SEM has two components: E-step and M-step. It considers a BN structure for the incomplete data at the very beginning. Then it applies the iterative two steps, alternating E-step and M-step. E-step estimates the values of missing data by computing the expected statistics using the current network structure. The M-step maximizes the scoring function and updates the resulting network structure. These two steps are repeated until convergence is met
Compared with other fields of study, for instance, experimental biological systems, missing data are more pervasive in observational and survey data. There are plentiful causes, including item missingness, e.g., unanswered questions in questionnaires, data entry errors, or subject missingness, e.g., patients dropping out in longitudinal research, or missing samples. Missing data not only reduce overall statistical power and precision, but can lead to biased inferences in subsequent data analysis [8]. Taking a popular method of listwise deletion (e.g., undertaking analysis only on those complete cases without any missing data) as an example, its statistical power and precision would be inevitably reduced because of the decreased sample size.
Based on the different processes leading to the missingness, every missing data pattern can be generally classified into three categories - missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) [9]. This nomenclature is widely used in statistical data analysis and is also referred to as the missing data mechanisms. MCAR occurs if the missingness is unrelated to both unobserved and observed variables. Data are said to be MAR if the missingness is related to observed variables but not to any unobserved variables given the observed ones. MNAR is the most complicated because its missingness relates to both unobserved and observed variables [9]. These three patterns cause different levels of risks of bias in data analysis. For instance, listwise deletion analysis in MAR and MNAR data would yield more biased estimates than MCAR [10].
Multiple imputation by chained equations (MICE) is a popular multiple imputation method used in biomedical, epidemiological and social science fields. It is designed to impute missing data values under the missing data assumption MAR [11, 12]. Compared to single imputation, multiple imputation methods are less biased because they take account of the uncertainty of the missing data by combining multiple predictions for each missing value. MICE uses a divide and conquer approach to replace missing values for all variables in the data set: it focuses on one variable at a time and makes use of other variables to predict the missing values in that focused variable. Figure 1 illustrates how MICE imputes missing values for a given incomplete data set. Firstly, it imputes all values by using univariate imputation methods (e.g., replace missing values by the median of a single variable) to create a starting point. Then it removes the imputed values from each variable in turn and creates a model (e.g., a linear regression model) using the complete samples. This model may or may not include all variables in the dataset. After that, it imputes the values in each variable using this model and other values in the remaining variables. These steps are repeated until the data is completed. Then it subtracts this completed data from the starting point to get a difference matrix. To make this difference close to 0, the whole process is iterated, using the just completed data as a new starting point, until a pre-defined threshold on the difference between the starting point and new completed data is met. Depending on the features of the focused variable, MICE employs different multivariate regression models to predict the missing values (e.g., logistic regression for binary dependant variables). In epidemiology and clinical research, multiple imputation can enhance reliability of inferences based on data with values missing at random (MAR); however, the same procedures are not suitable for MNAR data, and thus further work is required to address MNAR data in a multiple imputation framework [8].
Learning BN structure from incomplete data is quite challenging. Depending on the missing data mechanisms (e.g., MNAR or MAR), learning would be biased if we simply delete incomplete observations. However, while BNs can theoretically consider completion of the dataset, to do so for all missing values in all possible configurations would increase computational time infeasibly (exponential increase per missing data point) [13].
The structural expectation-maximization (SEM) algorithm makes BN structure learning from incomplete data computationally feasible by changing its search space to be over structures rather than parameters and structures. SEM iteratively completes the data, then applies the standard structure learning procedures to the completed data [13]. Similar to the standard EM algorithm [14], SEM involves two steps - expectation (E-step) and maximization (M-step). Figure 2 shows the basic principle of SEM algorithm. Firstly, it considers a BN structure (e.g., an empty one) for the incomplete data. Then it applies the iterative two-step, alternating E-step and M-step. The E-step estimates the values of missing data by computing the expected statistics using the current network structure. The M-step maximizes the scoring function and updates the resulting network structure. This continues until convergence is met [15]. The framework of SEM was first proposed by Friedman [16]. His simulation results suggest that although there is a degradation of learning performance with an increased percentage of missing data, SEM shows promise for handling data involving missing values and hidden variables [16]. Friedman [15] later improved his work so that SEM is not limited to using scoring matrices like minimal description length (MDL) or Bayesian Information Criterion (BIC) that only compute the approximations to Bayesian posterior probability, enabling direct optimizations of the Bayesian posterior probability that incorporates prior information (e.g., Dirichlet priors) over network parameters into the learning procedures.
In this study, we evaluate methods for addressing incomplete data using a simulation framework. Simulation provides a vital mechanism for understanding and evaluating the performance of approaches before applying them to real-world cases. Here we simulate multiple incomplete categorical data sets, including three different missing data mechanisms, various number of variables and amounts of missing data. We concentrate here on categorical, or discrete, data due to its ubiquity in population health and social science data (e.g., categorical survey responses, presence or absence of disease). We then evaluate and compare the performance of MICE and SEM with each other and with the standard expedient of using only samples without missing data, by comparing their resulting network structures with the original network structure.
We then apply the best working method (SEM, see Results) to a real-world health and social survey dataset to investigate concurrent chronic diseases in the US elderly population. Multimorbidity (the concurrence of two or more chronic diseases in an individual) places an enormous burden on individuals and health systems, and is expected to grow more in importance as populations age [17,18,19]. Researchers have used a variety of methods to unpick the complexity of combinations of diseases, and identify clusters and risk factors [20, 21]. Among these, BNs have great potential to tackle such complex problems and can help us understand multimorbidity as a complex system of biosocial disadvantage. In our network analysis, we investigate the interactions between presence and treatment of several chronic diseases, cognition, and their associations with health behaviours and other factors including race, gender and socioeconomic status.
Overview of our simulation
Figure 3 shows an overview of our simulation approach. We compare the performance of MICE and SEM on incomplete categorical (discrete) data, and both against doing nothing (e.g., using only complete cases). The main steps are as follows:
1. Generate a random graph. This random graph is also referred to as the original structure in the final step for comparison.
2. Sample data points from the random graph to get the complete data.
3. Introduce missing values to the complete data.
4. Learn the Bayesian network structure, either: (a) from all complete cases, (b) from the data set completed via MICE, or (c) using SEM.
5. Compare learned Bayesian network structures with the original structure.
Flowchart of our simulation approach
We analysed networks with numbers of variables ranging from 2 to 20. For each number of variables, we analysed a range of missing proportions from 0.1 to 0.6 at intervals of 0.1. Each variable number/missing proportion was repeated 100 times. We completed the whole analysis for each of 1000, 5000 and 10,000 sampled data points.
Simulated data
Random networks and sampled data
We first generated a randomly connected network structure with the specified number of nodes (variables) using method Ide's and Cozman's Generating Multi-connected DAGs (ic-dag) algorithm in the function random.graph from R package bnlearn [22]. We set maximum in-degree for any node at 3, and each node had 3 discrete levels. Various descriptive statistics of these random network structures are shown in Additional file 1; the networks had expected changes: increasing out-degrees, reduced density and clustering, and increased diameter with larger networks. We obtained conditional probability tables (CPTs) for each node by generating random vectors from the Dirichlet distribution using function rdirichlet from R package MCMpack [23]. The parameter \(\alpha\) of Dirichlet distribution was 0.5 for nodes with parents and 5 for nodes without parents. This provided our random parameterised BN. We then randomly sampled 1000, 5000 or 10,000 data points from the parameterised BN to get our sampled data using the function rbn from R package bnlearn [22].
For each missing data mechanism, we introduced different amounts of missing data to the sampled data using the function ampute from R package mice [24]. This function requires a complete data set and specified missing patterns (i.e., the variable or variables that are missing in a given sample). We used the default missing pattern matrix for all simulations, in which the number of missing patterns is equal to the number of variables, and one for each variable is missing. We also used the default relative frequency vector for the missing patterns, so that each missing pattern has the same probability to occur. Thus, the probability of being missing is equal across variables. The data is split into subsets, one for each missing pattern. Based on the probabilities of missingness, each case in each subset can be either complete or incomplete. Finally, the subsets are merged to generate the required incomplete data. The allocated probability for each value to be removed in each subset depends on the specified missing proportion and missing data mechanism [25]:
MCAR The missingness is generated by chance. Each value in the sampled data has the same probability to be incomplete and such probability is computed once the missing proportion is specified [25].
MAR The probability of each value being incomplete is dependent on a weighted sum score calculated from values of other variables. We used the default weights matrix in our simulation, in which all variables except the missing one contribute to the weighted sum score [25].
MNAR Simulating MAR and MNAR data share most procedures during amputation. The only difference is that it is the value of the potential missing value that contributes to the probability of its own missingness [25].
Bayesian network structure learning
During the whole study, we used the same BN structure learning procedures to learn from data either before processing or after. That is, procedures were all the same for methods "None", "MICE" and "SEM" in Fig. 3: we used a score and search algorithm, using the BDe score [2] and the tabu search algorithm for searching the best network structure [26]. The imaginary sample size used by BDe was set equal to 1 (default value). A test for the impact of scoring function was performed by also assessing structures learned using the BIC and BDs scores for one dataset configuration (MNAR data, 1000 data points, 0.3 missingness; BDs imaginary sample size set to 1 as default; BIC also used default value for penalty coefficient: log(number data points)*0.5). For "None" and "MICE", we applied the tabu function from R package bnlearn [22]; for SEM the search was incorporated into the iterative steps as described below.
No imputation
We used the complete cases of simulated incomplete data for BN structure learning.
Structural EM
We applied the SEM algorithm to the incomplete data using the function structural.em from R package bnlearn [22]. We used the default imputation method ("parents") in the E-step, which imputes missing data values based on their parents in the current network. We applied tabu search and BDe scoring matrix for structure learning and the default method Maximum Likelihood parameter estimation (mle) for parameter learning in the M-step. The maximum number of iterations was 5 as default.
Multiple Imputation by Chained Equations
As all the variables in this study were categorical and unordered, we used the polytomous logistic regression model for prediction using the function mice from R package mice [24]. The number of iterations was 5 as default.
A toy example comparison across four skeleton networks (from left to right): original network, None (complete cases), SEM, and MICE. The original networks work as the reference network for comparison. Blue arcs indicate the arcs that are missed by methods but exist in the original network (\(False\;Negative\)). Red arcs represent the arcs that are additionally found by methods but not in the original network (\(False\;Positive\)). Bold arcs are found by methods that are also in the original network (\(True\;Positive\))
Evaluation of recovered network structures
To compare the learned BN structures with the original ones, we compared their skeletons using functions compare and skeleton from R package bnlearn [22]. We compared skeletons, which represent all links in the network as undirected links, to deal with variation of link direction due to different equivalence classes. We explored comparison of equivalence classes, but a single missing/extra link could significantly change equivalence class, giving erroneous results for those dependencies accurately recovered. For example, a link which was directed in the equivalence class of the simulated network could, due to a missing link elsewhere, be undirected in the equivalence class of the recovered network; this would result in not only recording one missing link but also an additional, incorrect, extra link. Comparison of the undirected skeletons resolved this issue. We measured the performance of each method by computing the precision and recall (sensitivity) based on their comparison results. Precision measures the level of a method making mistakes by adding false arcs to the network, while recall evaluates the sensitivity of a method to capturing positive arcs from the targets. Their equations are as follows:
$$\begin{aligned} Precision = \frac{True\;Positive}{True\;Positive+False\;Positive} \end{aligned}$$
$$\begin{aligned} Recall = \frac{True\;Positive}{True\;Positive + False\;Negative} \end{aligned}$$
where \(True\;Positive\) represents finding arcs present in the original structure, \(False\;Positive\) represents finding arcs that are not in the original structure, and \(False\;Negative\) represents lack of an arc that is present in the original structure (Fig. 4).
We divided the number of variables into 6 groups for analysis: having number of variables 2-5, 6-8, 9-11, 12-14, 15-17 and 18-20. For each group with each missing proportion in each sampled data amount, we performed a one-way ANOVA to test whether there were any statistically significant differences between the means of the three methods. We applied a Bonferroni correction to correct the resulting p-values in these multiple comparisons. If there were significant Bonferonni-corrected results (p < 0.05) in a variable group/missing proportion combination, we performed the honestly significant difference (Tukey's HSD) test on the pairwise comparisons between the three methods. For both precision and recall, the same procedures were applied.
Evaluation of imputed data values
We explored the accuracy of MICE's and SEM's imputation, using a subset of the simulations. We extracted the completed datasets from the last iteration of SEM and MICE for each missing mechansim (MCAR, MAR, MNAR) for 1000 data points at missing proportion 0.3, using 10 datasets each of 10 and 20 variables. We calculated the Hamming distance between the imputed datasets from the original (no missing values) simulated dataset. We performed Student's t-test to test whether there were any statistically significant differences between the means of the Hamming distance of imputed versus original data of the two methods.
Real-world data application
We use self-reported and nurse-collected data from the United States Health and Retirement Study (HRS) [27,28,29], a representative study of adults aged 50 and older. We merged the interview data (N = 42233) [27] collected in 2016, the harmonised data (N = 42233) [29] and the laboratory data (N = 7399) [28] that were collected in the same year. As we are focusing on imputation methods, we set any provided imputed values to missing (i.e., to use our method). To ensure a representative sample of older respondents, and due to the focus on multimorbidity, we excluded those aged below 50 (N = 279). To ensure biomarker and survey data were collected concurrently, we excluded respondents whose interviews were finished in 2017 and 2018 (N = 1394). Our analysis dataset consisted of 29 categorical variables each with two to four levels. Supplementary Table 1 in Additional file 1 shows the detailed description of each variable. This cleaned subset contained 5726 observations, in which only 2688 cases were complete (corresponding to a missingness proportion of 0.53).
We applied the best-working method, SEM (see Results), to this real-world data. Because SEM includes random elements in the algorithm, we averaged across multiple repeats to capture the most complete picture of relationships among real-world variables. To accomplish this, we set different random seeds using the base function set.seed in the R environment, before applying the function structural.em from R package bnlearn [22] (using tabu search and BDe scoring metrics in the M-step, as above). In this way, we learned 100 network structures using the SEM algorithm from the whole incomplete data set. We determined the average network across the 100 repetitions based on an arc strength of each learned structure, calculated from the completed partially directed acyclic graph using the function arc.strength also from bnlearn. As the resulting arc strengths were strongly bimodal (see Results), we included in a final average network all links in the higher mode. While the resulting networks were partially directed, we show as results the skeletons – all links as undirected – because we do not wish to imply causal relationships between these measured variables; we are presenting statistical associations only.
We then further explored relationships among real-world variables based on the network structure by applying hierarchical divisive clustering from the R package igraph [30] to detect the densely connected variables in the learned average network. This identifies community groups consisting of nodes that are densely connected together but sparsely connected to others based on the edge betweenness of the edges without considering the directions.
Performance on MCAR data with 1000 data points. Precision (A) and recall (B) of three different methods of handling incomplete data: none, multiple imputation by chained equations (MICE) and structural expectation-maximization (SEM). Rows represent different missing proportions and columns indicate different groups of number of variables. Barplots show means with error bars representing standard error of the mean. Adjusted p-values for ANOVAs are displayed in those panels that are significant at least the 0.05 level. Lines representing significant Tukey's HSD pairwise tests are shown and annotated as: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001
Performance on MAR data with 1000 data points. Precision (A) and recall (B) of three different methods of handling incomplete data: none, multiple imputation by chained equations (MICE) and structural expectation-maximization (SEM). Rows represent different missing proportions and columns indicate different groups of number of variables. Barplots show means with error bars representing standard error of the mean. Adjusted p-values for ANOVAs are displayed in those panels that are significant at least the 0.05 level. Lines representing significant Tukey's HSD pairwise tests are shown and annotated as: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001
Performance on MNAR data with 1000 data points. Precision (A) and recall (B) of three different methods of handling incomplete data: none, multiple imputation by chained equations (MICE) and structural expectation-maximization (SEM). Rows represent different missing proportions and columns indicate different groups of number of variables. Barplots show means with error bars representing standard error of the mean. Adjusted p-values for ANOVAs are displayed in those panels that are significant at least the 0.05 level. Lines representing significant Tukey's HSD pairwise tests are shown and annotated as: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001
Distribution of the difference in means of recall of three pairwise comparisons among three methods when there are 1000 data points: MICE's increase over doing nothing (red), SEM's increase over nothing (blue), and SEM's increase over MICE (green) A. MCAR data. B. MAR data. C. MNAR data. The y-axis represents the difference of the mean recall (averaged over the 100 simulations). The x-axis represents the number of variables from 2-20. Column panels represent missing proportions
Recovered network structures
A total of 1026 scenarios and 102,600 data sets were analysed.
Results of all three missingness mechanisms shared similar features among three levels of sampled data points. Detailed results are shown in Fig. 5 for MCAR, Fig. 6 for MAR, and Fig. 7 for MNAR with 1000 data points. In general, there was enhanced performance of methods of addressing missing data over doing nothing, and better performance of SEM over MICE. There were more significant differences looking at recall than precision. There were more significant differences with increasing proportion of missingness and number of variables. This observation was consistent when there were 5000 and 10,000 data points, although the out-performance of SEM over MICE decreased with 5000 data points and was even less obvious with 10,000 data points. Detailed results for 5000 and 10,000 data points are shown in Additional file 1.
In addition to the pairwise comparisons between the three methods regarding precision and recall, we also compared the performance of each method across the three missing data mechanisms (MCAR, MAR and MNAR) for each level of data points. However, our results did not show any significant differences in performance across the mechanisms.
We summarise patterns of recall across the simulation experiments in Fig. 8 when there are 1000 data points. This demonstrates substantial improvements in performance when using either method (compared to doing nothing), which start to emerge consistently at a 0.3 level of missingness, and increase as levels of missingness and number of variables increases. Generally, SEM outperforms MICE, but the difference does not appear to be conditioned by levels of missingness or missing data mechanism. There is an increase in SEM's outperformance through low numbers of variables, and then appears to reach an asymptote above 5 or 6 variables. This pattern was also observed when there were 5000 and 10,000 data points (see Additional file 1). However, their scale of observed difference was much smaller than with 1000 data points (differences around 0.01-0.02 compared to 0.1-0.2).
The same general pattern of SEM outperforming MICE, and both imputation methods outperforming doing nothing, also held with the test using the BIC and BDs scores (see Additional file 1).
Imputed data
We further compared the performance of MICE and SEM in terms of missing data completion, using 1000 data points with a 0.3 level of missingness. The data completed by SEM in the last iteration is more similar to the original simulated data than MICE (Fig. 9). SEM has a significantly better performance than MICE in data imputation and this finding is consistent when there are 10 variables and 20 variables and across all three missing mechanisms, with p < 0.0001 for all comparisons.
Comparison of the mean Hamming distance of MICE and SEM imputed data from the simulated data at 0.3 level of missingness with 1000 data points, using 10 datasets each of condition. Barplots show means with error bars representing standard error of the mean. Rows represent different numbers of variables and columns indicate different missing mechanisms. Lines representing significant Student's t-tests are shown and annotated as: ****, p < 0.0001
Figure 10 displays an overview of the levels of missingness in the cleaned HRS data set. Most have less than 5% of missing values; a few have \(\sim\)10% or greater, with the highest value being 33.1% missing for household income hhincome. There is a large amount of missing patterns that are different combinations of various variables. Only a few variables are missing individually.
The arc strengths averaged over the 100 repetitions of SEM applied to this data were strongly bimodal, with individual links having strength 0.87-1.0 (representing presence in 87-100% of the networks) or 0.05 or less. Thus, we generated a final averaged network with arc strengths of 0.87 or greater (Fig. 11).
Five community groups were identified within this network structure (nodes of each community are coloured the same in Fig. 11). Common cardiovascular conditions, such as heart disease, stroke and high blood pressure (HBP), are clustered with total cholesterol level and treatment for those conditions. Diabetes, HbA1c level and diabetes treatment are clustered. Another cluster contains arthritis, self-assessed memory decline and BMI level. Diabetes is directly linked to HBP, HbA1c and BMI levels. The other two clusters contain a mixture of diseases and social factors. Cognitive impairment (TICS-M) is clustered with cancer, lung disease, smoking and race. It is also directly linked to education whereas education clusters with high-density lipoprotein (HDL), drinking, exercise, gender, cohabitation and household income. We find expected links between health behaviours and chronic conditions, e.g., smoking and lung disease. Biomarkers are directly linked to socio-demographic and socio-economic factors, e.g., alcohol use is directly linked to HDL cholesterol level and gender. We also find some unexpected links and clusters: arthritis is directly linked to lung disease, and cancer treatment is directly linked to individual income.
The main aim of this work was to quantitatively evaluate and compare the performance of a common form of imputation (MICE) and SEM on learning BN structures from incomplete data, such as is commonly found in observational health and social datasets. According to our simulation results, as might be expected, both MICE and SEM performed better than no imputation. In addition, significant improvements in recall and precision were observed with SEM versus MICE. This disparity might be explained given that SEM is using additional information, i.e. the structure of the network, to deal with missing data, whereas MICE relies only on the multivariate associations between variables.
We note that SEM performs comparatively well under the MNAR mechanism. This is significant because MNAR is a complex problem to which there is no obvious solution. In MNAR data, a particular value's missingness rate depends on the real value itself and some unobserved predictors. Although it is theoretically achievable to calculate the missing data rate given the correct set of explanatory factors, in practice it is very hard to find out the combinations of factors that influence the missing rate [31]. Taking an example of blood glucose measurements, people suffering from hyperglycemia will be more likely to drop out of clinical surveys because they feel unwell. However, this assumption is unverifiable using the observed data, and in practice we cannot distinguish between MAR and MNAR data [31]. Multiple imputation methods would therefore generate biased results if we apply them on MNAR data, and the issue can only be addressed by sensitivity analysis to evaluate the difference under different assumptions about the missing data mechanism [31]. In the case of BN structure learning, our results suggest that SEM may be a principled approach to deal with MNAR data. However, this finding should be validated by conducting further experiments under varying MNAR conditions.
The validity of multiple imputation methods also depends on the choices of statistical approaches in analysing the sampled complete data sets and the resulting distribution of estimates for each missing value [8]. More sophisticated approaches are required if the mechanism MNAR appears in different types of variables. Galimard and colleagues [32] recently proposed a new imputation model based on Heckman's model [33, 34] to address the issue caused by MNAR binary or continuous outcome variables. They then integrated this model into MICE for managing MAR predictors at the same time. We can use function mice.impute.hecknorm from R package miceMNAR [32] to impute incomplete data with MNAR outcome variables and MAR predictors. Although it has been proposed that applying imputation methods on multivariate data before learning BNs can be problematic [32, 35], this novel method might be helpful for the further development of BN structure learning from incomplete data.
While SEM did consistently perform statistically significantly better than MICE, we point out that the differences were relatively small (on the order of <5% for both precision and recall). The overwhelming signal in our results is that imputation is far superior to using only complete cases (e.g., see Fig. 8). SEM can be more computationally intensive than MICE, particularly with higher missing proportion, thus there could be a trade-off between accuracy and computation time. However, these computational times are relatively small (seconds–minutes), thus we still recommend using the better performing SEM.
We showed the usefulness of SEM by applying it to real-world linked biomedical and survey data on chronic diseases, in a dataset which had a high level of missingness. The network we recover from real-world data highlights pivotal interactions among several chronic diseases, health behaviours and social risk factors [20]. As seen in other studies we observe clustering of cardiovascular diseases [36] and metabolic conditions, and treatments for them (e.g. diabetes). Known risk factors of HBP, BMI and smoking either directly or indirectly link to these conditions, although HBP stands apart as being directly linked to diabetes, stroke and heart disease. The connections between cognitive impairment, education and race have been previously observed in the US context [37]. Our analysis also highlights potential areas of investigation. Cognitive impairment is closely associated with cancer, but stands alone from self-assessed memory decline. Cancer treatment is directly linked to individual income, suggesting socioeconomic disparities in cancer treatment, and/or differential survival patterns by income.
Our simulation study showed better performance of SEM, and our real-world case study was able to reveal features of interest from a dataset with high levels of missingness. As in most simulation studies, the main drawback in our simulation is that simulated data sampled from random network is not guaranteed to reflect real data. Our simulation data has two main limitations. First, our simulation used all categorical variables and an even distribution of missing values among variables, which is not very plausible in real-world social science data. For example, some survey questions (e.g., income) will suffer higher levels of missingness due to refusal than other less sensitive ones (e.g., gender). These features probably help to reduce the difference between missing data mechanisms, especially the difference in data with MNAR. This perhaps could also help to explain why there were no significant differences across three missing data mechanisms in our simulation results, particularly with MICE method. Thus, future extensions of this work should incorporate more realistic simulations of mixtures of variable types and uneven missingness patterns. Second, our simulation study deals with cross-sectional, non hierarchical data, and in real social science data observations are often clustered or contain repeat measures from individuals. This can lead to a different, complex and important form of missingness – survey attrition. In future work, we could investigate the application of SEM using more complicated real-world data, using more complex missing patterns (e.g., longitudinal data).
Distribution of missing values in the real-world data set. (A) Proportion of missing values in each variable (named as in Supplementary Table 1 of Additional file 1), shown as a bar chart. (B) Missing patterns, shown as a heatmap with proportions to the right of the plot. Rows represent a single missing pattern ('Combinations') and columns variables, with the variable missing in a given pattern coloured green (blue otherwise). The proportion of each missing pattern is shown as a horizontal bar chart to the right of the heatmap (summing to 0.53 for missing patterns). The very bottom row represents the pattern with no missing values, with its proportion bar in blue with value 0.47
The average network learned from SEM. Nodes are labelled with variable names as found in Supplementary Table 1 of Additional file 1. Nodes are coloured to represent the different groups as discovered by community analysis on the network structure
Our simulation results indicate that both SEM and MICE improve the completeness of BN structures learned from partially observed data. In most circumstances, especially when there are relatively high number of variables and missing values, SEM performs better than MICE. This suggests that making use of extra information from the BN structure within SEM iterations could enhance its capability of capturing the real network structure from incomplete data. In our real-world data application, SEM identified expected interactions between common chronic diseases, and provided additional insights about the links between socio-demographic, socio-economic factors and chronic conditions. Our study suggests that BN researchers working with incomplete biomedical and social survey data should use SEM to deal with missing data.
The data that support the findings of this study are publicly available from the University of Michigan Health and Retirement Study (HRS; https://hrsdata.isr.umich.edu/), based on relevant data sharing policy.
MICE:
SEM:
Structural expectation-maximization
BNs:
DAGs:
Directed acyclic graphs
MCAR:
Missing completely at random
Missing at random
MNAR:
Missing not at random
CPTs:
Conditional probability tables
MDL:
Minimal description length
BDe:
Bayesian Dirichlet equivalent
Bayesian Information Criterion
BDs:
Bayesian Dirichet sparse
HRS:
Health and Retirement Study
TICS-M:
Telephone interview for cognitive status measurement
HDL:
High-density lipoprotein
HBP:
Pearl J. Probabilistic reasoning in intelligent systems: networks of plausible inference. Burlington: Morgan Kaufmann; 1988.
Heckerman D, Geiger D, Chickering DM. Learning Bayesian networks: The combination of knowledge and statistical data. Mach Learn. 1995;20:197–243.
Lin JH, Haug PJ. Exploiting missing clinical data in Bayesian network modeling for predicting medical problems. J Biomed Inform. 2008;41:1–14.
Varshney D, Kumar S, Gupta V. Predicting information diffusion probabilities in social networks: A Bayesian networks based approach. Knowl-based Syst. 2017;133:66–76.
Werhli AV, Husmeier D. Reconstructing gene regulatory networks with Bayesian networks by combining expression data with multiple sources of prior knowledge. Stat Appl Genet Mol Biol. 2007;6:15.
Smith VA, Yu J, Smulders TV, Hartemink AJ, Jarvis ED. Computational inference of neural information flow networks. PLoS Comput Biol. 2006;2:e161.
Milns I, Beale CM, Smith VA. Revealing ecological networks using Bayesian network inference algorithms. Ecology. 2010;91:1892–9.
Sterne JA, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. Brit Med J. 2009;338:b2393.
Rubin DB. Inference and missing data. Biometrika. 1976;63:581–92.
Schafer JL, Graham JW. Missing data: our view of the state of the art. Psychol Methods. 2002;7:147–77.
Raghunathan TE, Lepkowski J, Van Hoewyk JH, Solenberger PW. A multivariate technique for multiply imputing missing values using a sequence of regression models. Surv Methodol. 2001;27:85–95.
Azur MJ, Stuart EA, Frangakis C, Leaf PJ. Multiple imputation by chained equations: what is it and how does it work? Int J Meth Psychiatr Res. 2011;20:40–9.
Scutari M. Bayesian network models for incomplete and dynamic data. Stat Neerl. 2020;74:397–419.
Lauritzen SL. The EM algorithm for graphical association models with missing data. Comput Stat Data Anal. 1995;19:191–201.
Friedman N. The Bayesian Structural EM Algorithm. In: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. UAI'98. San Francisco: Morgan Kaufmann; 1998. p. 129–38.
Friedman N. Learning belief networks in the presence of missing values and hidden variables. In: Fourteenth International Conference on Machine Learning (ICML). San Francisco: Morgan Kaufmann; 997. p. 125–33.
Uijen AA, van de Lisdonk EH. Multimorbidity in primary care: prevalence and trend over the last 20 years. Eur J Gen Pract. 2008;14:28–32.
Johnston MC, Crilly M, Black C, Prescott GJ, Mercer SW. Defining and measuring multimorbidity: a systematic review of systematic reviews. Eur J Public Health. 2019;29:182–9.
Kingston A, Robinson L, Booth H, Knapp M, Jagger C, project M. Projections of multi-morbidity in the older population in England to 2035: estimates from the Population Ageing and Care Simulation (PACSim) model. Age Ageing. 2018;47:374–80.
Prados-Torres A, Calderón-Larrañaga A, Hancco-Saavedra J, Poblador-Plou B, van den Akker M. Multimorbidity patterns: a systematic review. J Clin Epidemiol. 2014;67:254–66.
Cezard G, McHale CT, Sullivan F, Bowles JKF, Keenan K. Studying trajectories of multimorbidity: a systematic scoping review of longitudinal approaches and evidence. BMJ Open. 2021;11:e048485.
Scutari M. Learning Bayesian Networks with the bnlearn R Package. J Stat Softw. 2010;35:1–22.
Martin AD, Quinn KM, Park JH. MCMCpack: Markov Chain Monte Carlo in R. J Stat Softw. 2011;42:22.
van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate Imputation by Chained Equations in R. J Stat Softw. 2011;45:1–67.
Schouten R, Lugtig P, Brand J, Vink G. Generating missing values for simulation purposes: A multivariate amputation procedure. J Stat Comput Simul. 2018;88(15):1909–30.
Glover F. Tabu Search-Part I INFORMS J Comput. 1989;1:190–206.
Health and Retirement Study, (RAND HRS Longitudinal File 2018 (V1)) public use dataset. Produced and distributed by the University of Michigan with funding from the National Institute on Aging (grant number NIA U01AG009740). Ann Arbor; 2021.
Health and Retirement Study, (2016 Biomarker Data (Early, Version 1.0)) public use dataset. Produced and distributed by the University of Michigan with funding from the National Institute on Aging (grant number NIA U01AG009740). Ann Arbor; 2020.
Health and Retirement Study, (Harmonized HRS (VERSION C)) public use dataset. Produced and distributed by the University of Michigan with funding from the National Institute on Aging (grant number NIA U01AG009740). Ann Arbor; 2022.
Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal 2006;Compex Systems:1695.
Molenberghs G, Fitzmaurice GM, Kenward KG, Tsiatis AA, Verbeke G. Handbook of Missing Data Methodology. Chapman & Hall/CRC Handbooks of Modern Statistical Methods; 2014.
Galimard JE, Chevret S, Curis E, Resche-Rigon M. Heckman imputation models for binary or continuous MNAR outcomes and MAR predictors. BMC Med Res Methodol. 2018;18:1–13.
Heckman JJ. The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models. In: Annals of Economic and Social measurement. vol. 5. Cambridge: National Bureau of Economic Research, Inc; 1976. p. 475–92.
Heckman JJ. Sample selection bias as a specification error. Econometrica. 1979;47:153–61.
Kalton G. The treatment of missing survey data. Surv Methodol. 1986;12:1–16.
Vetrano DL, Roso-Llorach A, Fernández S, Guisado-Clavero M, Violán C, Onder G, et al. Twelve-year clinical trajectories of multimorbidity in a population of older adults. Nat Commun. 2020;11:1–9.
Vásquez E, Botoseneanu A, Bennett JM, Shaw BA. Racial/ethnic differences in trajectories of cognitive function in older adults: Role of education, smoking, and physical activity. J Aging Health. 2016;28:1382–402.
The authors acknowledge the Research/Scientific Computing teams at The James Hutton Institute and NIAB for providing computational resources and technical support for the "UK's Crop Diversity Bioinformatics HPC" (BBSRC grant BB/S019669/1), use of which has contributed to the results reported within this paper. Access to this was provided via the University of St Andrews Bioinformatics Unit which is funded by a Wellcome Trust ISSF award (grant 105621/Z/14/Z and 204821/Z/16/Z).
XK was supported by a World-Leading PhD Scholarship from St Leonard's Postgraduate School of the University of St Andrews. VAS and KK were partially supported by HATUA, The Holistic Approach to Unravel Antibacterial Resistance in East Africa, a three-year Global Context Consortia Award (MR/S004785/1) funded by the National Institute for Health Research, Medical Research Council and the Department of Health and Social Care. KK is supported by the Academy of Medical Sciences, the Wellcome Trust, the Government Department of Business, Energy and Industrial Strategy, the British Heart Foundation Diabetes UK, and the Global Challenges Research Fund [Grant number SBF004\1093]. KK is additionally supported by the Economic and Social Research Council HIGHLIGHT CPC- Connecting Generations Centre [Grant number ES/W002116/1].
School of Biology, Sir Harold Mitchell Building, Greenside Place, KY16 9TH, St Andrews, UK
Xuejia Ke & V. Anne Smith
School of Geography and Sustainable Development, Irvine Building, North Street, KY16 8AL, St Andrews, UK
Xuejia Ke & Katherine Keenan
Xuejia Ke
Katherine Keenan
V. Anne Smith
XK performed the analyses on simulated and real data, designed the figures and wrote the initial draft of the manuscript. KK assisted with the case study data and interpretation. VAS assisted with Bayesian network analyses. VAS conceptualised the general study idea. All authors conceptualised specific questions. All authors revised and agreed on the final manuscript.
Correspondence to V. Anne Smith.
Supplementary Figs. 1-8, showing simulation results for 5000 and 10,000 data points, Supplementary Fig. 9, showing simulation results of scoring functions BIC and BDs on MNAR data with 1000 data points and 0.3 missing proportion, Supplementary Table 1, showing description of variables in the real-world dataset, Supplementary Tables 2 and 3, showing descriptive statistics of random network structures.
Ke, X., Keenan, K. & Smith, V.A. Treatment of missing data in Bayesian network structure learning: an application to linked biomedical and social survey data. BMC Med Res Methodol 22, 326 (2022). https://doi.org/10.1186/s12874-022-01781-9
Simulation study | CommonCrawl |
Subsets and Splits